GAO-20-308
Background

JOM Program Eligibility and Administration

The JOM program provides supplementary financial assistance, through contracts, to meet the unique and specialized educational needs of eligible American Indian and Alaska Native students. Eligible students, under Interior’s regulations, are generally Indian students age 3 through grade 12 who are either a member of an Indian tribe or at least one-quarter degree Indian blood descendant of a member of an Indian tribe. BIE contracts with tribal organizations, Indian corporations, school districts, and states—which we collectively refer to as JOM contractors as that is the term used by Interior—that administer local JOM programs and disburse funds to schools or other programs providing JOM services. Most JOM funds are distributed through tribal contractors, according to BIE. BIE generally relies on BIA officials to disburse JOM funds, as noted previously (see fig. 1). BIE’s director is generally responsible for directing and managing JOM functions, including establishing policies and procedures, coordinating technical assistance, and approving the disbursement of JOM funds. In 2014, BIE established one centralized position dedicated solely to administering JOM as part of a broader restructuring initiative, and the position has been consistently staffed since 2018. The current JOM program specialist is responsible for planning, developing, administering, and coordinating the JOM program. It is the federal government’s policy to fulfill its trust responsibility for educating Indian children by working with tribes to ensure that education programs are of the highest quality.
In 2016, Congress found in the Indian Trust Asset Reform Act that “through treaties, statutes, and historical relationship with Indian tribes, the United States has undertaken a unique trust responsibility to protect and support Indian tribes and Indians.” As further stated in the Act, the fiduciary responsibilities of the United States to Indians are also founded in part on specific commitments made in treaties and agreements, in exchange for which Indians surrendered claims to vast tracts of land.

JOM Program Requirements and Implementation

The JOM program is the only federally funded Indian educational program that allows for student, parent, and community involvement in identifying and meeting the educational needs of American Indian and Alaska Native students, according to the National Johnson-O’Malley Association—a tribally led organization that advocates for JOM programs. The JOM regulations require prospective contractors to formulate an education plan in consultation with an Indian Education Committee, generally made up of parents of American Indian and Alaska Native students, and to submit the plan to BIE. Indian Education Committees have the authority to, among other things, participate fully in planning, developing, implementing, and evaluating their local JOM programs. According to BIE officials, JOM funds can be used to support a wide variety of supplemental education programs. For example, these funds support programs providing Native cultural and language enrichment; academic support; dropout prevention; and the purchase of school supplies, according to BIE (see fig. 2). JOM programs, particularly for students who are not living near tribal land, may be the only way students can access tribal language and cultural programs. According to BIE officials, JOM funding is primarily disbursed to contractors through three different funding mechanisms: self-determination contracts, self-governance compacts, and 477 plans.
Most JOM contractors—over 200—are funded through self-determination contracts, according to data provided by BIE. These three funding mechanisms result in different oversight authority for Interior. However, the Johnson-O’Malley Supplemental Indian Education Program Modernization Act (Modernization Act)—enacted on December 31, 2018—requires all JOM contractors to submit annual reports to the Secretary of the Interior with the number of eligible Indian students during the previous fiscal year, an accounting of the amounts expended, and the purposes for which those amounts were expended. BIE officials said some contractors can also be subject to site visits to oversee the program.

JOM Program Funding

Under regulations, JOM funds are to be distributed to contractors by a formula that factors in the number of eligible students to be served and average per-student operating costs. Interior conducted its most recent official JOM student count in 1995. As a result, subsequent JOM distributions have been based on the number of students served by contractors in 1995—271,884 students. BIE officials said that the total number of eligible students has increased since 1995, although no official count has been completed. As a result, the funding contractors receive may not reflect changes in the number of students served by contractors. The size of JOM contracts currently ranges from less than $1,000 to nearly $4 million, according to data provided by BIE. The Modernization Act requires BIE to determine the number of eligible students served or potentially served and to complete a rulemaking process to, among other things, modernize program rules. BIE published a preliminary report on its initial determination of eligible students in October 2019 and is continuing to work on finalizing its count of eligible students. Additionally, in response to the Modernization Act, Interior promulgated new final JOM program regulations that became effective March 26, 2020.
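The student-count formula described under "JOM Program Funding" above can be sketched in a few lines. This is only an illustrative model of a count-times-cost distribution prorated to a fixed appropriation; the function name, the proration step, and the contractor figures are assumptions for illustration, not Interior's actual computation or data.

```python
def jom_allocations(student_counts, per_student_cost, appropriation):
    """Illustrative distribution: each contractor's share reflects its eligible
    student count times an average per-student operating cost, prorated so the
    total equals the available appropriation. (Hypothetical, not Interior's method.)"""
    # Raw need for each contractor: students times average per-student cost.
    raw = {c: n * per_student_cost for c, n in student_counts.items()}
    total_need = sum(raw.values())
    # Prorate so allocations sum to the appropriation actually available.
    scale = appropriation / total_need
    return {c: amount * scale for c, amount in raw.items()}

# Hypothetical contractors frozen at a 1995-style count, as the report describes.
counts_1995 = {"Tribe A": 5000, "District B": 250}
alloc = jom_allocations(counts_1995, per_student_cost=85.0, appropriation=400_000)
```

Under a formula like this, holding the counts at 1995 levels means a contractor whose enrollment has since doubled still receives its 1995-proportioned share, which is the distortion the report attributes to the lack of an updated count.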
BIE Lacks Key Information on the JOM Program Needed for Oversight

BIE Does Not Have a Complete and Accurate List of JOM Contractors

BIE does not maintain a complete and accurate list of all JOM contractors. BIE officials said JOM funds are disbursed by awarding officials in various BIA offices in different locations, and there is no systematic process to identify and collect information on all the awarded contracts. BIE began efforts to identify all the contractors and the amount of their awards in May 2019 after we asked for this information. As of December 2019, BIE officials said they had identified more than 340 contractors. BIE officials said they have not verified the accuracy and completeness of their current list of contractors. According to federal internal control standards, an agency should have relevant, reliable information to run and control its operations. BIE officials said their current list of JOM contractors is incomplete because some Interior officials responsible for administering and disbursing JOM funds did not respond to their requests for information. In addition, BIE officials said they may not have contacted all the relevant officials within Interior when they developed the list. BIE officials also said they do not know how many contractors may be missing from their list. Further, they said they did not validate the accuracy of the information they received on JOM contractors. Our analysis of BIE’s list of JOM contractors identified data reliability concerns. For example, we found 19 contractors that were listed twice, meaning the total number of contractors provided by BIE contained duplicates and was not an accurate count. BIE officials said that maintaining a complete list of contractors would be very helpful in their efforts to oversee and administer the JOM program, including allowing them to share program information more effectively with all contractors.
For example, BIE did not inform all contractors about four consultation sessions it was holding in July 2019 on a proposed rule to change JOM regulations because BIE did not have contact information for all contractors, according to a BIE official. As a result, some contractors may have missed the opportunity to participate in the consultation sessions. Two JOM school contractors we interviewed told us they were not informed by BIE about the consultation sessions that took place in their state. These contractors said they had to create their own networks of contractors to inform each other about JOM-related developments and events because they cannot rely on communication from BIE. In addition, BIE officials said that a complete and accurate list of contractors would help them determine the number of eligible JOM students, as mandated by the Modernization Act. In the two previous efforts to update the count, BIE relied on contractors to submit the number of eligible students they serve. However, BIE officials acknowledged that the last effort to complete a count in 2014 failed, in part, because some contractors never received any communication that BIE was conducting a count. As a result, these contractors never submitted a count of students. Without a systematic process for maintaining a complete and accurate list of contractors, BIE may continue to face barriers administering the program.

BIE Does Not Routinely Track the Timeliness of Payments to Contractors

BIE does not have a process for tracking and monitoring the timeliness of JOM disbursements to contractors. According to BIE officials, the bureau does not establish a target date for disbursing funds to JOM contractors. JOM contractors and BIA and BIE officials we interviewed said that JOM funds are routinely disbursed to some contractors later than expected, based on contractors’ past experience.
For example, 27 school contractors did not receive a portion of their calendar year 2018 funding until September 2019, according to the BIA official primarily responsible for disbursing the contractors their funds. Further, some of these contractors did not receive any disbursement in the 2019 calendar year until August, months after funds are typically disbursed. Delays in disbursing funds can hinder contractors’ ability to effectively manage their JOM programs and serve students. For example, the three JOM school contractors we interviewed told us that delays in disbursements have negatively affected their ability to plan their JOM activities because they do not know when they will receive their funding. The contractors also said their JOM programs are not as robust as they could be because they regularly delay spending and retain prior disbursements to use in the following year in anticipation of future delays in disbursements. Even with these carry-over funds, contractors said they have had to delay JOM programs for students due to late disbursements of funds, which negatively affect students who depend on JOM for educational support. We were unable to determine the full extent to which Interior disburses JOM funds in a timely manner because BIE and other Interior offices do not track and monitor the timeliness of JOM disbursements to contractors. Federal internal control standards state that agency management should design control activities to achieve objectives and respond to risks, such as by comparing actual performance to planned or expected results and analyzing significant differences. BIE, however, has not established target disbursement dates for contracts and therefore has no standard against which to measure the timeliness of disbursements. Furthermore, BIE does not systematically track the time between receiving its appropriation and the disbursement of contractor funds. 
BIE officials acknowledged that establishing a target date for disbursing funds to contractors and tracking progress in meeting that date could help ensure funds are provided in a timely manner. In an effort to monitor the disbursement of contractor funds, BIE officials said they have recently started to track the balance of JOM funds at each Education Resource Center. However, they acknowledged that tracking the balance of funds has limited usefulness in tracking the timeliness of disbursements because the information about fund balances does not include whether or not individual contractors have received their funds. BIE officials said having more detailed information on the disbursement of JOM funds would be helpful to ensure funds are provided in a timely manner. In addition, we recently reported that funds associated with self-determination contracts and self-governance compacts for tribes, which include JOM funds, are not always disbursed in a timely manner. We recommended that the Assistant Secretary of Indian Affairs establish a process to track and monitor the disbursement of funds to tribes that are associated with self-determination contracts and self-governance compacts. However, this recommendation does not address all JOM contractors because non-tribal contractors are not eligible for self-determination contracts or self-governance compacts, and not all tribal contractors receive JOM funds through these mechanisms. Without also establishing a process for tracking and monitoring the disbursement of JOM funds through multiple funding mechanisms, BIE does not have reasonable assurance that funds will be disbursed in a timely manner.

BIE Has Not Formally Assessed the JOM Information It Collects from Contractors or Updated Its Related Forms

BIE has not formally assessed the usefulness of the information it has collected from JOM contractors for over 25 years.
One contractor questioned whether the information was useful for the agency’s administration of the program because they never received any feedback or comments from BIE about the information they submitted. The contractor said they spent a considerable amount of time completing their annual report, which totaled over 60 pages and included information and signatures from over 40 different Indian Education Committees that oversee local JOM programs funded by the contract. In addition, all four contractors we interviewed who submitted annual reports said the information requested in the forms could be streamlined. For example, BIE’s annual report form asks each school or project site to report both the “number of eligible students actually served” and “the number of students actually served.” No instructions are provided to distinguish between the two populations, and the contractors said the reported number is identical since students must be eligible to be served by JOM. All four contractors we met with who said they submitted an annual report and renewal application also told us the information collection forms were burdensome to complete. For example, they said the forms were difficult to fill out, in part because they are not compatible with computer word processing programs, and as a result, responses have to be handwritten or completed with a typewriter. All of the forms BIE uses to collect information from contractors subject to JOM reporting requirements are also out of date. For example, the JOM renewal application form expired in 1993, meaning the Office of Management and Budget’s (OMB) approval to collect the information has lapsed. Agencies are required to submit all proposed information collections to OMB for approval. OMB reviews the proposal to assess the need for collecting the information and whether its collection minimizes burden on the public, among other things.
Federal internal control standards also state that management should have a process to continually identify information requirements. In a 2015 presentation, BIE officials recognized the need to update the outdated forms to reflect technological developments and reduce the paperwork burden for contractors, but no revisions to the forms have been made. BIE officials said they plan to update the JOM application and reporting documents through the formal OMB review and approval process, but they do not have a timeline for doing so. We have previously reported that outdated forms may not be necessary or useful and may be an unnecessary burden on the public. Until BIE updates the forms, some contractors will continue to struggle to complete them. Further, by assessing the usefulness of the information it is collecting from JOM contractors, BIE may identify opportunities to both collect information that could improve program management and streamline information requests.

BIE Has Not Developed JOM Training

BIE has not provided or developed training for JOM contractors, according to BIE officials. National Johnson-O’Malley Association officials told us that BIE and BIA used to provide training that was helpful to JOM contractors on topics such as filling out annual reports and applications for JOM contracts, particularly to new staff managing these programs, but they no longer do so. A nonprofit organization for Indian education we interviewed also said JOM contractors need training on a range of issues, including how to complete JOM annual reports and other documentation, and on how to operate following implementation of the Modernization Act. According to the nonprofit organization, regular training on JOM is particularly important because certain aspects of the program, such as conducting annual assessments to determine the learning needs of Indian children served by the program, can be technically challenging.
Officials from one tribal contractor we interviewed said the tribe provides its own training to school staff who implement local JOM programs on such topics as how to conduct Indian Education Committee meetings, how to fill out reimbursement claims, and how to organize and maintain financial records for program administrators and parents on Indian Education Committees. The contractor said that BIE training on topics, including how to conduct and how often to hold Indian Education Committee meetings, would be particularly helpful. Another tribal contractor we interviewed, which BIE data identified as receiving among the largest amount of JOM funds of all contractors, said that other contractors they interact with do not have sufficient program knowledge or resources to provide training and could benefit from BIE training. According to a BIE official, a former JOM Program Specialist, training for JOM contractors is particularly important because there is frequent turnover among contractor staff responsible for administering programs. Officials from the nonprofit organization for Indian education also told us that high turnover rates among administrators of local JOM programs necessitate regular training for new staff. They added that more senior staff working on local JOM programs would also benefit from regular training because they may be implementing their programs inefficiently or ineffectively. BIE officials told us they have provided program updates and answered questions at conferences hosted by organizations representing JOM contractors. Not all contractors, however, are able to attend these conferences given their limited resources, according to three contractors we interviewed. Internal control standards state that management should develop training based on the needs of individuals’ roles.
BIE officials acknowledged that developing and providing training is needed, but they told us they are currently focused on other aspects of managing the JOM program and have not prioritized training. For example, the agency has set a goal in its strategic plan to develop a JOM program handbook by July 1, 2020. By providing training, BIE can ensure that contractors have the information they need to better serve their students.

BIE Has Not Clearly Defined or Identified All the Roles and Responsibilities of BIE and Other Interior Staff Involved in Administering the Program

BIE has not clearly defined roles and responsibilities or identified the staff necessary for conducting critical JOM functions. According to federal internal control standards, management should establish an organizational structure, assign responsibility, and delegate authority to achieve the entity’s objectives. BIE’s failure to define roles and responsibilities or identify staff for administering contracts, reviewing the appropriateness of contract types, and conducting program oversight is described in the following bullets.

Administering contracts. BIE did not identify staff to administer some contracts, which has contributed to some JOM programs affected by these contracts going unfunded. According to BIE and BIA officials, BIE did not assign any staff to administer at least 20 contracts in California, including helping contractors renew their contracts when they expired, typically after 3 years. As a result, these contracts—totaling over $300,000—expired and were not renewed, disrupting JOM services. A BIE official informed us there were lapses in administering these contracts because BIE closed the office responsible for administering them as a result of its reorganization, which began in 2014, and never assigned anyone to assume responsibility for the contracts associated with that office.
BIE has not assessed whether similar lapses in coverage may have occurred in other states or regions. BIE officials identified the unallocated funds from California in September 2019. In October 2019, BIE officials began efforts to identify and contact officials responsible for all the JOM programs whose contracts lapsed in California due to gaps in BIE’s administration of the program and began the process to start new JOM programs in the future. However, without identifying staff to administer all JOM contracts, problems with renewing and awarding contracts may persist.

Reviewing the appropriateness of contract types. Interior’s Office of the Solicitor does not have a role in reviewing the issuance of new JOM contracts, according to a senior attorney in that office. The Office of the Solicitor’s lack of a role in reviewing JOM contracts increases the risk that contracts are not used appropriately. For example, we found that BIE has been using self-determination contracts to disburse JOM funds to non-tribal contractors, which is not authorized by the Indian Self-Determination and Education Assistance Act. Under the Act, only Indian tribes and tribal organizations are eligible to enter into self-determination contracts; these contracts may not be used for non-tribal entities, such as school districts and states. The use of self-determination contracts for contractors that are not eligible to receive them can result in costs to the government. Self-determination contracts include provisions that would not otherwise be included in non-tribal JOM contracts, according to a senior attorney in the Office of the Solicitor. For example, self-determination contracts may include contract support costs and extend the Federal Tort Claims Act to tribal government employees administering the federal program(s) under these contracts.
Therefore, school contractors that were disbursed JOM funds through self-determination contracts may have received contract support costs and legal protections they would not have been eligible to receive, according to the senior attorney. BIE officials told us that they have not determined how long self-determination contracts have been used to disburse JOM funds to non-tribal entities, how many non-tribal contractors were awarded these contracts, or whether the government has incurred costs as a result of using the wrong types of contracts. They said this information will be difficult to obtain because it is not systematically collected. After we found that BIE was using self-determination contracts to disburse JOM funds to school contractors, a senior attorney in the Office of the Solicitor said that her office would provide assistance as requested to BIE in transitioning these contracts to appropriate contracts. By systematically including the Office of the Solicitor in the process for reviewing JOM contracts, BIE can ensure that its contracts are the appropriate type and can minimize the risk of future inappropriate costs to the federal government.

Conducting JOM oversight activities. BIE has not defined the roles and responsibilities related to overseeing JOM programs or identified staff dedicated to this function. For example, BIE has not identified staff at Education Resource Centers or other BIE offices with the capacity to conduct site visits and review JOM annual reports submitted by contractors. As a result, the bureau’s oversight of JOM contractors is done on an ad hoc basis and sometimes not done at all, according to BIE officials. For example, in an internal memo addressed to BIE’s Director, a senior BIE official said that because the bureau has not identified staff with the capacity to conduct site visits, most Education Resource Centers have not conducted any site visits in at least 5 years.
Officials from one tribal JOM contractor that said it is subject to BIE oversight told us that BIE has not conducted a site visit of their program in 10 years. They noted that BIE’s past site visits resulted in recommendations that improved their program activities and procedures and changed how they defined student eligibility. In addition, the head of an Education Resource Center said that JOM oversight activities are collateral duties that his staff do not have time to fulfill. Further, the responsibilities of officials who are charged with overseeing JOM programs have not been clearly defined. For example, BIE has not defined the responsibilities related to conducting site visits, such as what aspects of the program should be reviewed and which contractors should be selected for site visits. This lack of clearly defined responsibilities has resulted in inconsistencies in how officials are conducting oversight activities and potential gaps in coverage of contractors that are subject to oversight. BIE’s lack of oversight may also increase the risk of misuse and abuse of JOM funds. According to Interior’s Office of Inspector General, there have been three identified cases of theft related to the JOM program that occurred between 2004 and 2010. For example, a program coordinator of a JOM contract stole program funds as part of an embezzlement fraud scheme and was ordered to pay nearly $36,000 in restitution. By identifying staff who have the capacity to carry out oversight activities and clearly defining related responsibilities such as conducting site visits and reviewing JOM annual reports, BIE could provide support to contractors in improving their program activities and procedures and reduce the risk of potential fraud and abuse of JOM funds. Senior BIE officials acknowledged that they have not identified the staff necessary for conducting these critical JOM functions and, in November 2019, the Director of BIE approved hiring three additional JOM specialists. 
The core responsibilities of the new specialist positions, according to a knowledgeable BIE official, will be to support the administration of contracts, oversee contractors, and provide technical assistance. However, the exact roles and responsibilities for the new employees and the extent to which BIE staff in the Education Resource Centers will continue their role in providing programmatic support have not yet been determined. An official knowledgeable about the new JOM specialist positions added that defining the specific roles and responsibilities for these positions will be an iterative process in which BIE will assess the new staff’s capacity to assume all the JOM responsibilities that are currently assigned to other staff. Until all the roles and responsibilities related to JOM program management have been identified and clearly defined, challenges in administering contracts, reviewing the appropriateness of contract types, and overseeing the program may persist.

Conclusions

American Indian and Alaska Native students have unique educational and cultural needs, which can include learning Native languages, cultures, and histories, and obtaining additional academic support. The JOM program is intended to address these needs, which may not otherwise be met through the public school system. BIE plays a critical role in administering the JOM program, which is central to the bureau’s mission of providing Indian students quality education opportunities starting in early childhood in accordance with a tribe’s needs for cultural and economic well-being. However, BIE lacks key JOM program information necessary for effective oversight, including complete information on which contractors are participating in JOM. BIE also has not assessed the usefulness of the information it collects from contractors, and relies on outdated forms to collect data. Without improved program data, BIE cannot effectively oversee the program.
In addition, BIE does not provide training for JOM contractors. This lack of training may result in contractors misinterpreting JOM regulations and managing their programs inconsistently. Further, BIE has not clearly defined the roles and responsibilities of staff involved in administering the JOM program, which has resulted in gaps in program management and oversight. Until staff roles and responsibilities are clearly defined and identified, gaps in managing and overseeing the program may persist, resulting in an increased risk of potential misuse or abuse of JOM funds. Without taking steps to improve the management and oversight of the JOM program in these key areas, BIE cannot ensure that the program is truly serving the educational needs of eligible American Indian and Alaska Native students.

Recommendations for Executive Action

We are making the following five recommendations to Interior:

The Director of the Bureau of Indian Education should develop a systematic process for identifying JOM contractors and maintaining an accurate and complete list of contractors and other relevant information about contractors, such as the amount of JOM funds they receive and their current points of contact. (Recommendation 1)

The Director of the Bureau of Indian Education, in coordination with the Bureau of Indian Affairs as needed, should establish a process to track and monitor the timeliness of JOM disbursements to non-tribal contractors, including identifying a target date for disbursing funds to these contractors. (Recommendation 2)

The Director of the Bureau of Indian Education should develop a timeline to assess the usefulness of the information it is collecting from JOM contractors and update JOM information collection forms, including converting them to an electronic format to reduce the burden on contractors to complete them.
(Recommendation 3)

The Director of the Bureau of Indian Education should develop and provide training to contractors on administering the JOM program. (Recommendation 4)

The Director of the Bureau of Indian Education should clearly define the roles and responsibilities and identify the staff necessary for conducting critical JOM functions, including administering contracts, reviewing the appropriateness of contract types, and overseeing those contractors that are subject to BIE oversight. (Recommendation 5)

Agency Comments and Our Evaluation

We provided a draft of this report to Interior for review and comment. We also provided relevant report sections to and requested technical comments from the National Indian Education Association and the National Johnson-O’Malley Association. In its comments, reproduced in appendix I, Interior concurred with our five recommendations and described actions BIE and BIA plan to take to address them. In our draft report, we recommended that BIE clearly define the roles and responsibilities and identify the staff necessary for conducting technical assistance, among other critical JOM functions. We removed the reference to technical assistance from our report because, after we provided our draft report, Interior promulgated new, final JOM program regulations that include a new process for requesting and providing technical assistance. We did not receive any comments from the nonprofit organizations. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of the Interior, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (617) 788-0534 or emreyarrasm@gao.gov.
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. Appendix I: Comments from the Department of the Interior Appendix II: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Beth Sirois (Assistant Director), Brian Schwartz (Analyst-in-Charge), Ben DeYoung, and Alex Galuten made key contributions to this report. Additional assistance was provided by Edward Bodine, Gina M. Hoover, Thomas M. James, Grant M. Mallie, Sheila R. McCoy, Anna Maria Ortiz, Jeanette M. Soares, Joy K. Solmonson, Curtia O. Taylor, and William T. Woods.
Why GAO Did This Study American Indian and Alaska Native students enrolled in public schools have performed consistently below other students on national assessments from 2005 through 2019. The JOM program provides academic and cultural supports, through contracts, to meet the specialized and unique educational needs of American Indian and Alaska Native students enrolled in public schools and select private schools. In fiscal year 2019, Interior allocated about $23 million for the JOM program, according to Interior's budget documentation. GAO was asked to review issues related to Interior's JOM program, administered by BIE. This report examines the extent to which BIE (1) has key program information, (2) provides training to JOM contractors, and (3) clearly defines and identifies JOM roles and responsibilities. GAO reviewed relevant federal laws, regulations, and both BIE and JOM contractor documents; analyzed existing data and information on JOM; and interviewed agency officials, five JOM contractors of different types, and two nonprofit organizations selected for their knowledge of the JOM program. What GAO Found The Department of the Interior's (Interior) Bureau of Indian Education (BIE) does not have key information to manage the Johnson-O'Malley (JOM) program, which provides supplemental education services to meet the specialized and unique needs of American Indian and Alaska Native students. For example, BIE does not maintain a complete and accurate list of all its JOM contractors, who provide services including targeted academic supports, Native language classes, and cultural activities. In May 2019, BIE began to identify all the contractors, but officials acknowledged that their list is still incomplete, and GAO found problems with the list, such as duplicate entries. Federal internal control standards state that an agency should have relevant, reliable information to run its operations.
Maintaining a complete list of contractors would improve BIE's administration of the JOM program. BIE does not provide any training for JOM contractors. For example, BIE does not provide training to contractors on how to effectively manage their JOM programs or meet program requirements. By providing training for contractors, BIE could ensure that contractors understand the program and are equipped to provide services to meet the educational needs of their students. In addition, BIE has not clearly defined the roles and responsibilities or identified the staff needed to effectively administer the JOM program (see figure). For example, when BIE closed a field office in California, staff were not identified to administer the office's contracts, including helping contractors renew their contracts when they expired. Also, BIE has not identified a role for Interior's attorneys in reviewing the contracts and some contractors have types of contracts for which they are not eligible. Further, BIE has not identified staff to conduct consistent program oversight, which is important to mitigating the risk of misuse and abuse of JOM funds. Until all JOM roles and responsibilities have been defined and identified, challenges may persist. What GAO Recommends GAO is making five recommendations, including that the Director of BIE should maintain an accurate and complete list of JOM contractors, develop JOM training, and clearly define roles and responsibilities and identify staff for carrying out JOM functions. Interior agreed with the recommendations.
Background Headquartered in Washington, D.C., the Corps has eight divisions established generally according to watershed boundaries and 38 districts that carry out its Civil Works program. Corps headquarters primarily develops policies and provides agency oversight. The Assistant Secretary of the Army for Civil Works, appointed by the President, sets the strategic direction for the agency and has principal responsibility for the overall supervision of functions relating to the Civil Works program. The Chief of Engineers—a military officer—oversees the Corps’ civil works and military missions. The eight divisions—Great Lakes and Ohio River, Mississippi Valley, North Atlantic, Northwestern, Pacific Ocean, South Atlantic, South Pacific, and Southwestern—coordinate Civil Works projects in the districts within their respective divisions. Corps districts are responsible for planning, engineering, constructing, and managing Civil Works projects. Section 219 Program Overview and Funding Process Congress established the Section 219 program in the 1992 WRDA, which authorized the Corps to provide planning and design assistance to nonfederal sponsors in carrying out 18 environmental infrastructure projects, located in certain specified locations around the United States. For example, the 1992 WRDA authorized the Corps to provide assistance for a combined sewer overflow treatment facility for the city of Atlanta, Georgia. In subsequent acts, Congress authorized the Corps to provide construction assistance for Section 219 projects, in addition to planning and design, and significantly expanded the number of authorized projects. From 1992 through 2007, Congress authorized a total of 310 Section 219 projects, with the most recent and largest number of project authorizations occurring in 2007 (see table 1). 
For Section 219 projects, Congress specifies the geographic location (e.g., city, county), amount of authorized dollars, and purpose or scope of the project (e.g., development of drainage facilities to alleviate flooding problems). In general, Section 219 projects fall into one or more of the following types of projects: Drinking water treatment and distribution. Projects that build water treatment plants, water storage tanks, and water distribution lines. Wastewater treatment. Projects that build sewage treatment plants, wastewater collection systems, and facilities that purify treated wastewater for irrigation and other purposes. Stormwater management. Projects that help improve the management of storm sewers, eliminate or control sewer overflows, and address flooding. According to Corps data, of the 310 originally authorized Section 219 projects, 58 have been deauthorized and were no longer active, as of November 2018. The Corps is required by the 1986 WRDA, as amended, to annually identify all authorized projects that have not received obligations in the preceding 5 full fiscal years and submit that list to Congress. If funds are not obligated for planning, design, or construction of a project on that list during the next fiscal year, the project is deauthorized. The Secretary of the Army publishes a list of deauthorized projects in the Federal Register. Based on this process, the Corps considered deauthorizing 197 additional Section 219 projects in its fiscal year 2017 report to Congress. However, the 2018 WRDA provided that the projects identified for deauthorization in the Corps’ fiscal year 2017 report were generally not to be deauthorized unless they met certain additional requirements. The Corps allocates funding for Section 219 projects and other environmental infrastructure programs from the construction account. 
That account generally receives no-year appropriations through the Energy and Water Development Appropriations Act—meaning the appropriation remains available for obligation for an indefinite period of time. Prior to fiscal year 2012, the conference reports accompanying the annual Energy and Water Development Appropriations Acts generally listed individual Section 219 projects and specific allocations of funding for each project. However, since fiscal year 2012, Congress has not provided allocation direction for individual projects, but instead generally has designated an amount in reports and joint explanatory statements for environmental infrastructure overall, ranging from about $30 million to $55 million annually. According to Corps data, from fiscal years 1992 through 2017, the Corps expended over $440 million on Section 219 projects. Process for Managing Section 219 Projects Similar to other Civil Works projects, the Corps generally becomes involved in Section 219 projects when a nonfederal sponsor contacts the Corps for assistance on an authorized project. Corps districts gather additional information on the project from the nonfederal sponsor and determine if it is ready to be initiated. Once the Corps receives an appropriation from Congress, the agency decides whether to allocate funding to the project. If the project is selected to receive funding, it enters the preconstruction engineering and design phase. The purpose of this phase is to complete any additional planning studies and all of the detailed technical studies and designs—such as environmental impact studies—needed to begin construction. During this phase, the Corps also completes an environmental assessment of the proposed project. 
To initiate construction, the Corps and the nonfederal sponsor sign a project partnership agreement that specifies how the parties will collaborate, their respective roles and responsibilities, and the terms and conditions under which they will execute their responsibilities. The project partnership agreement typically requires the sponsor to provide without cost to the U.S. government all lands, easements, rights-of-way, relocations, and disposal areas necessary for the construction and subsequent maintenance of the project; maintain and operate the project after completion without cost to the U.S. government; and provide cash or work-in-kind contributions to make the sponsor’s total contribution equal to 25 percent if the value of the sponsor’s land contribution does not equal or exceed 25 percent of the project cost. The Corps manages the construction phase, contracting out construction work to private engineering and construction contractors. Throughout the construction phase, the Corps oversees the contractors’ work, performing routine inspections to ensure it meets the Corps’ design and engineering specifications. During construction, the Corps, the nonfederal sponsor, and the private contractor typically appoint representatives to a project coordination team that meets regularly until the period of construction ends. Upon notification by the District Engineer that construction is complete, the nonfederal sponsor is responsible for operations and maintenance. Figure 1 shows the major steps in managing a Section 219 project. From Fiscal Years 2013 through 2017, the Corps Spent About $81 Million on 29 Section 219 Projects to Develop Drinking Water, Wastewater, and Stormwater Infrastructure The Corps expended about $81 million on 29 Section 219 projects from fiscal years 2013 through 2017, which included various types of projects such as drinking water treatment and distribution, wastewater treatment, and stormwater management.
Examples of these projects include the following: Drinking Water Treatment and Distribution. The Corps manages a Section 219 project that includes the development of water desalination infrastructure in various sections of the South Perris community, located east of Los Angeles, California. In general, the South Perris area relies on a mixture of groundwater and water imported from different sources, including the Colorado River. According to the Corps’ environmental assessment, various factors, such as drought, caused the community to supplement its drinking water supply through increased use of groundwater; however, the groundwater in the area historically contained high salt content. Since the project’s authorization in fiscal year 2001 through fiscal year 2017, the Corps has helped construct groundwater wells and pipelines, which connect to drinking water treatment facilities that reduce the amount of salt in the water (see fig. 2). According to the nonfederal sponsor for the South Perris project, the overall project has provided benefits such as creating a local potable water source to meet anticipated population growth and reducing the community’s dependence on imported water. Wastewater Treatment. The Corps manages a Section 219 project that includes the rehabilitation of sewer lines within the metropolitan area of St. Louis, Missouri. The city’s wastewater system dates back to the 1800s and lacks the capacity to handle large flows. From the project’s authorization in fiscal year 1999 through fiscal year 2017, the Corps has assisted the community, among other things, in sewer rehabilitation of deep tunnels. According to documentation from the Corps’ St. Louis District, the rehabilitation of sewers is important in protecting the health and safety of the public, given the risk of untreated sewage being discharged into the environment. Stormwater Management. 
The Corps manages a Section 219 project that involves the development of stormwater infrastructure, among other things, across a five-county region (Calumet region) in northern Indiana. For example, flooding is a widespread problem in the region and it has affected commercial corridors, including within Gary, Indiana. From the project’s authorization in fiscal year 1999 through fiscal year 2017, the Corps has been assisting the region with measures to alleviate flooding, such as constructing stormwater storage areas under the street (see fig. 2). According to a nonfederal sponsor we interviewed, the Corps’ efforts in the Calumet region have offered benefits to local communities by, among other things, improving storm drainage in an area that experienced flooding during heavy rainfall. The 29 Section 219 projects with expenditures from fiscal years 2013 through 2017 were located in different parts of the country and managed by six Corps divisions, although the majority of the projects were under the South Pacific Division (10 of the 29 projects) and Great Lakes and Ohio River Division (eight of the 29 projects). The five states with the largest number of projects during this period were California, with nine Section 219 projects; Virginia, with three Section 219 projects; and Michigan, Pennsylvania, and Mississippi, each with two Section 219 projects. These projects varied in terms of the geographic area covered, such as a city, county, or region (e.g., multiple counties). Based on the project descriptions we reviewed, 10 of the projects focused on the environmental infrastructure needs of cities, nine focused on counties and 10 on regions. Projects that cover a broad geographic area, such as those at the county or regional level, generally consist of different types of subprojects. 
For example, the Cook County, Illinois Section 219 project included several subprojects, such as the construction of water mains and sewer improvements in different areas across the county. Most of the Section 219 projects (24 of the 29 projects) were authorized in 2000 or earlier and were ongoing as of November 2018. Only one of the 29 projects was completed; the project in St. Croix Falls, Wisconsin, was completed in fiscal year 2014. For the St. Croix Falls project, the Corps assisted with improvements to a wastewater treatment plant, such as installing equipment to screen out large solids that otherwise would be released into the St. Croix River. Of the 28 remaining projects that were ongoing, as of November 2018, 17 were in the construction phase, and 11 were in the preconstruction engineering and design phase. Table 2 summarizes information on the 29 projects with expenditures from fiscal years 2013 through 2017 by division and district. See appendix I for additional information on each project, including a detailed description and the total amount of expenditures from fiscal years 2013 through 2017. As previously noted, the Corps spent about $81 million on these 29 Section 219 projects from fiscal years 2013 through 2017. During that period, expenditures by fiscal year ranged from about $11 million to $22 million. Divisions with the largest percentage of overall expenditures from fiscal years 2013 through 2017 were the South Atlantic Division (36 percent) and Mississippi Valley Division (25 percent). The divisions with the smallest percentage of overall expenditures during the period were the North Atlantic Division (less than 1 percent) and Southwestern Division (4 percent). Table 3 summarizes overall expenditures from fiscal years 2013 through 2017 by division and fiscal year. Of the 29 projects with expenditures from fiscal years 2013 through 2017, 15 projects expended less than $1 million each, representing a total of $2.3 million. 
The majority of these projects (10 of the 15 projects) were in the preconstruction engineering and design phase. For example, as part of the Cambria, California, project, the Corps expended about $244,000 on preconstruction engineering and design activities, such as evaluating the environmental impacts of constructing a seawater desalination facility. In addition, for the Cumberland County, Tennessee, project, the Corps expended about $261,000 on planning and design for water supply projects. In comparison, 14 of the 29 projects expended more than $1 million each over the same time period, representing a total of $78.2 million. In particular, the Corps spent over half of the funding during this time period on four projects: Calumet Region in Indiana; DeSoto County, Mississippi; Jackson County, Mississippi; and Lakes Marion and Moultrie in South Carolina (see fig. 3). These projects generally consisted of multiple subprojects and covered a wide geographic area. For example, the Calumet Region project has involved over 25 subprojects since its authorization in fiscal year 1999 through August 2018. Through these subprojects, the Corps has managed various activities, including replacing drinking water lines, improving wastewater treatment plants, and installing stormwater infrastructure in a number of cities across Indiana. Additionally, the Lakes Marion and Moultrie project in South Carolina has included a range of subprojects, such as construction of a water treatment plant, construction of a water tower, and installation of water transmission lines across six counties. The Corps Generally Follows Its Standard Budget Prioritization Process for Section 219 Projects but Does Not Use Written Criteria to Rank Projects for Funding The Corps generally follows its standard budget process for prioritizing funding for the Section 219 program. 
This process involves ranking Section 219 projects for funding by all three levels of the Corps’ organization—districts, divisions, and headquarters. District officials identify Section 219 projects, including subprojects, and other environmental infrastructure projects for potential funding; enter a numerical ranking for each project in the Civil Works Integrated Funding Database; and submit the information to the division through the database. Division officials receive the rankings from each of the multiple districts in the division. Division officials then re-rank the Section 219 and other environmental infrastructure projects from all of their districts against one another. Division officials enter the numerical ranking for all projects across all their districts into the Civil Works Integrated Funding Database and submit the information to headquarters through the database. Headquarters officials receive the rankings from each division. They re-rank the projects from all divisions against each other to generate the final nationwide rankings. Based on the final rankings, not all Section 219 and other environmental infrastructure projects that the divisions submitted will receive funding. Headquarters officials then determine a funding amount for each Section 219 project selected to receive funding and publish these decisions in the agency’s annual work plan. After headquarters publishes the annual work plan, headquarters officials begin to allocate funding to Section 219 projects. However, the Corps does not have written criteria to guide the ranking of Section 219 projects, in contrast to other types of projects. Specifically, in our December 2018 report, we found that the Corps uses written criteria—such as the rate of economic return, populations at risk, and economic impact—to prioritize funding for core mission areas, such as flood risk management and navigation projects. 
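The three-level re-ranking flow described above (districts rank their own candidates, divisions re-rank all of their districts' projects against one another, and headquarters re-ranks across divisions to produce the final nationwide rankings) can be sketched roughly as follows. All names, scores, and data structures here are illustrative assumptions; the actual process runs through the Civil Works Integrated Funding Database, whose schema is not described in this report.

```python
# Illustrative sketch of the Corps' three-level ranking flow for
# Section 219 projects. All data and field names are hypothetical.

def rank(projects, key):
    """Assign 1-based numerical rankings by a scoring key (higher is better)."""
    ordered = sorted(projects, key=key, reverse=True)
    for position, project in enumerate(ordered, start=1):
        project["rank"] = position
    return ordered

# District level: each district ranks its own candidate projects.
district_submissions = {
    "District A": [{"name": "Project 1", "score": 0.7},
                   {"name": "Project 2", "score": 0.4}],
    "District B": [{"name": "Project 3", "score": 0.9}],
}
for projects in district_submissions.values():
    rank(projects, key=lambda p: p["score"])

# Division level: re-rank all districts' projects against one another.
division_pool = [p for projects in district_submissions.values() for p in projects]
rank(division_pool, key=lambda p: p["score"])

# Headquarters level: re-rank across all divisions (one division shown here)
# to produce the final nationwide rankings published in the annual work plan.
nationwide = rank(division_pool, key=lambda p: p["score"])
```

The key point the sketch captures is that each level overwrites the previous level's rankings, so a project's final position depends on headquarters' nationwide comparison, not on its standing within its own district.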
While Corps budget guidance indicates the criteria each core mission area should use in the ranking process, it does not specify criteria for Section 219 or other environmental infrastructure projects. In the absence of written criteria, Corps officials use their discretion on how to rank Section 219 projects for funding, according to Corps headquarters officials. When ranking Section 219 projects for funding, officials in each of the districts we interviewed generally consider whether Section 219 projects can be completed within the fiscal year. However, we found that the districts vary in terms of whether other factors are considered and what those factors are. Specifically, one district considers the level of congressional support and the potential public health impacts of the project. Another district considers the level of congressional support and the dollar value of the project. A third district only considers whether the project can be completed within the fiscal year. At the division level, officials we interviewed stated that they consider, among other things, the level of congressional support for the projects; however, to a large extent they rely on the rankings provided by their respective districts. Headquarters officials said that they primarily focus on ensuring that projects are geographically dispersed across the divisions when assigning final rankings for Section 219 projects. In recent years, congressional direction has indicated that the Corps, when allocating funding, is to consider giving priority for environmental infrastructure projects that have certain characteristics.
For example, the Joint Explanatory Statement accompanying the Consolidated Appropriations Act in 2017 directed the Corps to consider characteristics such as projects: with the greater economic impact; in rural communities; in communities with significant shoreline and instances of runoff; in or that benefit counties or parishes with high poverty rates; and in financially distressed municipalities. Corps headquarters, division, and district officials we interviewed said that while they are generally aware of this congressional direction, they do not use it to guide the Section 219 ranking process. According to a division official, written criteria would be helpful for ranking projects across multiple districts and would clarify procedures for new staff. Officials we interviewed in the three districts said, in general, written criteria would clarify the ranking process. For example, one Corps district official stated that written criteria would provide standardization to the ranking process, ensuring that each district is focused on the highest priorities of the agency. According to Corps headquarters officials, although they see value in having written criteria to prioritize Section 219 funding, they have not developed such criteria because the agency considers Section 219 projects to be outside the agency’s core mission areas, such as flood control. According to a 2008 Corps report to Congress, “Funds provided to the Corps for wastewater treatment and municipal and industrial water supply projects necessarily reduce the amount of funds that instead could be used for the primary mission areas of the Corps. Thus, provision of Civil Works funding for these environmental infrastructure programs negatively affects the Corps’ ability to meet critical mission needs…such as restoring nationally significant ecosystems.” Headquarters officials confirmed that this report accurately reflects the agency’s current position. 
Corps officials also stated that developing written criteria has not been a priority because Section 219 projects represent a small percentage of the agency’s overall Civil Works budget. Federal standards for internal control state that agencies should use quality information to achieve their objectives by identifying information requirements. The federal standards also call for agencies to design control activities to achieve objectives and respond to risks, such as by clearly documenting internal control in management directives, administrative policies, or operating manuals. By establishing written criteria, the Corps would have greater assurance that its project selections align with a clear set of priorities, such as the characteristics identified in recent congressional direction for the agency to consider when selecting Section 219 projects for funding. Conclusions Since the inception of the Section 219 program in 1992, the Corps has spent over $440 million on water infrastructure projects across its divisions and districts. However, the Corps has not developed written criteria for ranking Section 219 projects for funding as it has for other Civil Works programs within the agency’s core mission areas. Consequently, officials at the district, division, and headquarters levels are using their discretion regarding which factors to consider in ranking Section 219 projects for funding. Further, Congress has provided direction to the Corps on which characteristics to consider in prioritizing Section 219 funding; however, Corps officials stated that they do not use it to guide their ranking of Section 219 projects. By establishing written criteria, the Corps would have greater assurance that its project selections align with a clear set of priorities, such as the characteristics identified in recent congressional direction for the agency to consider when selecting Section 219 projects for funding.
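If the Corps were to adopt written criteria, the characteristics identified in the 2017 Joint Explanatory Statement could, for example, be encoded as a weighted score applied uniformly across districts. The weights, field names, and example projects below are purely illustrative assumptions, not anything the Corps or Congress has specified.

```python
# Hypothetical weighted-criteria score for ranking Section 219 projects,
# loosely based on the characteristics listed in the 2017 Joint
# Explanatory Statement. Every weight and field name is an assumption.

CRITERIA_WEIGHTS = {
    "economic_impact": 0.35,         # greater economic impact
    "rural_community": 0.20,         # located in a rural community
    "shoreline_runoff": 0.15,        # significant shoreline and runoff
    "high_poverty_county": 0.15,     # benefits a high-poverty county/parish
    "financially_distressed": 0.15,  # financially distressed municipality
}

def criteria_score(project):
    """Combine per-criterion ratings (0.0 to 1.0) into a weighted score."""
    return sum(weight * project.get(criterion, 0.0)
               for criterion, weight in CRITERIA_WEIGHTS.items())

projects = [
    {"name": "City water main", "economic_impact": 0.8},
    {"name": "Rural sewer line", "economic_impact": 0.5,
     "rural_community": 1.0, "high_poverty_county": 1.0},
]
ranked = sorted(projects, key=criteria_score, reverse=True)
```

Under this illustrative weighting, the rural, high-poverty project outranks the city project despite its lower economic-impact rating, which is the kind of documented, repeatable trade-off that written criteria would make explicit.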
Recommendation for Executive Action The Secretary of the Army should direct the Chief of Engineers and Commanding General of the U.S. Army Corps of Engineers to develop written criteria for ranking Section 219 projects for funding, taking into account a clear set of priorities, such as those identified by recent congressional direction. Agency Comments We provided a draft of this report to the Department of Defense for review and comment. In its written comments, reprinted in appendix II, the department concurred with our recommendation and described the actions it plans to take. Specifically, the Corps will develop and document a more rigorous set of priorities in line with those identified by recent congressional direction. The department also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Chief of Engineers and Commanding General of the U.S. Army Corps of Engineers, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or fennella@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Appendix I: Description of U.S. Army Corps of Engineers Section 219 Projects and Expenditures from Fiscal Years 2013 through 2017 Project description as authorized by statute Water-related environmental infrastructure, Allegheny County, Pennsylvania.
A combined sewer overflow treatment facility for the city of Atlanta, Georgia and watershed restoration and development in the regional Atlanta watershed including Big Creek and Rock Creek. Water-related infrastructure for the parishes of East Baton Rouge, Ascension, and Livingston, Louisiana. Water-related infrastructure projects in the counties of Benton, Jasper, Lake, Newton, and Porter, Indiana. Desalination infrastructure, Cambria, California. Water and wastewater infrastructure for the Contra Costa Water District, California. Water-related infrastructure and resource protection and development, Cook County, Illinois. Water supply projects in Cumberland County, Tennessee. Desert Hot Springs, California Resource protection and wastewater infrastructure, Desert Hot Springs, California. Wastewater treatment project in the county of DeSoto, Mississippi. Water supply and wastewater infrastructure projects in the counties of Accomack, Northampton, Lee, Norton, Wise, Scott, Russell, Dickenson, Buchanan, and Tazewell, Virginia. Water-related infrastructure and resource protection, including stormwater management, and development, El Paso County, Texas. Wastewater infrastructure assistance to reduce or eliminate sewer overflows, Genesee County, Michigan. Industrial water reuse project for the Harbor/South Bay area, California. Water infrastructure, Inglewood, California. Provision of an alternative water supply for Jackson County, Mississippi. Wastewater treatment and water supply treatment and distribution projects in the counties of Berkeley, Calhoun, Clarendon, Colleton, Dorchester, and Orangeburg, South Carolina. A project to provide water facilities for the Fox Field Industrial Corridor, Lancaster, California. Alleviation of combined sewer overflows for Lynchburg, Virginia, in accordance with combined sewer overflow control plans adopted by, and currently being implemented by, the non-Federal sponsor. 
Water-related infrastructure in the counties of Lackawanna, Lycoming, Susquehanna, Wyoming, Pike, Wayne, Sullivan, Bradford, and Monroe, Pennsylvania, including assistance for the Montoursville Regional Sewer Authority, Lycoming County, Pennsylvania. Project description as authorized by statute Water and wastewater infrastructure in Hancock, Ohio, Marshall, Wetzel, Tyler, Pleasants, Wood, Doddridge, Monongalia, Marion, Harrison, Taylor, Barbour, Preston, Tucker, Mineral, Grant, Gilmer, Brooke, and Ritchie Counties, West Virginia. A project to eliminate or control combined sewer overflows in the cities of Berkley, Ferndale, Madison Heights, Royal Oak, Birmingham, Hazel Park, Oak Park, Southfield, Clawson, Huntington Woods, Pleasant Ridge, and Troy, and the village of Beverly Hills, and the Charter Township of Royal Oak, Michigan. Recycled water transmission infrastructure, Eastern Municipal Water District, Perris, California. Alleviation of combined sewer overflows for Richmond, Virginia, in accordance with combined sewer overflow control plans adopted by, and currently being implemented by, the non-federal sponsor. San Ramon Valley, California A project for recycled water for San Ramon Valley, California. Water supply desalination infrastructure, South Perris, California. Wastewater infrastructure, St. Croix Falls, Wisconsin. Projects to eliminate or control combined sewer overflows in the city of St. Louis and St. Louis County, Missouri. Total Appendix II: Comments from the Department of Defense Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Anne-Marie Fennell, (202) 512-3841 or fennella@gao.gov. Staff Acknowledgments In addition to the contact named above, Vondalee R. Hunt (Assistant Director), Anthony C. Fernandez (Analyst-In-Charge), Patricia Moye, Gloria Ross, and Sheryl Stein made significant contributions to this report. 
Important contributions were also made by Patricia Donahue, Tim Guinane, Susan Murphy, Sara Sullivan, Kiki Theodoropoulos, and Walter Vance.
Why GAO Did This Study
Under Section 219 of the 1992 Water Resources Development Act, as amended, Congress authorized the Corps to provide assistance for the design and construction of environmental infrastructure projects, known as Section 219 projects. Such projects include the development of water transmission lines. Congress typically provides a lump sum appropriation for the Corps' construction account, out of which Section 219 and other environmental infrastructure projects are funded. GAO was asked to review projects carried out under the Section 219 program. This report examines (1) the number and type of Section 219 projects and expenditures from fiscal years 2013 through 2017, and (2) how the Corps prioritizes funding for Section 219 projects. GAO reviewed relevant federal laws and agency guidance; analyzed agency data for fiscal years 2013 through 2017, the most recent time period for which data were available; and interviewed agency officials at headquarters, three divisions, and three districts, selected based on geographic distribution and the amount of Section 219 project expenditures.
What GAO Found
From fiscal years 2013 through 2017, the most recent available data, the U.S. Army Corps of Engineers (Corps) spent approximately $81 million on 29 Section 219 projects to develop drinking water, wastewater, and stormwater infrastructure. For example, through the St. Croix Falls, Wisconsin Section 219 project, the Corps assisted with improvements to a wastewater treatment plant. Of the 29 projects, the Corps spent over half of the funding during this period on four projects: (1) Calumet Region, Indiana; (2) Desoto County, Mississippi; (3) Jackson County, Mississippi; and (4) Lakes Marion and Moultrie, South Carolina. The Corps generally follows its standard budget prioritization process—which involves districts, divisions, and headquarters ranking each project and headquarters making final funding decisions—to prioritize Section 219 funding.
However, the Corps has not developed criteria to guide this process. GAO found that the Corps varies in the factors it uses to rank Section 219 projects. For example, one district considers whether a project can be completed within the fiscal year, while another considers the level of congressional support and the dollar value of the project. Headquarters officials said the agency views Section 219 projects as outside its core mission areas and therefore has not developed written criteria. Congressional direction has indicated that the Corps is to consider characteristics—such as projects with the greatest economic impact—in prioritizing Section 219 project funding. While aware of this direction, Corps officials said they do not consider it when ranking projects. Federal standards for internal control state that agencies should use quality information to achieve their objectives. By establishing written criteria, the Corps would have greater assurance that its Section 219 project selections align with a clear set of priorities, such as those identified by recent congressional direction.
What GAO Recommends
GAO recommends that the Corps develop written criteria for ranking Section 219 projects for funding, taking into account a clear set of priorities, such as those identified by recent congressional direction. The agency concurred with the recommendation.
gao_GAO-20-655T
Background
In the United States, the roles and responsibilities related to preparing for, assessing, and responding to communicable disease threats in the civil aviation system require immense coordination among a number of federal agencies and aviation stakeholders. Each federal agency has a different mission, which affects its responsibilities for protecting against communicable disease threats. DHS and HHS are the lead agencies for responding to a communicable disease threat. They focus, respectively, on protecting our borders at ports of entry, including airports, from threats from abroad and on protecting the nation from domestic and foreign health, safety, and security threats. FAA is responsible for civil aviation and commercial space transportation flight safety in the United States and the safe and efficient movement of air traffic in the national airspace system, as well as for the safety of U.S. airlines, other U.S. operators, and FAA-certificated aircrews worldwide. As part of this responsibility, FAA regulates and certificates airports, airlines, and airmen and provides guidance. In the case of a communicable disease threat, numerous federal, state, and local entities may be called upon to respond, depending on their legal authority and whether the threat is identified before, during, or after the flight. For example, before boarding, HHS and DHS may identify travelers who are not allowed to travel, based on public health threats. The CDC can prohibit the introduction of nonresident foreign nationals into the United States from designated countries or places, but only for such time as the CDC deems necessary for public health. During a flight, CDC regulations require pilots to immediately report to CDC any deaths or the occurrence of any travelers with signs or symptoms that may indicate a communicable disease infection during international flights coming to the United States.
And, once an aircraft with a suspected ill passenger approaches an airport, federal or local public health officials, first responders (e.g., fire or emergency medical technicians), airport authorities, air traffic control personnel, or a combination of these stakeholders may make decisions about and lead certain aspects of the response based on the situation and available response protocols or preparedness plans. In addition, some response-related roles and responsibilities are established in law or by interagency agreements, and others may be defined in FAA-required airport-emergency plans, although those plans are not required to address communicable disease threats. In addition, FAA supports and coordinates a range of R&D activities for the civil aviation system. The inventory of FAA's R&D activities is expressed in the National Aviation Research Plan (NARP) and in FAA's Fiscal Year R&D Annual Review. FAA is required to submit both of these documents annually to Congress. According to FAA's most recent NARP, FAA's research budget from all accounts in FY 2017 was $422.3 million. FAA's research budget supports activities conducted by FAA as well as a range of partners, including other government agencies, universities, and private sector organizations. FAA's process for developing its commercial aviation research portfolio spans the agency. To develop the NARP and its R&D portfolio, FAA's program planning teams, which focus on specific research program areas, identify R&D projects to meet one of DOT's three strategic goals and FAA's five R&D goals. Further, an executive board in FAA provides guidance and oversight over the agency's portfolio development process, and a statutorily created advisory committee—consisting of individuals who represent corporations, universities, associations, and others—conducts external reviews of FAA's R&D programs for relevance, quality, and performance.
This advisory committee also makes recommendations to FAA on the proposed R&D portfolios and budgets.
In the Continued Absence of a Comprehensive National Plan, the U.S. Aviation System Remains Insufficiently Prepared to Respond to Communicable Disease Threats
In 2015, we found that the United States lacked a comprehensive national aviation-preparedness plan to limit the spread of communicable diseases through air travel, though some individual airport and airline preparedness plans did exist. Accordingly, we recommended that DOT work with relevant stakeholders, such as HHS, to develop a national aviation-preparedness plan for communicable disease outbreaks. We emphasized that a comprehensive national plan would provide a coordination mechanism for the public-health and aviation sectors to more effectively prevent and control a communicable disease threat while also minimizing unnecessary disruptions to the national aviation system. Additionally, U.S. airports and airlines are not required to have individual preparedness plans for communicable disease threats, and no federal agency tracks which airports and airlines have them. As such, the extent to which U.S. airports and airlines have such plans is unknown. However, all 14 airports and 3 airlines we reviewed in 2015 had independently developed preparedness plans for responding to communicable disease threats from abroad. These plans generally addressed the high-level components that we identified as common among applicable federal and international guidance for emergency preparedness, such as establishment of an incident command center and activation triggers for a response. While the 14 airports and 3 airlines had plans that address communicable diseases, representatives from these airports and airlines reported facing multiple challenges in responding to threats.
These challenges included obtaining guidance; communicating and coordinating among responders; and assuring that employees have appropriate training, equipment, and sanitary workplaces. As we stated in our 2015 report, a national aviation-preparedness plan to respond to communicable disease outbreaks could help address these challenges. As of June 2020, DOT, DHS, and HHS stated that the federal government still has not developed a national aviation-preparedness plan to respond to communicable disease outbreaks. In making our recommendation in 2015, we pointed to Annex 9 to the Chicago Convention—an international aviation treaty to which the United States is a signatory—which contains a standard that obligates International Civil Aviation Organization (ICAO) member states to develop a national aviation-preparedness plan for communicable disease outbreaks. DOT and CDC officials in 2015 stated that some elements of a national aviation-preparedness plan already exist, including plans at individual airports. However, as we discussed in our 2015 report, individual airport plans are often contained in multiple documents, and FAA reported that the plans are intended to handle communicable disease threats posed by passengers on one or two flights, rather than an epidemic—which may require involvement from multiple airports on a national level. Most importantly, a national aviation-preparedness plan would provide airports and airlines with an adaptable and scalable framework with which to align their individual plans, to help ensure that individual airport and airline plans work in concert with one another. DOT and CDC officials agreed in 2015 and continue to agree today that a national aviation-preparedness plan could add value.
DOT, however, maintains that those agencies that have both legal authority and expertise for emergency response and public health—namely DHS and HHS—are best positioned to take the lead role in developing such a plan within the existing interagency framework for national-level all-hazards emergency preparedness planning. We continue to believe that DOT would be in the best position to lead the effort because FAA and DOT have stronger and deeper ties to, as well as oversight responsibility for, the relevant stakeholders that would be most involved in such a broad effort, namely airlines, airports, and other aviation stakeholders. In addition, DOT’s Office of the Secretary is the liaison to ICAO for Annex 9 to the Chicago Convention, in which the relevant ICAO standard is contained. In response to the current COVID-19 pandemic and in the absence of a national aviation-preparedness plan, DOT officials pointed to ongoing efforts to engage with interagency partners at DHS and HHS, as well as industry stakeholders, to better collaborate on the aviation sector’s communicable disease response and preparedness. For example, DOT told us that it has facilitated conference calls between federal and private sector stakeholders and has collaborated with CDC to update interim guidance for airline crews related to communicable diseases, specifically COVID-19. While these actions are helpful, some aviation stakeholders have publicly highlighted piecemeal response efforts that may have led to some of the confusion among stakeholders and chaos at certain airports that occurred earlier this year following the COVID-19 travel bans and increased screening efforts. For example, stakeholders described actions taken by individual airlines in the absence of FAA guidance, such as to cease operations to certain countries, and a piecemeal approach to establishing standards for safely continuing or expanding service, such as various airline and airport policies regarding facemasks. 
This piecemeal approach points to the continued need for DOT to implement our 2015 recommendation to develop a coordinated effort to plan for and respond to communicable disease threats. We have included this open recommendation as one of 16 high-priority recommendations to DOT.
FAA Has Taken Steps to Improve Its R&D Portfolio Management, but Has Conducted Limited Research on Disease Transmission in Aircraft and Airports
FAA is Taking Steps to Improve the Formulation and Management of its R&D Portfolio Based on GAO Recommendations
While a national aviation-preparedness plan can help better manage the response to the next aviation pandemic, other efforts such as research and development are also key. In 2017, we found that FAA's actions related to the management of its R&D portfolio were not fully consistent with statutory requirements, agency guidance, and leading practices. As part of that work, we assessed FAA's actions to manage its R&D portfolio in three key areas: (1) developing its portfolio of R&D projects, (2) tracking and evaluating those projects, and (3) reporting on its portfolio. We found that FAA could be more strategic in how it develops its R&D portfolio, chiefly in identifying long-term research needs and in improving disclosure of how projects are selected. As a result of these deficiencies, we found that FAA management could not be assured that the highest priority R&D was being conducted. We also found that while FAA tracks and evaluates its research projects consistent with leading practices, it did not fully address all statutory reporting requirements, such as identifying long-term research resources in the National Aviation Research Plan (NARP) or preparing the R&D Annual Review in accordance with government performance-reporting requirements. These reporting deficiencies can limit the usefulness of the reports to internal and outside stakeholders.
Accordingly, in 2017, we recommended that DOT direct FAA to (1) take a more strategic approach to identifying long-term R&D priorities across the agency, (2) disclose how research projects are prioritized and selected, and (3) ensure that the NARP and R&D Annual Reviews meet statutory requirements for content. DOT agreed with all three recommendations. As of June 2020, FAA has fully addressed one of our recommendations and taken partial action on the two other recommendations. Specifically, FAA fully responded to our recommendation that FAA disclose the process it uses for prioritizing and selecting research projects by updating its internal guidance documents in 2018 to allow better transparency over project selection. In partially responding to our recommendation to take a more strategic approach to identifying research priorities across the agency, in June 2019, FAA issued a redesigned National Aviation Research Plan (NARP) for 2017-2018. The redesigned plan is a good first step. Also as part of an effort to be more strategic, FAA is beginning to take actions to understand emerging aviation issues requiring FAA's research attention. This recommendation has not been fully addressed as, according to FAA officials, the agency is still developing guidance to ensure that future NARPs take a strategic approach and incorporate emerging issues into future plans. FAA officials told us they plan to finalize the guidance by the end of 2020. Similarly, with respect to our recommendation aimed at achieving compliance with statutory reporting requirements, the redesigned 2017-2018 NARP included a list of agreements with federal and nonfederal entities on research activities, resource allocation decisions, and a description of technology transfer to government, industry, and academia, among other items.
Officials told us that they are finalizing the 2019 R&D Annual Review, which has been redesigned to address other statutory reporting requirements, and will develop guidance to ensure that future documents meet those requirements.
Disease Transmission Research Has Received Limited FAA Focus in Recent Years
FAA has sponsored limited federal research into disease transmission onboard aircraft and in airports. FAA's research goals focus on areas like improving airport operations and air space management, and developing new technologies, which FAA has aligned to DOT's strategic goals related to safety, infrastructure, and innovation. Based on our prior work and interviews with FAA officials, we found that FAA's research in cabin safety for crew and passengers does not focus on disease transmission. For example, according to FAA officials, as of June 2020, ongoing research that most closely relates to disease contamination is research related to monitoring the quality of “bleed air,” which is outside air that is drawn through jet engines into an aircraft cabin. FAA officials said that its Civil Aerospace Medical Institute is participating in this research. Even so, FAA has funded some programs that are relevant to mitigating communicable disease transmission at airports and on aircraft. For example, in 2015 the Transportation Research Board’s Airports Cooperative Research Program (ACRP), which is funded by FAA’s Airport Improvement Program (AIP), decided to hold a series of workshops on topics that are of significance to airports and that are not being addressed by other federal research programs. The decision to hold the first ACRP workshop on communicable disease occurred toward the end of the Ebola virus outbreak. ACRP has also issued reports on reducing communicable disease transmission at airports and aircraft.
These reports have provided information and guidance to airports and airlines on infectious disease mitigation onboard aircraft and ways to respond to a communicable disease in airports. For example, a 2013 ACRP report recommends reducing the amount of time aircraft ventilation systems are shut down at the gate, so that an aircraft’s high efficiency particulate air (HEPA) systems, which can capture more than 99 percent of the airborne microbes, continue to operate. ACRP also has a research project currently under way, for publication early next year, on effective collaboration to prevent, respond to, and mitigate disease threats. Prior to 2014, FAA also funded some research on disease transmission on aircraft through its Centers of Excellence research consortium. Specifically, in 2004, FAA established the Airliner Cabin Environment Research (ACER) Center of Excellence, which conducts research on, among other things, the safety and health of passengers and crew inside the cabin. In 2010 and 2012, ACER published research on air quality in airline cabins and disease transmission in aircraft. A researcher we interviewed who is affiliated with ACER said that the Center established a laboratory in 2006, called ACERL, which is currently conducting research on the dispersion of airborne particles (including viruses) in the aircraft cabin for CDC’s National Institute of Occupational Safety and Health. As of 2014, ACER began operating independently as a consortium of academia, government, and others and is no longer being funded solely by FAA. FAA and DOT principally look to HHS and the CDC for guidance on passenger health issues. HHS has statutory responsibility for preventing the introduction, transmission, and spread of communicable diseases into the United States and among the states. Within HHS, CDC has defined its mission as protecting America from health, safety and security threats, both foreign and domestic.
CDC alerts travelers about disease outbreaks and steps they can take to protect themselves. CDC also has the authority to quarantine passengers traveling from foreign countries, if necessary, to prevent the introduction, transmission, or spread of communicable disease. CDC’s National Institute for Occupational Safety and Health has conducted research and issued guidance in the past on disease transmission in aircraft and cabin crew health and, as previously noted, is funding current research through the ACER Center. CDC has also issued COVID-19 guidance for cabin crew safety.
Some Technologies Could Be Useful to Reduce the Risks of Communicable Disease in Air Travel
There are a variety of technologies that could help address infectious disease transmission associated with air travel, but these technologies are at various stages of maturity. For example, the initial screening of passengers for fevers is typically done with handheld infrared thermometers and has reportedly been discussed for use by Transportation Security Agents. Reports also state that the mass screening of crowds using thermal cameras has been used in some airports in Asia, but such scanners are still being tested for standalone use in the United States, with some concerns reported about the accuracy of the results. Aircraft disinfection has traditionally been done by cleaning crews, but a number of methods using heat, chemicals, and UV light are being developed and are under examination by researchers. Chairwoman Horn, Ranking Member Babin, and Members of the Subcommittee, this completes my prepared remarks. I would be pleased to respond to any questions that you or other Members of the Subcommittee may have at this time.
GAO Contact and Staff Acknowledgments
If you or your staff have any questions about this statement, please contact me at (202) 512-2834 or krauseh@gao.gov.
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Jonathan Carver, Assistant Director; Paul Aussendorf; Roshni Davé; Hayden Huang; Delwen Jones; Molly Laster; Cheryl Peterson; Gretchen Snoey; and Elizabeth Wood. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study
The transmission of COVID-19 has been greatly aided by air travel. In light of the pandemic and warnings about the risks of air travel, U.S. passenger airline traffic fell by 96 percent in April 2020 as compared to April 2019. COVID-19 is only the latest communicable disease threat to raise public health concerns regarding the spread of contagion through air travel. Ensuring that the United States is prepared to respond to disease threats from air travel, as well as conducting the necessary research to reduce the risks of contagion, are two vital responsibilities of the federal government. This statement provides information on (1) the U.S. aviation system's preparedness to respond to communicable disease threats and (2) FAA's management of its R&D portfolio, including the extent to which disease transmission on aircraft and at airports has been the focus of FAA research. This statement is based on GAO-16-127 issued in December 2015 and GAO-17-372 issued in April 2017. GAO conducted updates to obtain information on the actions agencies have taken to address these reports' recommendations.
What GAO Found
The United States still lacks a comprehensive plan for national aviation preparedness to limit the spread of communicable diseases through air travel. In December 2015 during the Ebola epidemic, GAO recommended that the Department of Transportation (DOT) work with relevant stakeholders, such as the Department of Health and Human Services (HHS), to develop a national aviation-preparedness plan for communicable disease outbreaks. GAO concluded that the absence of a national plan undermined the ability of the public-health and aviation sectors to coordinate on a response or to provide consistent guidance to airlines and airports. Moreover, Annex 9 to an international aviation treaty to which the United States is a signatory contains a standard that obligates member states to develop such a plan.
DOT is now confronting an even more widespread public health crisis—the Coronavirus Disease (COVID-19) global pandemic—without having taken steps to implement this recommendation. Not only could such a plan provide a mechanism for the public-health and aviation sectors to coordinate to more effectively prevent and control a communicable disease threat, it could also help minimize unnecessary disruptions to the national aviation system, disruptions that to date have been significant. Some aviation stakeholders have publicly highlighted the resulting piecemeal approach to adopting standards during the response to COVID-19, such as various airline and airport policies regarding facemasks, as demonstrating the need for a more coordinated response. The existence of a national plan might have reduced some of the confusion among aviation stakeholders and passengers. While DOT agrees that a national aviation preparedness plan is needed, the agency continues to suggest that HHS and the Department of Homeland Security have responsibility for communicable disease response and preparedness planning. GAO continues to believe that DOT is in the best position to lead this effort given its oversight responsibilities and ties with relevant aviation stakeholders. The Federal Aviation Administration (FAA) has sponsored limited federal research into disease transmission onboard aircraft and in airports. FAA's research goals focus on areas like improving airport operations and air space management, and developing new technologies, which FAA has aligned to DOT's strategic goals related to safety, infrastructure, and innovation. Based on prior work and interviews with FAA officials, GAO found that FAA's research in cabin safety for crew and passengers does not focus on disease transmission. 
For example, according to FAA officials, ongoing research that most closely relates to disease contamination is research related to monitoring the quality of “bleed air,” which is outside air that is drawn through jet engines into an aircraft cabin. In 2017, GAO found that FAA could be more strategic in how it develops its research and development (R&D) portfolio, chiefly in identifying long-term research needs and explaining how FAA selects projects. Of the three recommendations GAO made in that report to improve FAA's management of its R&D portfolio, FAA fully addressed one, issuing guidance in 2018 on prioritizing and selecting R&D projects. While FAA has made some progress addressing GAO's recommendations on research portfolio development and reporting, further attention to these recommendations could help ensure that FAA strategically identifies research priorities across the agency.
What GAO Recommends
GAO made several recommendations in its prior work, including that DOT develop a comprehensive national aviation-preparedness plan, and that FAA identify long-term R&D priorities, among other things. Progress has been made in addressing some of the recommendations. Continued attention is needed to ensure that the remainder of these recommendations are addressed.
gao_GAO-20-392
Background
In 1991, following the collapse of the Soviet Union, Congress authorized the President to establish the Nunn-Lugar Cooperative Threat Reduction (CTR) program to provide nuclear security assistance to Russia and the former Soviet states. At the time, there were significant concerns about Russia’s ability to maintain adequate security over its large numbers of nuclear weapons and vast quantities of weapons-usable nuclear materials. In 1995, DOE established the Material Protection, Control, and Accounting (MPC&A) program to equip Russia and other countries with modern nuclear material security systems and promote effective nuclear material security practices. The CTR umbrella agreement with Russia—which established an overall legal framework under which the United States would provide nuclear security assistance to Russia—expired in June 2013. Joint nuclear security activities in Russia, however, continued under a multilateral agreement and a related bilateral protocol. In December 2014, in response to U.S. sanctions over Russian actions in Ukraine, the Russian government ended nearly all nuclear security cooperation with the United States. Until then, the United States had been gradually transitioning responsibility to Russia for supporting its nuclear material security systems, and it was anticipated that the U.S. MPC&A program would continue to help Russia sustain its nuclear material security systems until January 1, 2018. See figure 1 for a timeline of major events during the period of cooperation. Starting with fiscal year 2015, and in each fiscal year since, language in annual appropriations laws and national defense authorization acts has largely prohibited NNSA from funding new efforts in Russia, including nuclear material security assistance, unless the prohibition is waived by the Secretary of Energy under certain conditions.
Russian Nuclear Material Sites and Structure of Relevant Russian Governmental Organizations
Russia’s weapons-usable nuclear materials are stored and processed at more than two dozen sites overseen by a number of Russian entities, and the MPC&A program’s focus was on 25 of these sites at the time of our last report in 2010. The Russian State Corporation for Atomic Energy (Rosatom) is the Russian agency that manages much of Russia’s nuclear security enterprise, including seven nuclear weapons complex sites located in closed cities. These sites store and process the nuclear materials used in Russia’s nuclear weapons. Of the other 18 sites, many are overseen by Rosatom, but some are independent of Rosatom or managed by other Russian government entities. These sites often hold HEU and plutonium for research reactors or for other civilian purposes. See figure 2 for the location of the 25 Russian nuclear material sites. Other Russian government organizations with responsibilities in nuclear security include the following:
Russian Ministry of Foreign Affairs (MFA). MFA is responsible for overseeing Russian policy and agreements for cooperation with the United States, including cooperation on nuclear security.
Russian Federal Service of Environmental, Technological, and Nuclear Supervision (Rostekhnadzor). Rostekhnadzor is the regulator responsible for Russia’s civilian nuclear facilities.
Russian Ministry of Industry and Trade (Minpromtorg). Minpromtorg coordinates nuclear material security activities and develops nuclear material security regulations for Russian naval shipbuilding sites, including Sevmash Shipyard, the primary builder of nuclear submarines for the Russian Navy.
Russian Ministry of Defense. DOD and NNSA supported Russian efforts to secure Russian Ministry of Defense nuclear warheads and strategic rocket sites. That work is outside the scope of this report.
NNSA’s Material Protection Control and Accounting (MPC&A) Program The MPC&A program was the primary NNSA program that worked with Russia to help improve Russia’s ability to secure its nuclear materials and its nuclear warheads. To secure Russia’s nuclear materials, the program consisted of three main efforts: Site-level projects. NNSA managed MPC&A projects at the 25 Russian nuclear material sites to upgrade security systems at those sites. Teams of specialists from across DOE’s national laboratories, referred to as U.S. project teams, identified and carried out MPC&A upgrades on behalf of NNSA. MPC&A includes the following types of security systems, among other things: physical protection systems, such as fences around buildings containing nuclear materials and metal doors protecting rooms where nuclear materials are stored; material control systems, such as seals attached to nuclear material containers to indicate whether material has been stolen from the containers, and badge systems that allow only authorized personnel into areas containing nuclear material; and material accounting systems, such as nuclear measurement equipment and computerized databases to inventory the amount and type of nuclear material contained in specific buildings and to track their location. Material control and material accounting are collectively known as material control and accounting. National-level projects. NNSA managed cross-cutting projects to enhance Russia’s national-level infrastructure to sustain MPC&A systems for nuclear materials, including enhancing Russian nuclear security culture, developing Russian regulations for MPC&A operations, and strengthening Russian inspection and oversight capabilities. Sustainability support for individual sites. 
NNSA also fostered development of MPC&A sustainability practices and procedures at the Russian nuclear material sites based on seven sustainability elements, such as the presence at the site of an effective MPC&A management structure that plans, implements, tests, and evaluates the site’s MPC&A systems.

NNSA Completed Many of Its Planned Nuclear Security Efforts in Russia, and Had Concerns about the Sustainability of These Efforts When Cooperation Ended

Based on our review of available NNSA documentation and interviews with project team personnel, we found that NNSA had completed many—but not all—site-level MPC&A projects at the 25 Russian nuclear material sites when cooperation ended in 2014. NNSA also made progress on 11 cross-cutting projects that were intended to improve Russia’s national-level nuclear material security infrastructure. In addition, NNSA made progress on supporting the ability of the 25 Russian sites to sustain nuclear material security efforts. However, at the time cooperation ended, NNSA still had a number of concerns about both the sustainability of nuclear security efforts at the 25 sites and the state of Russia’s national-level nuclear material security infrastructure.

NNSA Completed Many but Not All Site-Level Projects at the 25 Russian Nuclear Material Sites

Based on our review of available NNSA documentation and interviews with stakeholders, we determined that NNSA completed many MPC&A projects at the 25 Russian nuclear material sites, and stakeholders said that these upgrades significantly improved the state of nuclear material security at the sites. In particular, they told us that during the early years of the MPC&A program, the program completed upgrades focused primarily on the most significant security gaps, and in later years the program became more focused on transitioning the responsibility for sustaining nuclear security efforts to Russia.
However, not all work was completed before cooperation ended, and project team members told us that the extent of completion varied by site. For example, project team members estimated that 90 percent of MPC&A projects were completed at one site, but that other sites had lower levels of completion. NNSA was unable to provide a complete set of documents detailing all projects completed and not completed across the 25 sites because several projects were consolidated into continuing programs and have not yet been closed out. In addition, the available site documentation did not always include detailed information on all projects completed or not completed. As a result, we could not quantify how much planned work was completed and not completed when cooperation ended across all 25 sites. However, based on our review of available NNSA documents, we were able to identify many completed projects that included specific types of physical protection measures, material access controls, and material accounting upgrades. Project team members we interviewed and documentation we reviewed also indicated that some projects were not completed when cooperation ended. NNSA documentation identifies a variety of uncompleted projects at specific sites, such as not constructing or upgrading perimeter fencing, not replacing aging physical protection equipment, and not upgrading entry control points with vehicle radiation monitors. For example, at one site several kilometers of modernized perimeter fencing, guard towers, and sensors had not been completely installed by the time cooperation ended, according to NNSA documents and project team members. Project team members told us that the site had plans to complete these projects. However, because Russia ended cooperation, the project team was unable to verify that the equipment was installed or operating appropriately.
Similarly, project team members told us about two major efforts at another site that were terminated by Russia when cooperation ended: a $1 million project to relocate the guard force building to reduce the reaction time for protective forces and a $300,000 project to update software for the central alarm station and other security systems. According to project team members, the contracts were agreed to and associated costs obligated by NNSA, but Russia ended cooperation before the agreements were signed. In addition, in our 2010 classified report, we found that NNSA faced challenges in implementing MPC&A upgrades against insider and outsider threats at some Russian nuclear material facilities to reduce the risk of material theft. At the time of the 2010 report, NNSA had proposed MPC&A upgrades at certain Russian sites to address these concerns, and we found that progress in implementing upgrades at some locations and in some MPC&A technical areas had been limited. For our classified report issued in December 2019, we asked NNSA for an update on the status of these upgrades; in response to our request, NNSA officials told us that due to a lack of cooperation, they had not received additional information from Russian counterparts to determine the status of these upgrades.

NNSA Made Substantial Progress on Its Projects to Support Russia’s National-Level Nuclear Material Security Infrastructure, but Some Work Was Not Completed When Cooperation Ended

In addition to site-level MPC&A security projects, NNSA managed 11 cross-cutting projects to support Russia’s national-level nuclear material security infrastructure, such as projects to enhance Russian nuclear security culture, develop Russian regulations for MPC&A operations, and strengthen Russian MPC&A inspection and oversight capabilities. We found that—at the time cooperation ended in 2014—NNSA had made substantial progress on its cross-cutting projects.
NNSA reported that work was fully completed or mostly completed on at least 10 of the 11 cross-cutting projects by the time cooperation ended. However, NNSA could not provide complete documentation detailing the level of progress for some of these projects. See table 1 below for a description of these project areas. We found that NNSA had planned to do more work on some national-level projects, but that the end of cooperation in 2014 resulted in some planned work not being completed. For example, in the case of regulations development, project team members told us that the project teams had planned to develop numerous regulations with Rosatom, but these were not completed because of the end of cooperation.

NNSA Made Some Progress on Improving Sites’ Abilities to Sustain Security Efforts, but NNSA Had Remaining Concerns about Sustainability When Cooperation Ended

As part of its plan to shift to Russia the responsibility for nuclear material security efforts, NNSA supported the adoption of MPC&A sustainability practices and procedures at the individual Russian nuclear material sites based on seven “sustainability elements.” NNSA identified these elements, such as performance testing of systems to evaluate MPC&A effectiveness, as being fundamental to the long-term sustainability of a modern nuclear material security system. See table 2 below for more information about the seven sustainability elements. To determine a site’s ability to sustain its security systems, project teams periodically assessed each site based on the seven elements, and rated sites in each element on a scale from low to high. In our 2010 classified report, we reported the results of these sustainability assessments across the 25 Russian nuclear material sites and found that the MPC&A program had made limited progress and faced challenges in developing effective practices and procedures consistent with the seven elements of sustainability.
For our classified report issued in December 2019, we reviewed and reported on the most recent sustainability assessments, largely conducted between 2012 and 2014. We compared the ratings from the most recently completed site sustainability assessments for the same 25 sites to the ratings we reported in 2010. We found that sustainability ratings generally improved, but low scores persisted at many sites and in some sustainability areas. For example, we found that the number of high ratings increased over this period by about half, and the number of low ratings decreased by about half. We believe this indicates general progress in improving sustainability across the sites. Of the seven sustainability elements, the MPC&A organization sustainability element was the element most frequently rated as “high” in the most recent assessment, and it showed the most improvement across the 25 sites. This indicates that the ability of Russian site organizations to plan and coordinate MPC&A operations had improved. We also found in our review of these assessments that NNSA had continuing concerns when cooperation ended about both the sustainability of MPC&A upgrades at individual Russian sites and the state of the national-level nuclear material security infrastructure in Russia. In their final reports after cooperation ended, U.S. project teams documented ongoing concerns with the sustainability of MPC&A upgrades at Russian nuclear material sites. We reviewed the concerns in the 25 final site summary documents and interviewed project team members who provided additional examples of these concerns. Based on our documentation review and interviews with project team members, we identified the six most common areas of concern, including: (1) the responsiveness of protective forces, (2) performance testing the effectiveness of MPC&A systems, (3) sustainment funding, (4) physical protection systems, (5) nuclear security culture, and (6) access and cooperation at Russian sites.
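The before-and-after comparison described in this section, tallying how many assessment ratings fell at each level in 2010 versus the 2012 to 2014 round, can be sketched in a few lines of code. The ratings below are illustrative placeholders only (the underlying site-by-site assessment data are not public); they are sized so that high ratings rise by about half and low ratings fall by about half, mirroring the trend reported above.

```python
from collections import Counter

# Illustrative placeholder ratings only; the actual site-by-site
# assessment data from the MPC&A program are not public.
RATINGS_2010 = ["low", "low", "low", "low", "high", "high", "medium", "medium"]
RATINGS_2014 = ["low", "low", "high", "high", "high", "medium", "medium", "medium"]

def tally(ratings):
    """Count how many assessments fall at each rating level."""
    counts = Counter(ratings)
    return {level: counts.get(level, 0) for level in ("low", "medium", "high")}

before, after = tally(RATINGS_2010), tally(RATINGS_2014)
for level in ("low", "medium", "high"):
    delta = after[level] - before[level]
    print(f"{level:>6}: {before[level]} -> {after[level]} ({delta:+d})")
```

Run per sustainability element rather than over all ratings, the same tally would show which elements (such as MPC&A organization) improved most.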
Stakeholders we interviewed highlighted a number of national-level concerns in other areas, such as the state of security equipment. Project team members were concerned that some of the equipment provided in the early years of cooperation had become outdated or obsolete by the time cooperation ended, such as surveillance cameras and monitoring equipment, and would need to be replaced.

Little Information Is Available on Security at Russian Sites, but Nongovernmental Experts Raised Concerns about Insider Theft Risks

There is little specific information available about the current state of security at Russian nuclear material sites, though anecdotal evidence suggests that nuclear material security regulations have improved and that Russia funds some nuclear security efforts. We interviewed DOE officials and national laboratory personnel about security risks and threats to Russian nuclear material security. The details of these conversations are classified. However, according to nongovernmental experts we interviewed, the theft of nuclear materials by insiders is currently considered the greatest threat to Russia’s nuclear materials.

Little Specific Information Is Available about Nuclear Security at Russian Sites, but Some Information Exists on National-Level Regulatory Efforts and Security Funding

According to stakeholders, little information is available about site-level security currently at the 25 sites holding Russian nuclear material, including the status of U.S. upgrades funded through the MPC&A program. Stakeholders told us that this is primarily because U.S. personnel no longer have access to the sites to observe security improvements and discuss MPC&A practices with Russian site personnel. According to DOE officials, the ability of U.S.
project teams and other personnel to visit Russian nuclear material sites helped provide transparency into the state of Russian security at these facilities, such as the status of radiation portal monitors at entry points within nuclear material storage buildings. Since the end of cooperation, few U.S. personnel have visited Russia’s nuclear material sites, greatly limiting transparency into the status of U.S. security investments and Russian security practices. According to NNSA officials and U.S. project team personnel, NNSA documentation—such as the U.S. project team closeout documents referred to above—is based on observations primarily from 2014 or earlier. This documentation provides the most recent direct assessments of security at the site level. These officials stated that while such reports are useful for identifying the state of Russian nuclear material site security at the time cooperation ended, they likely do not provide an accurate picture of nuclear material security at the 25 sites currently. Regarding national-level efforts in Russia to support nuclear security in the country, stakeholders we interviewed said that information exists in two main areas: development of nuclear security regulations and nuclear security funding.

Development of nuclear security regulations. According to stakeholders, Russia has improved its nuclear security regulations in recent years, including since cooperation ended in 2014. Although U.S. efforts to help Rosatom develop modern MPC&A regulations ended in 2014, NNSA has continued work with Rostekhnadzor to improve Russian nuclear material security regulations through a national-level MPC&A sustainability project. Stakeholders said that this project has resulted in Russian nuclear security regulatory improvements. For example, this project provided technical support on 11 regulations, including regulations to improve vulnerability assessments of nuclear sites and nuclear materials in transit.
However, stakeholders also noted some limitations. For example, they stated that compliance with regulations at nuclear material sites is mostly unknown. Similarly, the effectiveness of enforcement in cases of noncompliance is unknown, though fines are thought to be negligible.

Nuclear security funding. Information on nuclear security funding is limited, according to stakeholders. Some stakeholders we interviewed stated that, based on their experiences and conversations with Russian officials, they believed that Russia was generally providing sufficient funding for nuclear material security at sites. However, others doubted that Russia was providing sufficient resources to replace the funding lost when the U.S. MPC&A program ended. Stakeholders generally agreed that funding for nuclear security likely varies by site. A few stakeholders expressed concern that security at nuclear material sites could be one of the first areas cut during an economic downturn, as nuclear security is not seen to be as significant a priority for site managers as other operations and revenue-generating activities at the sites.

Nongovernmental Experts Raised Concerns about Insider Theft Risks to Russian Nuclear Materials

We interviewed DOE officials and national laboratory personnel about security risks and threats to Russian nuclear material security. The details of these conversations are classified. However, according to nongovernmental experts we interviewed, the theft of nuclear materials by insiders is currently considered the greatest threat to Russia’s nuclear materials. According to nongovernmental experts we interviewed, Russia’s nuclear security culture generally does not prioritize protection against the threat of nuclear material theft by insiders, a threat that modern nuclear security systems are designed and maintained to prevent.
For example, experts said that Russian nuclear material site managers were more likely to devote resources—such as training, manpower, and funding— to measures that protect facilities from outsider threats, and less likely to devote resources to measures that protect facilities against insider threats. Experts told us that while the MPC&A program advanced Russian appreciation of the insider threat during the period of cooperation, they were concerned that—without U.S. influence and training—protection against insider threats would still be insufficient and likely ignored unless the Russian government required such protection, which was not the case when cooperation ended. As a result, according to experts, Russian sites are likely not currently supporting MPC&A systems adequately to counter insider threats. One nongovernmental expert noted that Russian security services have assumed greater control and tightened security in the closed cities that contain the vast majority of Russia’s nuclear materials, and that this may have reduced the near-term threat from insiders. However, according to this expert, over time this reliance on the security services could create vulnerabilities. For example, some Russian sites may rely too heavily on the physical security elements of nuclear security systems—such as guard forces—to protect nuclear materials and may become complacent in modernizing other elements, such as material control and accounting practices to deter and prevent insider theft risks, or measures that can protect against other emerging, nontraditional threats such as drone or cyber risks. According to nongovernmental experts, other factors in the country may also exacerbate the risk of theft posed by both outsiders and insiders to Russia’s nuclear materials. 
For example, experts said the existence of massive amounts of weapons-usable nuclear materials at many dispersed sites across Russia is the primary factor that makes Russia’s nuclear materials a greater threat than the nuclear materials held in most other countries. In addition, according to experts, persistent corruption and existing terrorist groups near some of the closed cities are other contributing factors that could further increase the risk of theft.

According to Stakeholders, Opportunities May Exist for Cooperation to Improve Russian Nuclear Material Security, but Such Cooperation Would Face Challenges

According to stakeholders, there could be opportunities to help Russia improve aspects of its nuclear security system that NNSA and others identified as continuing risks. However, stakeholders noted that any future cooperation would likely be limited in scope and would face considerable political challenges.

Future Cooperation Would Likely Be Limited but Could Still Help Address Remaining Nuclear Material Security Risks in Russia

According to stakeholders we interviewed, there could be opportunities for future U.S.-Russia cooperation to address some of the continuing nuclear security risks in Russia. However, stakeholders said that any future cooperation would likely differ dramatically from the donor-recipient model of the past MPC&A program. The Russian government would likely expect to be treated as an equal and would not want to be seen as a recipient of U.S. funds for infrastructure improvements. Therefore, the scope of future cooperation would likely be a limited partnership, would primarily involve training and information sharing rather than directly supporting security upgrades at Russian sites, and would require fewer U.S. resources than the past MPC&A program did.
Stakeholders told us that engagement and cooperation are important because of the size of the Russian nuclear complex, the large amounts of Russian nuclear material, and the continuing security concerns in certain areas. Stakeholders told us they believed there would be security benefits to the United States in resuming nuclear security cooperation with Russia in some form. Stakeholders generally identified increased transparency and advancing security best practices as the two main benefits to nuclear security cooperation. Stakeholders we spoke to identified examples of opportunities for cooperation that could support U.S. interests by providing information on the security of Russia’s nuclear materials and by helping Russia improve nuclear material security practices and procedures. These include the following:

Exchange of best practices. Stakeholders noted that the United States and Russia could share MPC&A best practices in conferences and workshops. Best practices could cover areas such as performance testing of MPC&A systems, insider threat protection, and material control and accounting. Some stakeholders said that Russian expertise, such as in nuclear forensics, could increase U.S. knowledge and potentially improve U.S. practices in certain areas.

Technical exchanges. Stakeholders told us that there could be benefits to both the United States and Russia from reciprocal technical exchanges or meetings of nuclear security experts to review specific, technical MPC&A practices that each country employs. National laboratory personnel noted that past exchanges under the MPC&A program allowed Russian personnel to view MPC&A systems at U.S. facilities, which helped Russian personnel understand the features of modern MPC&A systems, such as insider threat prevention measures. U.S. personnel participated in reciprocal visits to view security measures at sites in Russia, which helped them understand Russian security practices. Stakeholders told us that such technical exchanges could help U.S. personnel better understand the state of Russian nuclear security funding and current Russian practices.

Training. Experts and national laboratory personnel noted that training Russian personnel on technical matters—such as how to conduct comprehensive vulnerability assessments—could improve Russian security practices.

Conversations on legal agreements. Some stakeholders said that initiating conversations with Russia on the status of existing but suspended legal agreements could provide an opening for other forms of cooperation. For example, a few stakeholders mentioned an existing—but suspended—research and development agreement from 2013 under which future nuclear security cooperation might be pursued if both parties were interested in reactivating the agreement.

Cooperation within multilateral organizations. Some stakeholders noted that existing multilateral organizations, such as the International Atomic Energy Agency (IAEA) and the Global Initiative to Combat Nuclear Terrorism, could provide venues for the United States to pursue cooperative opportunities with Russia. For example, Russia and the United States could cooperate on developing recommendations to the IAEA on physical protection measures for nuclear material, which could then be shared with IAEA member states.

Other opportunities. The Nuclear Threat Initiative, a U.S. nongovernmental organization (NGO), and the Center for Energy and Security Studies, a Russian NGO, coauthored a report that identified 51 mutually beneficial opportunities to cooperate in nuclear security, nuclear safety, nuclear energy, nuclear science, and nuclear environmental remediation. For example, the report identifies an opportunity for Russian and U.S. experts to establish a joint research and development program to improve nuclear security technologies to address emerging threats to nuclear material storage sites, such as drones.
Russia would likely insist that it and the United States be seen as equal partners under any future arrangement or program for cooperation on nuclear security, according to stakeholders. However, U.S. project team personnel told us that Russian nuclear material sites often lack the financial resources to pay travel costs for Russian personnel or to cover costs for venues or workshops necessary for training or the exchange of best practices. Therefore, the level of funding to support any potential future cooperation might be disproportionate between the United States and Russia. Because we were unable to obtain views from Russian officials and Russian nuclear material site representatives, we were unable to establish the extent to which Russia would be willing to pursue any form of nuclear material security cooperation with the United States, regardless of funding sources and requirements.

Potential Cooperation Faces Significant Challenges

Stakeholders we interviewed were generally pessimistic about cooperation under the current political and diplomatic climate, and they noted that the deterioration of political relations is the most significant challenge to any future cooperation. Stakeholders identified other specific challenges, including the following:

Funding prohibition. Some stakeholders said that provisions in recent appropriations acts and National Defense Authorization Acts (NDAA) prohibiting NNSA from funding nuclear security activities in Russia have been obstacles to cooperating on nuclear security matters. In a report submitted to Congress in May 2019, NNSA stated that “the lack of ability to sign new contracts or engage on a modest scale denies NNSA the insights necessary to directly monitor nuclear material security in Russia and the sustainment of past security improvements.” According to U.S. officials and U.S. project team personnel, the prohibition largely prevents U.S. personnel from sharing best practices with and training Russian counterparts, and the existence of the prohibition discourages U.S. and Russian personnel from interacting and maintaining relationships. Although the acts allow the Secretary of Energy to waive the prohibition under certain conditions, no secretary has done so since a prohibition was first included in the fiscal year 2015 appropriations act. In addition, according to NNSA officials we interviewed, the language describing waiver requirements in NDAAs has become more restrictive in recent years. Initially, the Secretary of Energy could waive the prohibition on the basis of a notification to certain congressional committees that the waiver was in the national security interest of the United States, an accompanying justification, and the passage of 15 days. Starting with the fiscal year 2017 NDAA, however, a waiver can only be issued if it is necessary to address an urgent nuclear-related threat in Russia, and any such waiver requires concurrence from the Secretary of Defense and the Secretary of State.

Russian conditions on cooperation. Stakeholders we interviewed said that Russia has set conditions on any future nuclear security cooperation. For example, they said that Russia has indicated that it is unwilling to discuss nuclear security cooperation with the United States unless the United States is willing to discuss related areas, such as nuclear energy, nuclear safety, and nuclear science. According to stakeholders, in the past the United States has been unwilling to discuss these other areas as a condition for cooperating on nuclear security.

Russian antagonism to U.S. security efforts. Stakeholders noted antagonism at some levels of the Russian government toward U.S. nuclear security efforts. For example, although Russia participates in nuclear security efforts at the IAEA, some stakeholders noted that Russia regularly obstructs U.S. initiatives and recommendations in that organization.
As noted above, stakeholders view the general deterioration of political relations between the United States and Russia as the greatest challenge to cooperation, and it is not clear whether Russia is prepared to reengage with the United States on these or other options for rekindling U.S.-Russian nuclear security cooperation. We reached out to the Russian government to request meetings with Russian government officials and representatives of nuclear material sites who could provide Russian perspectives on efforts to secure Russia’s nuclear materials, the status of past U.S. nuclear material security investments, and potential opportunities for cooperation. The Russian government declined our requests to meet with these officials and site representatives. Therefore, without Russian perspectives on the likelihood of possible future cooperation, we were unable to determine whether changes to U.S. policy, such as lifting the funding prohibition, would have any meaningful effect on the status of nuclear security cooperation between the United States and Russia.

Agency Comments

We provided a draft of the classified version of this report to NNSA for review and comment. NNSA had no comments on the report. We are sending copies of this product to the Senate Armed Services Committee, the NNSA Administrator, and the Secretaries of Defense and State. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or trimbled@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II.
Appendix I: Objectives, Scope, and Methodology

This report (1) examines the extent to which the National Nuclear Security Administration’s (NNSA) planned nuclear material security efforts in Russia were completed when cooperation ended and what nuclear security concerns remained, (2) describes what is known about the current state of nuclear material security in Russia, and (3) describes stakeholder views on potential opportunities for future U.S.-Russian nuclear security cooperation. For all three objectives, we identified and interviewed relevant stakeholders, including U.S. government officials from NNSA, the Department of Energy (DOE), the State Department, and the Department of Defense; experts on Russian nuclear security from academia and nongovernmental organizations (NGO); and knowledgeable personnel at six U.S. national laboratories that supported U.S. nuclear security efforts in Russia, including personnel at Brookhaven National Laboratory, Lawrence Livermore National Laboratory, Los Alamos National Laboratory, Oak Ridge National Laboratory, Pacific Northwest National Laboratory, and Sandia National Laboratories. We identified the stakeholders by contacting government agencies and NGOs with nuclear security expertise and asking them to identify other knowledgeable stakeholders. We reached out to these other knowledgeable stakeholders and interviewed those who responded and were willing to speak with us. To identify nongovernmental experts, we compiled a list of individuals whom stakeholders identified as having expertise in the area of nuclear security in Russia. We also worked with a staff librarian to conduct an independent search of published literature to identify nongovernmental experts who had authored multiple publications related to Russian nuclear security.
In addition, to ascertain whether an individual should be considered a nongovernmental expert, we considered other information, such as invitations to speak at nuclear security panels, being an editor of nuclear security related journals, and relevant positions in academic and other nongovernmental institutions. We interviewed six nongovernmental experts who fit these criteria. To examine the extent to which NNSA’s planned nuclear material security efforts in Russia were completed when cooperation ended and what nuclear security concerns remained, we reviewed documents prepared by NNSA and the national laboratories for each of the 25 nuclear material sites in Russia where the United States worked previously with Russia to improve security. To identify NNSA sustainability programs at a national level, we reviewed GAO reports and NNSA project documentation. We also reviewed NNSA guidelines that detailed how project teams were to support and assess the ability of Russian sites to sustain their material protection, control, and accounting (MPC&A) systems. We reviewed the NNSA documents that assessed site sustainability and analyzed how site sustainability had changed at sites by the end of cooperation. These documents included project team assessments for each of the 25 sites in seven different sustainability elements. In these assessments, project teams provided ratings from low to high on the extent to which sites were prepared to sustain these areas. We also reviewed NNSA documents and identified concerns that site teams documented about site sustainability. We then analyzed the concerns from the 25 sites and grouped similar concerns into categories. We developed these categories based on the similarity of the concerns, definitions of key nuclear security areas in NNSA documents, and professional judgment. We then identified the six concerns that appeared most frequently, which accounted for about 70 percent of all concerns.
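The grouping-and-counting step described above can be sketched as follows. The category labels and tallies are hypothetical stand-ins for the coded concerns drawn from the 25 site summary documents (the real coding was an analyst judgment, not a computation); the counts are sized so that the six most frequent categories account for about 70 percent of the total, as in our analysis.

```python
from collections import Counter

# Hypothetical coded concerns; labels and counts are stand-ins chosen so
# the top six categories cover roughly 70 percent of all concerns.
coded_concerns = (
    ["protective forces"] * 10 + ["performance testing"] * 9 +
    ["sustainment funding"] * 8 + ["physical protection"] * 7 +
    ["security culture"] * 6 + ["site access"] * 5 +
    ["training"] * 4 + ["documentation"] * 4 + ["spare parts"] * 4 +
    ["equipment obsolescence"] * 3 + ["guard turnover"] * 3 +
    ["regulatory gaps"] * 1
)

counts = Counter(coded_concerns)          # frequency of each category
top_six = counts.most_common(6)           # six most frequent categories
share = sum(n for _, n in top_six) / len(coded_concerns)
print(f"Top six categories cover {share:.0%} of all coded concerns")
```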
To describe what is known about the current state of nuclear security in Russia—in addition to interviews with our stakeholder group—we reviewed U.S. government and open-source documents. Specifically, we reviewed reports from the International Panel on Fissile Materials, the Nuclear Threat Initiative, the National Academies of Sciences, and a national laboratory; articles on Russian nuclear security; and periodic reports on Russian nuclear security published by an expert independent consultant. In addition to general internet searches for published documents relating to Russian nuclear security and the MPC&A program, we conducted literature searches of published materials with assistance from a staff librarian; we excluded from our literature review any search results that were published prior to 2014 or were not related to nuclear material security in Russia. In addition to unclassified interviews with U.S. government officials on Russian nuclear material security, we received classified briefings from DOE. We requested threat and risk information relating to Russian nuclear material security from the Central Intelligence Agency, but we were not provided this information. To describe stakeholder views on potential opportunities for future U.S.-Russia nuclear security cooperation, we interviewed those in our stakeholder group identified above. We also reviewed administration plans and reports, including the National Security Strategy, the National Strategy for Countering Weapons of Mass Destruction Terrorism, and NNSA’s May 2019 Report to Congress describing NNSA’s funding of nuclear security improvements in Russia. To inform our understanding of the prohibition on NNSA’s expenditures on nuclear security in Russia, we reviewed laws since fiscal year 2015 that restricted relevant NNSA funding in some way. In addition, to obtain Russian perspectives on nuclear material security and past U.S. efforts, we requested—through the State Department and the U.S.
Embassy in Moscow—interviews with Russian officials at relevant Russian agencies and representatives at five Russian nuclear material sites. However, the Russian government declined our request to meet with these officials and representatives.

Appendix II: GAO Contact and Staff Acknowledgments

GAO Contact

David Trimble, (202) 512-3841 or trimbled@gao.gov

Staff Acknowledgments

In addition to the contact named above, William Hoehn (Assistant Director), Dave Messman (Analyst in Charge), and Dan Will made key contributions to this report. Antoinette Capaccio, Ellen Fried, Greg Marchand, Dan Royer, and Sara Sullivan also contributed to this report.
Why GAO Did This Study

Russia possesses the world's largest stockpile of weapons-usable nuclear materials, largely left over from the Cold War. These nuclear materials could be used to build a nuclear weapon if acquired by a rogue state or terrorist group. Starting in 1993, and for the next 2 decades, DOE worked with Russia to improve security at dozens of sites that contained these nuclear materials. In 2014, following Russian aggression in Ukraine and U.S. diplomatic responses, Russia ended nearly all nuclear security cooperation with the United States. The Senate report accompanying the Fiscal Year 2019 National Defense Authorization Act includes a provision for GAO to review NNSA's efforts to improve Russian nuclear material security. This report (1) examines the extent to which NNSA had completed its planned nuclear material security efforts when cooperation ended and what nuclear security concerns remained, (2) describes what is known about the current state of nuclear material security in Russia, and (3) describes stakeholder views on opportunities for future U.S.-Russian nuclear security cooperation. To address all three objectives, GAO interviewed U.S. government officials, personnel from DOE's national laboratories, and nongovernmental experts. In this report, GAO refers to all of these groups as stakeholders. GAO also reviewed relevant U.S. government plans, policies, and program documentation. GAO requested the opportunity to interview Russian officials and representatives at nuclear material sites for this review, but the Russian government denied this request.

What GAO Found

Over more than 2 decades starting in the early 1990s, the Department of Energy (DOE) and its National Nuclear Security Administration (NNSA) completed many of their planned efforts to improve nuclear material security in Russia, according to DOE documentation, U.S. government officials, and nuclear security experts.
These efforts, carried out primarily through NNSA's Material Protection, Control, and Accounting (MPC&A) program, included a range of projects to upgrade security at dozens of Russian nuclear material sites, such as the installation of modern perimeter fencing, surveillance cameras, and equipment to track and account for nuclear material. However, not all planned upgrades were completed before cooperation ended in late 2014. NNSA also completed many—but not all—of its planned efforts to help Russia support its national-level security infrastructure, such as by helping improve the security of Russian nuclear materials in transit. In addition, NNSA made some progress in improving each site's ability to sustain its security systems, such as by training Russian site personnel on modern MPC&A practices and procedures. NNSA documentation that GAO reviewed showed that by the time cooperation ended, Russian sites had generally improved their ability to sustain their MPC&A systems, but also that concerns remained. According to stakeholders, there is little specific information about the current state of security at Russian nuclear material sites because U.S. personnel no longer have access to sites to observe security systems and discuss MPC&A practices with Russian site personnel. However, stakeholders said there is some information on national-level efforts. Specifically, stakeholders said that Russia has improved regulations for some MPC&A practices, and there are signs that Russian sites receive funding for nuclear material security, though it is unlikely that Russian funding is sufficient to account for the loss of U.S. financial support. Regarding threats to Russia's nuclear material, nongovernmental experts GAO interviewed raised concerns about the risk of insider theft of Russian nuclear materials.
Experts stated that it is likely that Russian sites have maintained nuclear material security systems to protect against threats from outsiders, but it is unlikely that sites are adequately protecting against the threat from insiders. Stakeholders said that there may be opportunities for limited future cooperation between the two countries to help improve Russian nuclear material security. Such opportunities could include technical exchanges and training. These opportunities could provide the United States with better information about the risk posed by Russia's nuclear materials and could help address areas of concern, such as by training Russian personnel to help sites better address the insider threat. However, any potential cooperation faces considerable challenges, according to stakeholders, most notably the deterioration of political relations between the two countries. In addition, stakeholders said that cooperation is challenged by current U.S. law, which generally prohibits NNSA from funding nuclear security activities in Russia; by Russian antagonism toward U.S. proposals to improve nuclear material security internationally; and by Russian conditions for cooperation that the United States has not been willing to meet.
Background

Secret Service Areas of Responsibility and Organization

The Secret Service pursues two areas of responsibility simultaneously—protection and criminal investigations. The Secret Service’s Office of Protective Operations oversees the agency’s protective divisions, including the Presidential Protective, Vice Presidential Protective, and Uniformed Divisions. These divisions carry out permanent protective details and other protection-related assignments. Permanent protectees, such as the President and Vice President, have special agents permanently assigned to them from the Presidential Protective Division or Vice Presidential Protective Division. The Secret Service provides protection for the President, Vice President, and their families at all times. In fiscal year 2017, the Presidential and Vice Presidential Protective Divisions provided protection for 30 presidential and vice-presidential foreign trips in addition to providing protection for members of the President’s and Vice President’s families. The Uniformed Division protects certain facilities, including the White House and the Treasury Building, among others. Figure 1 illustrates an organizational chart of offices within the Secret Service. The Office of Investigations oversees the agency’s field activities, including investigations into crimes targeting the nation’s financial systems; surveys of locations a protectee may visit; investigations of threats to protected persons and facilities; and temporary support for protection. Figure 2 provides information about the components in the Office of Investigations. The Office of Investigations oversees the agency’s 21 international field offices and 141 domestic offices, consisting of 42 field offices, 60 resident offices, 13 resident agencies, and 26 domiciles.
Special agents in these offices conduct investigations to identify, locate, and apprehend criminal organizations and individuals targeting the nation’s critical financial infrastructure and payment systems. Figure 3 shows the locations of Secret Service’s domestic field offices, resident offices, and resident agencies.

Secret Service Investigations

Although the Secret Service was originally founded to investigate the counterfeiting of U.S. currency, the agency’s investigations now span a number of financial and computer-based crimes. Pursuant to 18 U.S.C. § 3056(b)(2), under the direction of the Secretary of Homeland Security, the Secret Service is authorized to detect and arrest any person who violates any of the laws of the United States relating to coins, obligations, and securities of the United States, including the investigation of the counterfeiting of U.S. currency. In addition, the Secret Service is authorized to identify, locate, and apprehend criminal organizations and individuals that target the nation’s critical financial infrastructure and payment systems. Secret Service special agents investigate financial crimes such as access device fraud (including credit and debit-card fraud); identity crimes and theft; business email compromise; bank fraud; and illicit financing operations. In addition, the agency investigates cybercrimes, including network intrusions, ransomware, and cryptocurrency, among other criminal offenses. The Secret Service also provides forensic and investigative assistance in support of investigations involving missing and exploited children. Finally, Secret Service special agents may investigate and make arrests for any offense against the United States committed in their presence, or any felony cognizable under the laws of the United States if they have reasonable grounds to believe that the person to be arrested has committed or is committing such felony.
For more information on the evolution of the Secret Service’s statutory authorities, see appendix III.

Secret Service Special Agent Career Progression and Pay

The Secret Service has established three phases for a special agent’s career, in which the special agent contributes to both investigative and protective operations—Phase 1: Career Entry/Field Office Assignment; Phase 2: Protective Assignment; and Phase 3: Post-Protective Field, Protection, or Headquarters Assignment. During Phase 1, after being hired and receiving 7 months of training, the special agent is assigned to a field office for at least 3 years, where the special agent performs investigations and participates in temporary protective assignments locally and away from the special agent’s home office. In Phase 2, the special agent is assigned for up to 8 years to a permanent protective detail or to one of the Secret Service’s specialty divisions, such as the Office of Strategic Intelligence and Information. In Phase 3, the special agent may return to a field office, serve in headquarters-based specialized roles, or continue permanent protection duty. Figure 4 illustrates the Secret Service’s special agent career progression model. Secret Service special agents are paid in accordance with the Office of Personnel Management’s general schedule, which determines the pay structure for the majority of civilian white-collar federal employees. In addition to standard pay under the general schedule, special agents are eligible for law enforcement availability pay (LEAP). The Law Enforcement Availability Pay Act of 1994, as amended, established a uniform compensation system for federal criminal investigators who, by the nature of their duties, are often required to work excessive and unusual hours. The purpose of LEAP is to provide premium pay to criminal investigators to ensure their availability for unscheduled work in excess of a 40-hour workweek based on the needs of the employing agency.
The LEAP Act authorized a 25 percent increase in base salary (LEAP premium pay) as long as specific requirements of the LEAP Act are met. Among these requirements is a condition that criminal investigators maintain an annual average of 2 or more unscheduled duty hours per workday. Federal employees under the general schedule are subject to caps on pay equal to the highest pay level in the general schedule. In recent years, legislation has been enacted to raise this pay cap for Secret Service special agents who, due to the high number of hours they worked, were not otherwise compensated for all hours worked. In 2016, the Overtime Pay for Protective Services Act of 2016 authorized any officer, employee, or agent employed by the Secret Service who performs protective services for an individual or event protected by the Secret Service during 2016 to receive an exception to the limitation on certain premium pay within certain limits. The Secret Service Recruitment and Retention Act of 2018 extended the Secret Service-specific waiver of the pay cap for basic and premium overtime pay through 2018 and included agents within the Secret Service Uniformed Division. Subsequently, the Secret Service Overtime Pay Extension Act extended the Secret Service-specific waiver through 2020.

Office of Investigations Generally Supports Protection, but Has Not Identified Investigations That Best Prepare Agents for Protection

The Office of Investigations Supports Protective Operations in Numerous Ways

The Secret Service’s Office of Investigations supports protective operations in a variety of ways. According to our analysis of Secret Service data, special agents assigned to the Office of Investigations expended 11.2 million hours supporting protective operations during fiscal years 2014 through 2018. These 11.2 million hours accounted for 41 percent of all protection hours recorded by Secret Service law enforcement personnel during that period.
Figure 5 shows the number of hours Secret Service law enforcement personnel expended on protection, including the percentage expended by special agents in the Office of Investigations.

Protective Operations Tasks

The Office of Investigations conducts numerous tasks in support of protective operations, including temporary protective assignments, protective intelligence investigations, and critical systems protection.

Temporary protective assignments. When a Secret Service protectee travels, special agents in the Office of Investigations carry out numerous tasks, on a temporary basis, to assist the agency’s protective operations. These special agents facilitate preparations for a protectee visit and safeguard locations. For example, special agents may review the vulnerabilities of a site, conduct motorcade route planning, and coordinate with special agents on the permanent protective detail and with state and local law enforcement. In addition, these special agents provide physical protection when the protectee arrives. Special agents assigned to the Office of Investigations also travel to provide temporary protection and assist during presidential campaigns and National Special Security Events. During presidential campaigns, these special agents may accompany certain presidential candidates and their family members to provide 24/7 protection, and may also work on advance teams that provide site security for campaign events.

Protective intelligence investigations. The Office of Investigations assists with the agency’s protective intelligence efforts by investigating threats against protected persons, including the President, and protected facilities, such as protectee residences. According to a senior Secret Service official, special agents in the Office of Investigations locate, interview, and monitor individuals that make threats to a protectee. In fiscal year 2018, the Secret Service opened 2,011 protective intelligence investigations.

Critical systems protection.
The Critical Systems Protection program identifies, assesses, and mitigates risk posed by information systems to persons and facilities protected by the Secret Service. The program is coordinated by special agents in the Office of Investigations, and according to a senior Secret Service official, the program draws on the investigative experience that special agents have developed in the Office of Investigations. For example, the official told us that, through the Critical Systems Protection program, the agency may monitor electronic systems that could be compromised in a hotel where a protectee is staying.

Additional Ways the Office of Investigations Benefits Protection

The Office of Investigations can provide other benefits to protective operations, such as providing support during periods of increased protection demand and, according to special agents we interviewed, developing relationships with local law enforcement that assist with protective operations. Below are examples of these potential benefits.

Support during periods of increased protection demand. The Office of Investigations can shift the focus of its special agents from investigations to protection during periods of increased protection demand. For example, according to Secret Service officials, in fiscal year 2016, the Office of Investigations shifted special agents from criminal investigations to help meet the additional protection demands of the 2016 presidential campaign. As shown in figure 6, in fiscal year 2014 special agents assigned to the Office of Investigations spent 52 percent of their time on investigations and 39 percent on protection. These percentages shifted to 31 percent on investigations and 58 percent on protection in fiscal year 2016. Secret Service officials told us that the percentage of hours that special agents spent on protection remained elevated after fiscal year 2016 due to protection demands associated with the President and his family.
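The reallocation described above is simple to tabulate. A minimal sketch follows (the investigation and protection shares are taken from the figure 6 values cited in the text; treating the unlisted remainder of each year's hours as a single residual category is an assumption for illustration only):

```python
# Share of Office of Investigations special agent hours, by fiscal year,
# as cited from figure 6. The "other" residual (e.g., training, leave,
# administrative time -- an assumed interpretation) rounds each year to 100.
shares = {
    2014: {"investigations": 52, "protection": 39},
    2016: {"investigations": 31, "protection": 58},
}

for year, s in shares.items():
    s["other"] = 100 - s["investigations"] - s["protection"]

# Protection absorbed 19 more percentage points of agent time in
# fiscal year 2016 than in fiscal year 2014.
protection_swing = shares[2016]["protection"] - shares[2014]["protection"]
print(protection_swing)  # 19
```

The residual works out to 9 percent in fiscal year 2014 and 11 percent in fiscal year 2016, so nearly all of protection's 19-point gain came out of investigative time.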
Pre-established state and local relationships. Resources and support from local law enforcement are needed for the Secret Service to carry out its protective operations, according to senior Secret Service officials. In our interviews with 40 current and former special agents, 38 reported that Secret Service personnel develop relationships with state and local law enforcement while conducting investigations, and that these relationships can benefit protective operations. Twenty-two special agents noted that contacts with state and local law enforcement are pre-established as a result of the agency’s investigative operations. Twenty special agents reported that assets or resources are more readily provided by their state and local partners because of the relationships they have built. In addition, special agents said that relationships developed with state and local law enforcement either are necessary for (11 special agents) or improve (8 special agents) the Secret Service’s protective activities. This is consistent with our prior reporting on the topic. Specifically, in our February 2016 review of Secret Service field offices, we reported that special agents in each of the 12 domestic offices we interviewed emphasized that it would not be possible to protect visiting dignitaries without extensive assistance from state and local law enforcement partners. For example, state and local law enforcement partners may provide equipment such as helicopters, vehicles, and communication equipment during dignitary visits.

Supports employee retention and work-life balance. Secret Service officials told us that special agents generally cannot work protective assignments for their entire career, and that investigations help support a more reasonable work-life balance for special agents.
A senior Secret Service official explained that protective assignments require a high level of readiness and threat consciousness, which can lead to significant psychological stress that cannot be sustained for a 25-year career. Another Secret Service official told us that some special agents can spend 100 or 200 nights away from home per year on protective assignments, and that some special agents do not want to work on protection full-time. Seventy-five percent (30 of 40) of the special agents we interviewed reported that their work-life balance is better while working on an investigation versus a protective assignment. For example, 18 special agents reported that investigative operations have more normal working hours than protective operations. Special agents also reported that working protective operations requires that they spend more time away from home than investigations (12 special agents) and requires a work schedule dictated by someone else’s (i.e., the protectee’s) schedule (14 special agents).

Most Special Agents We Interviewed Reported That Investigative Responsibilities Did Not Negatively Affect Protection, but Some Highlighted Multitasking Difficulties

Most special agents we interviewed did not report any instances where they were unable to fulfill a protective assignment due to investigative demands. Of the 40 special agents we interviewed, 35 said there had never been an instance in which they were unable to fully execute a protection-related assignment as a result of their investigative responsibilities. The five special agents who said there were instances in which they could not personally serve in an assignment reported an issue related to staffing. For example, a special agent would have been assigned to a temporary protective activity, but they already had an investigative commitment (e.g., serving as a trial witness).
According to Secret Service officials, in these instances special agents are replaced before the protective assignment begins, and thus, there is no negative effect on protective operations. During the course of our interviews, 23 special agents said that during the last two years they frequently or sometimes were required to work on investigations while they were assigned to temporary protective operations. Examples provided by these special agents included working on investigations during protective shifts, before and after protective shifts, and during breaks to pursue investigative leads and respond to U.S. Attorneys. Additional examples associated with this topic are sensitive and have been omitted from this report. These statements are consistent with those expressed in an August 2016 report assessing quality-of-life issues at the Secret Service.

While Investigations Can Help Special Agents Develop Skills for Protection, Secret Service Has Not Identified Which Specific Investigative Activities Best Prepare Special Agents for Protective Assignments

Senior Secret Service officials told us that investigations can help prepare Phase 1 special agents for the protective responsibilities required in Phase 2 of their career, which includes an assignment to a permanent protective detail or a specialty division (e.g., counter-assault team). However, the agency has not identified which types of investigations and related activities best prepare special agents for Phase 2, or established a framework to help ensure Phase 1 special agents work on such cases and activities to the extent possible. As described earlier, special agents typically start their careers as Phase 1 special agents in a field office, and work on criminal investigations. Twenty-six of the 40 current and former special agents we interviewed reported that investigations are important in developing the skills necessary for protective assignments.
Special agents we interviewed offered examples of skills developed, such as communication, interviewing, and operational planning skills; greater attention to detail; and experience working with law enforcement partners. Special agents further stated that certain types of investigations can offer more skill development opportunities than others. For example, 18 special agents we interviewed reported that working on protective intelligence cases can help prepare special agents for protective operations. A senior official in the Office of Protective Operations agreed, and told us that experience with protective intelligence investigations allows special agents to gain insight into both the protectees and the threats against them. In addition, six special agents identified cyber investigations as helping prepare special agents for protective operations. However, 15 special agents identified at least one type of Secret Service investigation that they said does not help develop protection skills. For example, nine special agents said financial crime investigations (e.g., credit card fraud) are not helpful in preparing special agents for protection. As one special agent described, the skills developed from financial investigations do not translate to protection. Similarly, five special agents said that investigations into counterfeiting are not helpful in preparing special agents for protection. The Secret Service’s December 2017 Office of Investigations Priorities and Roadmap states that the office must continually look to identify areas where the expertise it has developed for investigative purposes can be leveraged to advance the Secret Service’s ability to perform its protective responsibilities.
In addition, consistent with Standards for Internal Control in the Federal Government, effective management of the Secret Service’s workforce is essential to achieving results, as is continually assessing knowledge, skill, and ability needs of the organization, and establishing training aimed at developing and retaining employee knowledge, skills, and abilities to meet changing organizational needs. Further, according to leading management practices related to training and development efforts, adequate planning allows agencies to establish priorities and determine the best ways to leverage investments to improve performance. However, Secret Service officials told us the agency has not identified which of its current types of criminal investigations and related activities best prepare special agents for protective responsibilities, nor has it established a framework to help ensure that Phase 1 special agents gain experience in those areas to the extent possible. According to Secret Service officials, a list of investigative experiences beneficial to protective assignments existed in the past; however, the list is no longer used in practice and a copy of the list no longer exists. Special agents we interviewed reported that certain types of investigations (e.g., protective intelligence investigations) are more helpful than others in preparing them for protective assignments. Secret Service officials agreed that identifying the types of investigations and activities that best prepare special agents for protective responsibilities, as well as developing a framework to help ensure Phase 1 special agents have the opportunity to work on such cases to the extent possible, could help better prepare their special agents for the protective responsibilities required in Phase 2 of their careers. 
In addition, a framework could better support the Secret Service’s protective operations by focusing Phase 1 training on building skills needed for successfully executing protective responsibilities. It could also help make Phase 1 special agents more readily available to assist the agency when faced with a surge in protective responsibilities.

Secret Service and Selected Federal Agencies Investigate Similar Financial Crimes, Which Federal Prosecutors We Interviewed Reported to Be Beneficial

Types of financial crimes most often prosecuted by U.S. Attorneys based on Secret Service referrals during fiscal years 2014 through 2018 were similarly investigated by four additional federal law enforcement agencies, including the FBI, Homeland Security Investigations, IRS Criminal Investigation, and the U.S. Postal Inspection Service. As shown in figure 7 below, the selected agencies served as lead investigators in a total of 14,669 prosecuted cases across six financial crimes offense types during fiscal years 2014 through 2018, with the Secret Service serving as the lead on 31 percent (4,620) of the cases. The Secret Service served as the lead investigating agency on more counterfeiting and forgery, identity theft, and aggravated identity theft cases prosecuted by U.S. Attorneys than any of the other selected law enforcement agencies during fiscal years 2014 through 2018. For example, the Secret Service served as the lead investigative agency on 1,368 counterfeiting and forgery cases that were prosecuted during this time period, while the FBI led 66 cases and IRS Criminal Investigation led six cases that were prosecuted (see figure 7). Although the Secret Service was the lead investigative agency on the vast majority of counterfeiting and forgery prosecutions compared to the selected agencies, some types of cases were more evenly divided among the selected agencies. For example, between 2014 and 2018, U.S.
Attorney’s Offices prosecuted 608 aggravated identity theft cases for which the Secret Service was the lead investigating agency, while the FBI led 484 prosecuted cases, the U.S. Postal Inspection Service led 454 prosecuted cases, and IRS Criminal Investigation led 383 prosecuted cases. All 12 of the federal prosecutors we interviewed told us that the benefits of the Secret Service and selected agencies investigating similar crimes outweigh the drawbacks. These prosecutors highlighted the following three benefits: (1) additional staff resources; (2) agency-specific expertise; and (3) value added by having agencies work together on cases. For instance, three federal prosecutors we interviewed said that the occurrence of financial and cybercrimes in their district was pervasive, and that the number of criminal complaints they received far exceeded the number of federal agents available to investigate. With regard to agency-specific expertise, one federal prosecutor noted that although multiple agencies may conduct counterfeiting investigations, the Secret Service has expertise in this area that is appreciated by local businesses, such as casinos. Finally, agency collaboration can benefit criminal investigations, as in a June 2018 case in which the Department of Justice announced a coordinated effort to disrupt schemes designed to intercept and hijack wire transfers from businesses and individuals. The effort included an investigation by the Secret Service and the FBI in which 23 individuals were charged in the Southern District of Florida with laundering at least $10 million. In addition, although the Secret Service and selected federal agencies can investigate similar crimes, federal prosecutors told us that federal agencies prioritize different types of crimes or cases.
For example, 11 federal prosecutors told us that the Secret Service was the only agency that referred counterfeiting cases to their district, and six federal prosecutors said the Secret Service was the only agency that referred protective intelligence or threat cases. Further, according to senior FBI officials, the FBI generally investigates large-scale financial crimes. The Secret Service, on the other hand, may be willing to investigate financial crimes with smaller losses than the FBI, according to senior FBI officials and two federal prosecutors we spoke with. Table 1 below includes the mission and investigative priorities of the Secret Service and selected federal agencies. Although nine of 12 federal prosecutors we interviewed stated that there are no drawbacks to the Secret Service investigating crimes similar to those investigated by selected federal agencies, two of the 12 prosecutors and one federal agency official identified drawbacks related to deconfliction and case assignment. Specifically, one prosecutor told us that, in the past, there was a greater need for deconfliction between the Secret Service and the FBI, but that deconfliction had not been an issue in the last 18 months. In addition, FBI officials in one field office told us that although the Secret Service and the FBI generally coordinated and worked well together, there were instances in which they could have deconflicted earlier in an investigation. Another federal prosecutor told us that it may be difficult to determine which federal law enforcement agency should be assigned an investigation because, in the early stages, the federal prosecutor's office may lack adequate case information to know which agency would be best positioned to conduct it.
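As a minimal arithmetic check, the lead-agency share reported with figure 7 follows directly from the case counts cited above:

```python
# Reproducing the Secret Service's lead-agency share from the figure 7
# case counts: 14,669 prosecuted cases across six offense types, 4,620
# of them with the Secret Service as lead investigative agency.
total_cases = 14_669
usss_lead_cases = 4_620

share_percent = round(100 * usss_lead_cases / total_cases)  # 31 percent
```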
Secret Service Developed a Plan to Combat Priority Criminal Threats, but Does Not Know the Extent to Which Resources Are Dedicated to Each Priority

Secret Service Has Defined Priority Criminal Threats, but Lacks a Documented Process to Consistently Ensure Resources Align with these Priorities

In December 2017, the Secret Service released the Office of Investigations Priorities and Roadmap (Roadmap). The Roadmap states that fiscal constraints require that the agency prioritize its efforts and take steps to ensure that resources are aligned with its criminal investigative priorities. It further states that the Secret Service will align enterprise-wide investigative activities from independent or uncoordinated cases into a systematic, well-prioritized, and targeted operation to counter the networks of transnational criminals that present risks to financial and payment systems. Towards this effort, the Roadmap states that the Office of Investigations will “counter the most significant criminal threats to the financial and payment systems of the United States through criminal investigations,” and that these investigations will focus on three priority criminal threats:

- Criminal activity with significant economic and financial impacts to the United States.
- Criminal activity, such as cybersecurity threats, that operate at scale and present emergent or systemic risks to financial and payments systems.
- Transnational criminal activity involving corruption, illicit finance, fraud, money laundering, and other financial crimes.

To implement the Roadmap, the Office of Investigations was to identify investigative targets, such as specific criminal networks or activities, and develop campaign plans for each investigative target. As described in the Roadmap, the campaign plans were to synchronize the efforts of the Secret Service to counter the targets. They were also to identify government and non-government partners for countering investigative targets.
In addition, the campaign plans to counter the most significant criminal threats to the financial and payment systems of the United States were to be reviewed, updated, discontinued, or newly developed on an annual basis. The Secret Service has not, however, employed the practices identified in the Roadmap because, according to Office of Investigations officials, the approach outlined in the Roadmap is not beneficial given the dynamic nature of the crimes they investigate. Instead of identifying investigative targets based on the most significant threats on a yearly basis and developing campaign plans for each target as originally planned, Secret Service officials report that their Global Investigative Operations Center helps identify individual cases with national significance and coordinate the resources necessary to investigate these cases throughout the year. In addition, every two weeks Office of Investigations leadership meets with field office management to discuss their significant cases, including the resource demands of these cases. However, available documentation does not consistently demonstrate synchronized efforts across the agency to counter investigative targets, as envisioned in the Roadmap. This is in part because the process for identifying cases with national significance and coordinating related resources is not documented. The Office of Investigations provided us with campaign plans it developed since the Roadmap was released, and based on our review, there were inconsistencies in the type of information provided. For example, one campaign plan identified gas station pumps that may have been compromised by skimming devices—that is, devices that steal credit card-related information. The plan also identified field offices responsible for executing investigations of the gas pumps, timeframes for the investigations, and potential partners.
A different campaign plan was an informational alert regarding business email compromises, including details about how the attacks are executed and examples of the information attackers attempt to steal. However, this plan did not identify offices responsible for combatting the attacks, timeframes, or potential partners. Nor did it specify what resources would be necessary to combat the identified threat. The Roadmap states that fiscal constraints require the Secret Service to prioritize its efforts and take steps to ensure that resources are aligned with its priorities. This is consistent with the recommendation of an independent panel established by the Secretary of Homeland Security to assess the Secret Service's operations, which in 2014 recommended that the Secret Service “clearly communicate agency priorities, give effect to those priorities through its actions, and align its operations with its priorities.” Further, Standards for Internal Control in the Federal Government requires that management implement control activities through policies and define objectives clearly. This involves clearly defining what is to be achieved, who is to achieve it, how it will be achieved, and the time frames for achievement. Documenting a process to ensure the Office of Investigations dedicates resources to priority criminal threats can assist the Secret Service in combatting these threats and ensuring that resources align with its priorities. In addition, a documented process can help ensure that plans for addressing priority criminal threats consistently include key information, such as the offices responsible for combatting specific priority criminal threats, timeframes for actions to be taken, potential partners, and the resources necessary to combat the identified threat.
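The key elements that, per the report, every plan should contain can be sketched as a simple record with a completeness check. This is our own illustrative structure, not a Secret Service data format; every field and office name below is hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class CampaignPlan:
    """Hypothetical record of the key information a campaign plan should hold."""
    threat: str                     # priority criminal threat being addressed
    responsible_offices: list = field(default_factory=list)  # executing field offices
    timeframe: str = ""             # when investigative actions are to occur
    partners: list = field(default_factory=list)   # government/non-government partners
    resources: list = field(default_factory=list)  # resources needed to combat the threat

    def is_complete(self) -> bool:
        # A plan is complete only when every key element is filled in.
        return all([self.threat, self.responsible_offices,
                    self.timeframe, self.partners, self.resources])

# The gas-pump skimming plan described above would pass such a check;
# the business email compromise alert, which omitted offices,
# timeframes, partners, and resources, would not.
skimming_plan = CampaignPlan(
    threat="compromised gas station pumps (skimming devices)",
    responsible_offices=["Field Office A"],  # hypothetical office name
    timeframe="FY 2018 Q2",                  # hypothetical timeframe
    partners=["state and local police"],
    resources=["special agent hours"])
bec_alert = CampaignPlan(threat="business email compromise")
```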
Secret Service Lacks Data to Determine the Level of Resources Dedicated to its Priority Criminal Threats

The Roadmap identifies three priority criminal threats to the U.S. financial and payment systems. However, according to Secret Service officials, the agency does not have a process for identifying cases that address these priority criminal threats, nor does it collect data on the related expended resources. Secret Service officials told us they maintain a significant case database, which holds information about individual cases that field office management determine to be significant. However, Secret Service officials told us the significant case database does not currently have the capability to identify whether a case addresses one of the three priority criminal threats, and acknowledged that the criteria for a significant case differ from the criteria for a priority threat outlined in the Roadmap. For example, as stated in the significant case database guidance, “significant cases are those that represent a significant economic or community impact, as well as those that involve multi-jurisdictional districts or schemes that employ emerging technologies.” However, as described earlier in this report, the Roadmap identifies three priority criminal threats, one of which is described as “criminal activity, such as cybersecurity threats, that operate at scale and present emergent or systemic risks to financial and payments systems.” Standards for Internal Control in the Federal Government states that relevant, reliable, and timely information is needed throughout an agency in order to achieve its objectives. However, the Secret Service does not have a systematic process for identifying cases that address priority criminal threats or the related expended resources, according to agency officials.
As a result, Office of Investigations management and senior Secret Service officials lack complete information on the number of criminal investigations and the amount of resources expended agencywide to investigate the agency's priority criminal threats. Until the agency identifies investigations that address each priority criminal threat and the related resources, Office of Investigations management and senior-level Secret Service officials will not know the extent to which operations are aligned with the stated priorities. Capturing and analyzing these data could help inform future decisions on how to allocate resources for addressing priority criminal threats.

The Office of Investigations Special Agent Staffing Model Does Not Account for Compensation Limits When Estimating Staffing Needs

Since 2017, the Office of Investigations has employed a staffing model to determine how many special agents are necessary to sustain protective and investigative operations in its field offices. The staffing model takes into account the number of hours special agents are expected to work under LEAP and standard overtime, but does not consider annual caps on federal employee salaries. According to the Secret Service's Human Capital Strategic Plan for Fiscal Years 2018 through 2025, the special agent staffing model is used to analyze the protective workload of the field offices. In addition, the plan stated that the model is used to determine the appropriate levels of investigative and intelligence output while keeping travel and overtime at “tolerable levels.” To fulfill the requirements to qualify for LEAP, Secret Service special agents regularly work a 10-hour day, inclusive of 2 hours of LEAP premium pay, for an annual total of 520 hours beyond the standard work year of 2,080 hours. The Office of Investigations staffing model also assumes special agents will work an estimated 200 hours of standard overtime, among other hours.
As a result, the staffing model assumes that each special agent will work an estimated 2,600 hours per year (see figure 8). However, if certain special agents work the hours projected under the staffing model, they may not be compensated for all of their work time because they may exceed the annual caps on federal employee salaries. For example, in calendar year 2018, using the Secret Service's pay scale for the Washington, D.C. metro area, the standard pay cap was $164,200. Special agents at pay grade GS-13 Step 9 would have lost compensation if, in addition to their regular hours, they worked 520 hours of LEAP and 200 hours of standard overtime (see table 2). Special agents at pay grade GS-14 Step 6 would have lost compensation if, in addition to their regular hours, they worked 520 hours of LEAP alone. Although legislation was enacted in recent years to address compensation for Secret Service special agents by temporarily raising the pay cap, special agents at higher pay levels may still exceed the temporary pay cap under the current staffing model. For instance, under the temporary cap implemented for fiscal years 2017 and 2018, special agents at the GS-15 Step 5 pay grade would have been uncompensated for some hours if they worked the hours projected under the staffing model (see table 2 for additional details). According to data received from the Secret Service, some special agents did work time that was uncompensated despite the pay cap waivers. In each of calendar years 2016 through 2018, between 8 and 80 special agents assigned to the Office of Investigations worked some hours without being compensated for their time, resulting in more than $1 million in total lost wages (see table 4). Without the pay cap waiver, between 426 and 819 special agents would have worked some hours without being compensated, which would have resulted in a total of $15.4 million in lost wages (see table 3 for more details).
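The pay-cap effect described above can be illustrated with a simplified calculation. This sketch is ours, not the Secret Service's model: it ignores biweekly caps and the statutory overtime-rate rules, and the dollar inputs are hypothetical placeholders rather than actual GS rates.

```python
# Illustrative sketch of how hours projected under a staffing model can
# exceed an annual pay cap. The flat hourly overtime rate and the basic
# pay figures are simplifying assumptions, not real pay rules.

HOURS_PER_WORK_YEAR = 2_080  # standard federal work year (hours)

def projected_compensation(basic_pay, overtime_hours=200, leap_rate=0.25):
    """Basic pay + LEAP premium (25% of basic pay) + standard overtime."""
    hourly_rate = basic_pay / HOURS_PER_WORK_YEAR
    return basic_pay + basic_pay * leap_rate + overtime_hours * hourly_rate

def uncompensated_amount(basic_pay, pay_cap, **kwargs):
    """Projected earnings above the cap are hours worked but not paid."""
    return max(0.0, projected_compensation(basic_pay, **kwargs) - pay_cap)

# Under these assumptions, a hypothetical agent with $140,000 in
# locality-adjusted basic pay, measured against the $164,200 standard
# 2018 cap cited above, would work some hours without compensation,
# while one at $100,000 would not.
lost = uncompensated_amount(140_000, pay_cap=164_200)
no_loss = uncompensated_amount(100_000, pay_cap=164_200)
```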
Due to the limits on special agent compensation, the Office of Investigations' special agent staffing model currently plans for individuals to work hours for which they cannot be compensated. Without adjusting its staffing model to ensure compensation limits are accounted for when estimating staffing needs, certain Secret Service special agents will continue to be under-compensated for their work. Additionally, the Secret Service-specific waiver does not apply after 2020, at which point special agents in the Office of Investigations may further exceed the pay caps and work some hours without compensation. Standards for Internal Control in the Federal Government states that management should design control activities to achieve objectives and respond to risks, such as those related to the management of human capital and the entity's workforce. Internal control standards also call for the consideration of excessive pressures, noting that excessive pressure can result in personnel “cutting corners” to meet established goals, and that management can adjust excessive pressures using tools such as rebalancing workloads. The standards further state that management should recruit, develop, and retain competent personnel to achieve the entity's objectives. Retention can be pursued by, among other things, providing incentives to motivate and reinforce expected levels of performance and desired conduct among staff. Working long hours without being fully compensated may cause special agents to be less focused when providing protection or to seek employment elsewhere. Because the Secret Service's staffing model does not consider maximum pay cap allowances, the Secret Service will continue to overestimate the number of hours each special agent should work and underestimate the number of staff needed to meet its workload demands. In addition, maximum pay cap allowances are subject to change if legislation does not continue to increase them on an annual basis.
As a result, absent developing an updated staffing model that accounts for compensation limits and using that model to estimate staffing needs, the Secret Service risks special agents continuing to work some hours without compensation, and continuing to underestimate its staffing needs.

Conclusions

The Secret Service plays a critical role in safeguarding both the leadership of the United States and its financial resources. The Secret Service's Office of Investigations provides valuable support to its protective operations, such as by conducting protective intelligence investigations, building special agents' protection skills, and allowing the agency the flexibility to shift special agents from investigations to protection in campaign years and other protection-heavy periods. However, the Secret Service could better leverage its investigative responsibilities for supporting protective operations by identifying the types of investigative activities that best prepare special agents for protection, and developing a framework to help ensure special agents participate in those activities to the extent possible. In addition, selected federal prosecutors reported that the Secret Service's financial investigations are helpful to the law enforcement community as a whole, bringing specialized expertise to investigations and complementing investigations performed by other federal law enforcement agencies. However, although the Secret Service has identified priority criminal threats in its Roadmap, it has not employed the actions identified in the Roadmap to pursue these threats. Rather, the agency relies on its Global Investigative Operations Center to identify individual cases with national significance and coordinate resources because, according to current Office of Investigations officials, the approach outlined in the Roadmap is not beneficial given the dynamic nature of the crimes they investigate.
Documenting the process of identifying priority criminal threats and developing campaign plans would help the agency better direct investigative resources towards priority criminal threats. In addition, until the Secret Service identifies cases that address priority criminal threats and captures data on the resources used, agency management will not be able to determine the extent to which resources and operations are aligned with priority criminal threats. Finally, special agents can work long hours in carrying out their investigative and protective duties. Unless the Secret Service updates its staffing model to account for compensation limits, the agency risks continuing to underestimate staffing needs and having special agents work some hours without compensation. This could affect retention, potentially weakening the agency's ability to provide the highest level of quality protection.

Recommendations for Executive Action

We are making the following six recommendations to the Secret Service:

The Director of the Secret Service should identify which types of investigations and activities best prepare special agents for protective responsibilities. (Recommendation 1)

The Director of the Secret Service should develop a framework to help ensure special agents have an opportunity to work, to the extent possible, investigations and activities that best prepare them for protection. (Recommendation 2)

The Director of the Secret Service should establish a documented process to ensure that Office of Investigations resources are aligned with priority criminal threats. The process should outline key information to be included in plans for addressing priority threats. (Recommendation 3)

The Director of the Secret Service should identify investigations that address priority criminal threats agencywide and collect data on the resources expended to investigate the threats.
(Recommendation 4)

The Director of the Secret Service should revise its special agent staffing model to ensure compensation limits are accounted for when estimating staffing needs. (Recommendation 5)

The Director of the Secret Service should, after revising the special agent staffing model, use the revised model to recalculate and estimate staffing needs. (Recommendation 6)

Agency Comments

We provided a draft of this report to DHS for review and comment. DHS provided written comments, which are reprinted in appendix IV, and technical comments, which we incorporated as appropriate. In its comments, the Secret Service, through DHS, concurred with the six recommendations and outlined steps to address them. With regard to identifying which types of investigations and activities best prepare special agents for protective responsibilities and establishing a framework to help ensure they have an opportunity to work on them, the Secret Service has established a pilot program to revise guidance on preparing special agents for protection. Upon completion of the pilot program in March 2020, the agency plans to revise a directive to give field office supervisors a framework for identifying key training and experiences to prepare special agents for protection. The agency anticipates the new directive being implemented by June 2020. The stated actions are an appropriate response to our recommendation that the Secret Service develop and implement a framework for preparing special agents for protective responsibilities. These actions, if implemented effectively, should address the intent of our first two recommendations.
Regarding the establishment of a documented process to ensure that Office of Investigations resources are aligned with priority criminal threats, the Secret Service plans to replace its current guidance, the INV Priorities and Roadmap, with a new strategic document by March 2020, with the goal of better aligning resources to address priority threats. Developing an effective strategic plan that sets goals and objectives and outlines the effective and efficient operations necessary to fulfill those objectives is consistent with best practices. Likewise, making clear what information should be included in investigative plans for addressing these priority criminal threats will help the Secret Service ensure that its resource use is aligned with the criminal threats the agency has identified as priorities. We will continue to monitor the Secret Service's efforts in this area. To identify investigations that address priority criminal threats across the agency, the Office of Investigations intends to revise its internal policy to further define the role of the Global Investigative Operations Center (GIOC), including how the GIOC will identify and track investigations into priority criminal threats. The agency anticipates that these revisions will be published by March 2020. To collect data on the resources expended to address priority criminal threats, the Office of Investigations plans to consider new and additional data collection methodologies. The agency intends to have completed an analysis of the validity of its revised data aggregation methodology by September 2020. Finally, the Office of Investigations plans to address our recommendations related to its staffing model by working with the Office of Strategic Planning and Policy and the Office of Human Resources to revise the staffing model to ensure compensation limits are accounted for when estimating staffing needs.
The Office of Investigations then intends to work with these offices and the Chief Financial Officer to use the revised model to recalculate staffing needs. As the Secret Service notes, this recalculation is likely to result in an increase in the number of special agents required for the agency to maintain its current level of investigative engagement. The agency intends to complete the revision of the staffing model by March 2020 and update staffing estimates by June 2020. We also provided the report to the Department of Justice (DOJ). The Executive Office for United States Attorneys (EOUSA), a component of DOJ, provided written comments, which are reprinted in appendix IV. In its response, EOUSA noted that it agreed with our statements that the Secret Service is a valuable law enforcement partner in criminal investigations, particularly those related to counterfeit currency, cyber fraud, and identity theft. EOUSA further emphasized that the Secret Service's investigative mission is intrinsically valuable to federal law enforcement efforts. DOJ also provided technical comments, which we incorporated as appropriate. Finally, we provided the report to the Internal Revenue Service, which did not provide comments on the report. The U.S. Postal Service declined to review the public version of the report. We are sending copies of this report to the appropriate congressional committees, the Secretary of Homeland Security, the Attorney General of the United States, the Postmaster General of the United States, and the Commissioner of the Internal Revenue Service, as well as other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8777 or GoodwinG@gao.gov. GAO staff who made key contributions to this report are listed in appendix V.
Appendix I: Objectives, Scope, and Methodology

This report addresses the following objectives: (1) how, if at all, do the U.S. Secret Service's (Secret Service) investigative operations support or negatively affect its protective operations; (2) to what extent do the Secret Service and selected federal entities investigate similar financial crimes, and to what extent do selected federal prosecutors find this to be beneficial; (3) to what extent has the Secret Service developed a plan to combat its priority criminal threats; and (4) to what extent does the Office of Investigations' staffing model ensure compensation limits are accounted for when estimating staffing needs. This is a public version of a sensitive GAO report that we issued in September 2019. The Secret Service deemed some of the information in our September report to be sensitive information that must be protected from public disclosure. Therefore, this report omits sensitive information on whether the Secret Service's investigative operations negatively affect its protective operations. Although the information provided in this report is more limited, the report addresses the same objectives as the sensitive report and uses the same methodology. To determine how the Secret Service's investigative operations potentially support or negatively affect protective operations, we reviewed Secret Service policies and guidance, including those related to Office of Investigations roles and responsibilities, time and attendance, and training. For example, we reviewed the Secret Service's December 2017 Office of Investigations Priorities and Roadmap (Roadmap) to assess whether the agency is leveraging the expertise it has developed for investigative purposes to advance special agents' ability to perform protective responsibilities. We also analyzed Secret Service data for fiscal years 2014 through 2018.
For example, we analyzed Secret Service time and attendance data to determine the number of hours special agents spent on investigation and protection activities. We focused on special agents in the Office of Investigations, as these personnel are responsible for conducting criminal investigations and temporary protective assignments. Further, the data we analyzed focused on special agents in a field location (e.g., field office or resident office), and thus did not include special agents at headquarters. We focused on field staff because that is how the agency captures and reports the hour-related data in its annual reporting. In addition, we analyzed data on the number of investigative cases opened and closed. We focused on fiscal years 2014 through 2018 because those years were the most recent for which data were available at the time of our review; included a fiscal year in which the Secret Service experienced the operational tempo of a presidential campaign (i.e., fiscal year 2016); and included data from two administrations. To assess the reliability of the data, we discussed with Secret Service officials how the data are entered and maintained in their Manhours Reporting System, which tracks special agent workload and tasks, and their Field Investigative Reporting System, which maintains data on field office staffing and investigations. In addition, we compared the data to recent Secret Service annual reports and congressional budget justifications, and inquired about any differences. We also reviewed the data for any obvious errors and anomalies. Based on our review of the data and related controls, we determined that the data were sufficiently reliable for the purposes of reporting the number of hours that special agents in the Office of Investigations expended on different activities and the number of cases opened and closed during fiscal years 2014 through 2018. We also interviewed Secret Service officials at headquarters and selected field offices.
We selected office locations using the following criteria: highest number of criminal investigation and protection hours, diversity in types of offices, geographic diversity, and presence of other federal law enforcement agencies. In addition, we conducted semi-structured interviews with 40 current and former Secret Service special agents. Specifically, we randomly selected and interviewed 10 special agents from each of the Secret Service's three career phases (30 special agents in total). We also interviewed 10 former special agents, including those who retired from the Secret Service and others who left the agency for other reasons. To select these 10 former special agents, we asked the special agents we interviewed to recommend former special agents to participate in our study (i.e., snowball sampling) and contacted an association for former Secret Service personnel to help identify recently retired special agents. The information obtained from our interviews cannot be generalized across all current and former special agents; however, it provided examples and perspectives on how investigative operations can support and negatively affect protective operations. To determine the extent to which the Secret Service and selected federal agencies conduct similar investigations, we analyzed federal prosecutor data from the Legal Information Office Network System (LIONS)—a system maintained by the Department of Justice's Executive Office for United States Attorneys. We analyzed the data to determine the number and types of cases referred by the Secret Service during fiscal years 2013 through 2017, the latest years for which data were available at the time of our analysis.
Specifically, based on our data analyses, we identified the six LIONS categories in which the Secret Service (1) was identified as the lead investigative agency by the U.S. Attorney's Office and (2) referred the highest number of financial crime cases to federal prosecutors during fiscal years 2013 through 2017. The categories were counterfeiting and forgery, other white collar crime/fraud, financial institution fraud, identity theft, aggravated identity theft, and other fraud against businesses. Next, we identified federal law enforcement agencies that referred the highest number of cases in these categories. Based on our data analyses, we selected the following four law enforcement agencies: the Federal Bureau of Investigation (FBI), the U.S. Postal Inspection Service (USPIS), Homeland Security Investigations (HSI), and Internal Revenue Service – Criminal Investigation (IRS-CI). In the course of our review, data from fiscal year 2018 became available, and we analyzed data from fiscal years 2014 through 2018 to determine the extent to which our selected federal law enforcement agencies referred to U.S. Attorney's Offices types of cases similar to those referred by the Secret Service. The information obtained from selected federal agencies cannot be generalized across all federal agencies. However, it provides examples of how federal law enforcement agencies can conduct similar types of investigations. In addition, the data may not account for all financial crimes cases to which each agency contributed investigative resources. This is because the data only include cases referred by each investigative agency in which that agency was identified as the lead investigative agency, as determined by the U.S. Attorneys who entered the data into LIONS. To assess the reliability of the LIONS data, we discussed with Department of Justice officials how the data are entered and maintained in the system. We also reviewed the data for any obvious errors and anomalies.
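The category-selection step described above can be sketched with a small aggregation over case records. The records and field names below are hypothetical stand-ins for illustration, not actual LIONS fields or data:

```python
# Counting prosecuted cases by offense type for one lead agency, then
# taking the top categories by referral volume. Records and field names
# are hypothetical, not actual LIONS data.
from collections import Counter

cases = [
    {"lead_agency": "USSS", "offense": "identity theft"},
    {"lead_agency": "USSS", "offense": "identity theft"},
    {"lead_agency": "USSS", "offense": "counterfeiting and forgery"},
    {"lead_agency": "FBI",  "offense": "counterfeiting and forgery"},
]

# Keep only cases where the Secret Service was the lead agency, count
# them per offense type, and select the highest-volume categories.
usss_counts = Counter(c["offense"] for c in cases
                      if c["lead_agency"] == "USSS")
top_categories = [offense for offense, _ in usss_counts.most_common(6)]
```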
Based on our reviews and discussions, we determined that the data were sufficiently reliable for the purposes of describing the extent to which selected federal law enforcement agencies referred financial crimes cases to federal prosecutors similar to those referred by the Secret Service during fiscal years 2014 through 2018. To help identify potential benefits and drawbacks of the Secret Service and selected federal agencies conducting similar types of investigations, we conducted interviews with officials from the selected federal agencies. Specifically, we interviewed officials at the headquarters and the Miami and New York field office locations for each selected agency in conjunction with site visits to Secret Service field offices in those areas. In addition, we conducted semi-structured interviews with one representative with a high-level understanding of the office's activities (e.g., criminal chief) at 12 U.S. Attorney's Offices (USAO). To select U.S. attorney districts, we established the following criteria to help ensure that we gathered a range of perspectives and interviewed USAOs that were likely to have experience working with the Secret Service: highest number of ongoing cases during fiscal years 2013 through 2017 of the types the Secret Service most frequently investigates, size of USAO district (as designated by the Department of Justice), geographic diversity, and USAOs located in a state with a Secret Service field office. The information obtained from selected USAOs cannot be generalized across all federal prosecutors; however, the information provided examples of the benefits and drawbacks of selected federal agencies and the Secret Service conducting similar types of investigations. To determine the extent to which the Secret Service has developed a plan to combat its priority criminal threats, we reviewed Office of Investigations policies and guidance. 
For example, we reviewed the December 2017 Roadmap and guidance related to the Secret Service’s Significant Case Database. In addition, as discussed earlier, we interviewed officials from the Office of Investigations at Secret Service’s headquarters and selected field offices. We held discussions with agency officials to better understand whether the agency had a plan to address priority criminal threats and whether it maintained data on the number of cases that addressed priority criminal threats in fiscal years 2014 through 2018. We also reviewed Standards for Internal Control in the Federal Government to assess whether the Secret Service has the necessary control activities and information to combat its priority criminal threats and carry out its responsibilities. Finally, to understand how the Office of Investigations develops and uses its staffing model, we reviewed agency guidance documents including guidance governing personnel utilization; the Secret Service human resources manual; and the fiscal years 2018-2025 human capital strategic plan. We also received a briefing on the development and use of the Office of Investigations staffing model and the assumptions and statistical methods used in the staffing model from officials in the Office of Investigations. To describe the ways in which federal law affects special agent pay, we reviewed federal laws, such as the Law Enforcement Availability Pay Act of 1994, the Overtime Pay for Protective Services Act of 2016, and the Secret Service Recruitment and Retention Act of 2018. Finally, we reviewed data provided by the Office of Human Resources to determine the number of special agents assigned to the Office of Investigations in calendar years 2016 through 2018 that were not compensated for all the time worked in each calendar year and the total sum unpaid. 
We determined the data were reliable for the purposes of this report through interviews with officials and evaluations of the system from which the data were pulled. We also reviewed Standards for Internal Control in the Federal Government and previous GAO products to assess the potential effects of some special agents working without compensation. We conducted this performance audit from November 2017 to September 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We subsequently worked with the Secret Service from October 2019 to January 2020 to prepare this version of the original sensitive report for public release. This public version was also prepared in accordance with these standards. Appendix II: U.S. Secret Service Expenditures for Fiscal Years 2014 through 2018 From fiscal years 2014 through 2018, the U.S. Secret Service (Secret Service) expended $9.2 billion, an average of $1.8 billion per fiscal year. Secret Service officials told us that in fiscal years 2017 and 2018, the Secret Service changed the way it collected and reported expenditure data. Specifically, Department of Homeland Security management directed all agency components to use the Common Appropriations Structure (CAS). As a result, the Secret Service implemented CAS in fiscal year 2017. In addition, the officials told us the Secret Service updated its accounting software in fiscal year 2018, resulting in additional changes to the accounting structure. Secret Service officials told us that because of these changes, it is not possible to accurately compare expenditure data across fiscal years 2014 through 2018. 
However, Secret Service officials noted that in the future they will be able to compare year-over-year fiscal data starting with fiscal year 2018 using a tool within the new accounting system. A description of the expenditure data for fiscal years 2014 through 2018 is provided below. Secret Service officials told us that in fiscal years 2014 through 2016, expenditure data were collected and reported according to the task being performed. For example, a special agent's salary was reported under the investigation category if the special agent was performing investigation-related tasks, and it was reported under the protection category if the special agent was performing protection-related tasks. See table 4. According to Secret Service officials, in fiscal year 2017, the agency implemented CAS and began to collect and report expenditure data according to location. For example, a special agent's salary was reported under the investigation category if the special agent was assigned to an Office of Investigations field office even if the special agent was performing a protection-related task. See table 5. In fiscal year 2018, the Secret Service transferred its financial reporting to the Oracle R12 system, which tracks data according to both location and task. In addition, officials noted that other accounting structure changes were made in 2018, such as changes to what activities were classified as protection. As a result, expenditure data from fiscal year 2018 are not comparable to fiscal years 2014 through 2017. See table 6. Appendix III: Enactment of the U.S. Secret Service's Investigative and Protective Duties under 18 U.S.C. § 3056 In 1865, the Secret Service was established by the Secretary of the Treasury for the purpose of investigating the counterfeiting of U.S. currency. Over the course of the next 50 years, the Secret Service's role within the department continued to evolve as additional duties, such as Presidential protection, were assigned to it. 
During this time, the authorities exercised by the Secret Service were those delegated to it within the Department of the Treasury and, on occasion, authorities enacted through annual appropriations, which expired at the end of the applicable fiscal year. In 1916, the Secret Service received its first grant of authority enacted by permanent legislation—the Federal Farm Loan Act—which authorized the Secret Service to investigate counterfeiting, embezzlement, fraud, and certain other offenses in the federal farm loan system. Ten years later, the Secret Service received another grant of authority to investigate the counterfeiting of government requests for transportation by common carrier. Later, the Banking Act of 1933 and its 1935 amendments charged the Secret Service with investigating offenses similar to those under the Federal Farm Loan Act, but as applied to the Federal Deposit Insurance Corporation (FDIC). In 1948, the Secret Service's investigative authorities under the above statutes were consolidated into a single provision of law, 18 U.S.C. § 3056 (“the Secret Service Statute”). However, the 1948 codification effort did not account for the investigative or protective activities that the Secret Service was authorized to perform under a delegation of authority or annual appropriations acts. The authorizing legislation for these activities came three years later, with the 1951 revision of the Secret Service Statute. As originally enacted, the Secret Service's protective duties extended to the President and his immediate family, the President-elect, and, upon request, the Vice President. On the investigative side, the 1951 statute authorized the Secret Service to investigate any federal offense related to U.S. or foreign coins, obligations, and securities, thereby expanding its jurisdiction beyond the enumerated offenses enacted in 1948. 
Over the next three decades, a series of amendments to the Secret Service Statute added new investigative and protective duties. In 1984, a revised version of the Secret Service Statute was enacted, which incorporated all prior amendments while adding a new investigative responsibility. Although there has not been another wholesale revision of the Secret Service Statute since 1984, subsequent amendments have further increased the Secret Service’s protective and investigative responsibilities. Under the current codification of its primary protective authorities, 18 U.S.C. § 3056(a), the Secret Service protects the President, the Vice President, the President-elect, and the Vice President-elect. The Secret Service may also provide protection, unless declined, to the immediate families of the President, the Vice President, the President-elect, and the Vice President-elect; former Presidents and their spouses for their lifetimes (unless the spouse remarries); children of a former President who are under 16 years of age; visiting heads of foreign states or foreign governments; other distinguished foreign visitors to the United States and official representatives of the United States performing special missions abroad when the President directs that such protection be provided; major Presidential and Vice Presidential candidates and, within 120 days of the general Presidential election, the spouses of such candidates; and, finally, former Vice Presidents, their spouses, and their children who are under 16 years of age, for a period of not more than six months after the date the former Vice President leaves office. Under the current codification of its primary investigative authorities, 18 U.S.C. § 3056(b), the Secret Service conducts criminal investigations in areas such as financial crimes, identity theft, counterfeiting of U.S. 
currency, computer fraud, computer-based attacks on banking, financial, and telecommunications infrastructure, and a wide range of financial and cybercrimes. In addition to investigating financial and electronic crimes, special agents conduct protective intelligence—investigating threats against protected persons, including the President, and protected facilities, such as protected residences. Table 7 provides a chronology of key statutes enacting protective and investigative authorities under the Secret Service Statute, 18 U.S.C. § 3056. Table 8 provides a cross-reference to enumerated offenses within the Secret Service's investigative jurisdiction under 18 U.S.C. § 3056(b)(1) of the Secret Service Statute: “the Secret Service is authorized to detect and arrest any person who violates . . . section 508, 509, 510, 871, or 879 of this title or, with respect to the Federal Deposit Insurance Corporation, Federal land banks, and Federal land bank associations, section 213, 216, 433, 493, 657, 709, 1006, 1007, 1011, 1013, 1014, 1907, or 1909 of this title.” The enumerated offenses generally involve fraud, counterfeiting, embezzlement, and certain other misconduct in connection with government transportation requests, federal farm loans, and the Federal Deposit Insurance Corporation. Table 8 provides a brief description of each of the cited offenses. Appendix IV: Comments from the Department of Homeland Security Appendix V: Comments from the Department of Justice Appendix VI: GAO Contact and Staff Acknowledgments In addition to the contact named above, Joseph P. Cruz (Assistant Director), Jeffrey Fiore, Miriam Hill, Lerone Reid, and Leslie Stubbs made key contributions to this report. Also contributing to this report were Willie Commons III, Christine Davis, Eric Hauswirth, Susan Hsu, Grant Mallie, Claire Peachey, Farrah Stone, Eric Warren, and Sonya Vartivarian.
Why GAO Did This Study Commonly known for protecting the President, the Secret Service also investigates financial and electronic crimes (e.g., counterfeit currency and identity theft). In recent years, Congress and a panel of experts established by the Secretary of Homeland Security have raised concerns that the Secret Service's investigative operations may negatively affect its protective operations. GAO was asked to review the Secret Service's investigative operations. This report examines, among other things, the extent to which the Secret Service's (1) investigative operations support or negatively affect its protective operations; (2) Office of Investigations has developed a plan to combat its priority criminal threats; and (3) staffing model accounts for federal employee compensation limits. GAO analyzed Secret Service data related to investigation and protection activities from 2014 through 2018; conducted semi-structured interviews with current and former special agents and federal prosecutors; and reviewed Secret Service policies and guidance. This is a public version of a sensitive report that GAO issued in September 2019. Information that the Secret Service deemed sensitive has been omitted. What GAO Found The operations of the U.S. Secret Service (Secret Service) Office of Investigations, which conducts criminal investigations into financial and electronic crimes, generally support Secret Service protective operations in a variety of ways. For example, special agents in the Office of Investigations perform temporary protective assignments, such as during presidential campaigns, or augment protective operations by securing a site in advance of a visit by a protectee. GAO found that personnel in the Office of Investigations spent 11.2 million hours supporting protective operations from fiscal years 2014 through 2018. Most of the 40 current and former special agents GAO interviewed said that their investigative duties did not negatively affect protection. 
However, over half said that they were frequently or sometimes required to work on investigations while assigned to temporary protective operations. Details associated with this topic are sensitive and have been omitted from this report. In December 2017, the Secret Service developed a plan to align its resources to combat what it identified as priority criminal threats (e.g., criminal activity with significant economic and financial impacts). However, available documentation of efforts taken does not consistently demonstrate synchronized efforts across the agency to counter the priority criminal threats, as envisioned in the plan. Further, the Secret Service does not have a systematic approach for identifying cases that address priority criminal threats. Absent a documented process for aligning resources and identifying cases, the Secret Service will continue to lack assurance that its resources are aligned to combat its priority threats. The Office of Investigations employs a staffing model to determine how many special agents are needed in its field offices. The staffing model takes into account the number of law enforcement premium pay and standard overtime hours special agents are expected to work. However, it does not consider annual caps on federal employee salaries. As a result, the agency may be underestimating the number of staff needed to meet its workload demands. What GAO Recommends GAO is making six recommendations, including that the Secret Service establish a documented process to ensure that resources are dedicated to priority criminal threats, identify investigations that address these threats, and ensure compensation limits are accounted for when estimating staffing needs. The Department of Homeland Security concurred with each of GAO's recommendations.
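The staffing-model gap described above can be illustrated with simple arithmetic: if the model credits each agent with premium-pay hours that an annual salary cap would actually cut off, it overstates effective capacity per agent and therefore understates headcount. A minimal sketch of that effect follows; every figure below (rates, cap, salary, workload) is hypothetical and chosen for illustration, not an actual Secret Service or federal pay value.

```python
import math

# Hypothetical figures to illustrate the effect of an annual pay cap on
# staffing estimates; none of these numbers are actual pay values.
HOURLY_RATE = 60.0             # illustrative overtime/premium hourly rate
ANNUAL_PAY_CAP = 170_000.0     # illustrative annual cap on total compensation
BASE_SALARY = 140_000.0        # illustrative base salary
PLANNED_OT_PER_AGENT = 700.0   # overtime hours the model expects per agent
TOTAL_OT_DEMAND = 10_000.0     # total overtime hours of workload to cover

# Overtime hours actually payable per agent before total pay hits the cap.
payable_hours = max(0.0, (ANNUAL_PAY_CAP - BASE_SALARY) / HOURLY_RATE)

# Ignoring the cap, each agent is assumed to cover all planned hours...
naive_headcount = math.ceil(TOTAL_OT_DEMAND / PLANNED_OT_PER_AGENT)

# ...but once the cap is applied, each agent covers fewer payable hours,
# so more agents are needed to meet the same workload.
capped_headcount = math.ceil(
    TOTAL_OT_DEMAND / min(PLANNED_OT_PER_AGENT, payable_hours)
)
```

Under these illustrative numbers the cap limits each agent to 500 payable overtime hours, so the capped estimate exceeds the naive one; the gap between the two headcounts is the underestimate GAO's recommendation targets.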
Background The Department of Veterans Affairs' (VA) Veterans Health Administration (VHA) manages one of the largest health care delivery systems in the United States and is responsible for overseeing the provision of health care at VA medical facilities. VA relies on its electronic health record (EHR) system—VistA—to document the delivery of health care services to veterans. VA's VistA EHR System To facilitate care, clinical providers access patient medical records and document the care they provide in EHR systems. Patient information needs to be accessible and consistent to prevent risks to patients' safety, particularly when shared between providers. Information that is electronically exchanged from one provider to another must adhere to the same standards to be consistently interpreted and used in EHRs. In our prior work, we found that EHR technology has the potential to improve the quality of care that patients receive and to reduce health care costs. VistA has served as VA's EHR system for more than 30 years. Over the last several decades, it has evolved into a technically complex system that comprises about 170 modules that support health care delivery at more than 1,500 medical facilities. In addition, customization of VistA, such as changes to the modules by the various medical facilities, has resulted in approximately 130 versions of the system VA-wide. Furthermore, as we have reported, VistA is costly to maintain and does not fully support VA's need to electronically exchange health records with other organizations, such as DOD and community providers. VA and DOD have historically operated separate EHR systems. In addition to patient data from its own EHR system, VA relies on patient data from DOD to help ensure that it has access to the necessary health information that could assist clinicians in making informed decisions to provide care to service members transitioning from DOD to VA's health care system. We have previously reported on VA's challenges in managing health information technology and modernizing VistA. 
In 2015, we designated VA health care as a high-risk area for the federal government, in part due to its information technology challenges. Specifically, we identified limitations in the capacity of VA’s existing information technology systems, including the outdated, inefficient nature of key systems and a lack of system interoperability, as contributors to the department’s challenges related to health care. In our 2019 update to the high-risk series, we stressed that VA should demonstrate commitment to addressing its information technology challenges by stabilizing senior leadership, building capacity, and finalizing its action plan for addressing our recommendations, and by establishing metrics and mechanisms for assessing and reporting progress. We also have issued numerous reports over the last decade that highlighted the challenges facing VA in modernizing VistA and improving EHR interoperability with DOD. EHR Modernization Efforts, Including Goals of Improved Sharing of Health Information between VA and DOD VA created the Office of Electronic Health Record Modernization in 2018 to lead its EHRM program effort, which was intended to result in a more modern EHR system that would improve providers’ ability to deliver care, and share health data, including between VA and DOD and between VA and community providers. For example, with improved interoperability, medical providers would have the ability to query data from other sources while managing chronically ill patients, regardless of geography, or the network on which the data reside. In June 2017, the VA Secretary at the time announced that the department planned to acquire and configure the same EHR system that DOD is currently implementing across the military health system. 
According to the VA Secretary, the department decided to acquire the same system as DOD because it would allow all of VA’s and DOD’s patient data to reside in one system, thus assisting the departments in their goals of enabling seamless care between VA and DOD without the exchange and reconciliation of data between two separate systems. As VA planned to implement the same system DOD is implementing, experts recommended that VA and DOD coordinate to ensure that the departments could leverage efficiencies and minimize variation between the departments’ EHR system configurations when practical. DOD’s initial implementation of the Cerner EHR system occurred between February and October 2017 at four military treatment facilities in the state of Washington. In September 2019, the system was implemented at four additional military treatment facilities in California and Idaho. DOD plans to continue to implement the EHR system in 23 phases through 2023 with the next implementation expected to take place at eight additional military treatment facilities in California and Nevada. EHR System’s Implementation Timeline VA’s EHRM program originally planned to implement the Cerner EHR system at two VA medical facilities in spring 2020 with a phased implementation of the remaining facilities over the next decade. The EHRM program chose the Mann-Grandstaff VA Medical Center in Spokane, Wash. and the VA Puget Sound Health Care System in Seattle, Wash. as its initial operating capability sites. Information gathered from these sites will be used to help VA make EHR system configuration decisions and standardize work processes for future locations where the commercial EHR system will be implemented. 
In August 2019, the EHRM program adjusted its schedule to implement the commercial EHR system at these two sites in two phases, known as capability sets 1 and 2: Capability set 1 includes key EHR functionalities necessary to implement the system at the Mann-Grandstaff VA Medical Center, a level 3—that is, less complex—facility. Capability set 1 was originally scheduled for implementation in March 2020. Capability set 2 includes remaining functionalities necessary to implement the system at the VA Puget Sound Health Care System, a level 1—that is, highly complex—facility, in the fall of 2020. In February 2020, VA postponed the implementation of the Cerner EHR system at the Mann-Grandstaff VA Medical Center until July 2020. According to VA officials, the additional time will allow Cerner to develop and establish a more complete and robust training environment, as requested by VHA clinicians and other facility staff. In addition, according to VA EHRM program officials, the implementation delay will allow VA and Cerner to have time to develop additional interfaces between the Cerner EHR system and other VA systems, such as VA’s mail-order pharmacy system. These officials told us that the delayed implementation of the Cerner EHR system at the Mann-Grandstaff VA Medical Center was not expected to impact VA’s timeline for implementing the EHR system at the VA Puget Sound Health Care System in the fall of 2020. In April 2020, the VA Secretary announced that the department had shifted priorities to focus on caring for veterans in response to the pandemic created by the Coronavirus Disease 2019 (COVID-19). Further, the Secretary directed the EHRM program to allow clinicians who had been participating in EHRM program activities to focus on caring for veterans. According to program officials, they paused the implementation of the EHR system and were assessing the impact of the COVID-19 pandemic on VA’s planned implementation schedule. 
VA Used a Multi-Step Process to Make EHR Configuration Decisions and Assess System Compatibility VA's EHRM program used a multi-step process to make EHR system configuration decisions for the Cerner EHR system being implemented at the Mann-Grandstaff VA Medical Center and the VA Puget Sound Health Care System. This process included forming EHR councils and convening these councils at national and local workshops to make configuration decisions used by VA's contractor, Cerner, to configure the new EHR system. The EHR councils also assessed the compatibility of the EHR system with the processes VA clinicians and staff follow in delivering care. EHR councils. In fall 2018, VA's EHRM program established 18 EHR councils, based upon specific clinical and administrative areas, to make VA-specific EHR system configuration decisions for these areas. Each EHR council included subject-matter experts from VA, such as health care providers in various clinical areas and other staff, as well as non-VA participants from DOD and Cerner. According to VA EHRM program officials, Cerner's typical process for configuring its EHR system was modified to accommodate VA's needs, which VA officials stated were more complex than those of Cerner's commercial clients. According to Cerner officials, Cerner does not typically establish councils as part of its EHR system configuration process. National workshops. VA's EHRM program planned and held eight national workshops from November 2018 to October 2019, during which members of all 18 EHR councils met to make standardized EHR system configuration decisions for the VA health care system. VA's EHRM program utilized DOD's version of the Cerner EHR system—MHS Genesis—as its starting point for the EHR system configuration process. 
During the workshops, according to VA EHRM program and Cerner officials, Cerner assigned consultants to facilitate the sessions. These consultants highlighted Cerner's commercial best practices and prepared workflow designs; facilitated EHR system configuration decision discussions and noted input from EHR council members and other session participants, such as DOD representatives; held sessions that involved members from different EHR councils for system configuration decisions that required coordination between councils (for example, the Business Operations Council and the Ambulatory Council held joint sessions to address scheduling appointments for oncology patients); identified and documented recommendations on EHR system configuration decision differences between VA sites and each medical facility specialty/department; and provided weekly progress updates to VA that compared the expected number of completed decisions with the EHR system configuration decisions actually approved during national workshops. Over the course of the eight national workshops, EHR council members were responsible for making EHR system configuration decisions in given clinical and administrative areas and communicating them to Cerner; providing progress updates to VA's EHRM program and VA leadership; and notifying appropriate governing bodies (e.g., VHA program offices, such as the Office of Primary Care) of any local, state, federal, VISN, and department policies that impact configuration decisions. More specifically, each council discussed VA's work processes and documented relevant information that informed the configuration of the EHR system, including: (1) “workflows”—“process maps” that capture the start-to-finish sequence and interactions of related steps, activities, or tasks for each work process that VA medical facilities follow. 
For example, VA has a medication administration workflow for describing the sequence of tasks needed for scanning a patient's wristband and administering medication. (See fig. 1.) (2) “design decision matrices,” which are compilations of decisions and discussion topics that identify and resolve workflow questions to inform configuration decisions and support implementation of the EHR system. For example, the medication administration design decision matrix documents that clinicians should not be prevented from proceeding with medication administration if a patient's wristband cannot be scanned. (See fig. 2.) (3) “data collection workbooks,” which capture all of the data needed to inform how the EHR system should be configured to support each workflow, such as user privileges and preferences. For example, a data collection workbook for medication administration includes data on user preferences and prescribing privileges. (See fig. 3.) The EHR system configuration decisions each council needed to make varied significantly in quantity and topic. For example, the Ambulatory Council, charged with focusing on primary care decisions, had over 200 EHR system configuration decisions to make, while the Behavioral Health Council had about 100. Once configuration decisions were made, the EHR councils assessed the compatibility of the configuration of the Cerner EHR system with VA work processes. To do so, VA's EHR councils reviewed the capabilities of the system and identified work processes that the Cerner EHR system did not support (or only partially supported). For example, according to Mann-Grandstaff VA Medical Center staff, the Cerner EHR system did not originally interface with VA's Patient Centered Management Module, which supports VA's work processes for establishing provider-patient relationships. 
However, in March 2020, VA EHRM officials told us that the interface between the two systems would be available when the Cerner EHR system is implemented at the Mann-Grandstaff VA Medical Center, which was planned for July 2020. In addition, according to VA EHRM officials, Cerner is in the process of developing EHR system capabilities for prosthetics to support VA work processes. Furthermore, according to VA EHRM officials, Cerner has been documenting and tracking needed capabilities for EHR implementation and updating VA's EHRM program accordingly. According to EHRM program officials, Cerner plans to include functionalities not available in capability set 1 in either capability set 2 or future capability sets, although the development of these capabilities is an ongoing process. Although the eight national workshops have concluded, since October 2019, these EHR councils have continued to meet as necessary, virtually and in person, to complete capability set 1 and 2 configuration decisions. According to Mann-Grandstaff VA Medical Center staff, as of February 2020, VA still needed to make EHR system configuration decisions to address online prescription refills and assigning patients to primary care panels. Local workshops. After standardized EHR system configuration decisions were made at the national workshops, they were reviewed at local workshops for site-specific needs. To do this, from December 2018 to October 2019, VA's EHRM program held eight local workshops at each of the initial operating capability sites—the Mann-Grandstaff VA Medical Center and the VA Puget Sound Health Care System. Local workshops allowed VA and Cerner to identify variances from standardized EHR system configuration decisions made at the national workshops, as well as manual processes that needed to be accounted for at local medical facilities. If variances were identified, Cerner reported them to the appropriate EHR councils. 
While VA tried to minimize the variances in system configuration decisions, in certain cases, necessary alternatives to these configuration decisions were approved for local medical facilities if practicable. For example, according to a Cerner official, the national emergency room triage workflow originally called for an emergency department registrar to register a patient; in response to input from a local workshop, VA developed an alternative workflow, in which an emergency department registered nurse completes the step if a VA facility does not have an emergency department registrar. If there were no variances, EHR system configuration decisions were approved and reported to Cerner to configure the EHR system. According to EHRM program officials, VA plans to hold local workshops in advance of the Cerner EHR system implementation at future VA medical facilities to focus on site-specific configuration decisions. Cerner will continue to facilitate these future local workshop sessions and configure the EHR system based on decisions made at these sessions. Figure 4 provides an overview of the EHR councils’ process for making system configuration decisions. VA Met Its Schedule for Making Initial EHR System Configuration Decisions, and Has Formulated a Schedule for Remaining Efforts VA Met Its Schedule for Making System Configuration Decisions for Capability Set 1 VA met its schedule for making EHR system configuration decisions for capability set 1, which was scheduled for initial implementation at the Mann-Grandstaff VA Medical Center in July 2020. In addition, VA has formulated a schedule for remaining EHR system configuration decisions for capability set 2, which it planned to implement at the VA Puget Sound Health Care System in the fall of 2020. 
Our review of VA progress data shows that VA met the schedule for making EHR system configuration decisions it had established, which required VA's 18 EHR councils to make at least 70 percent of decisions needed for capability set 1 by October 18, 2019. An EHRM program official stated that this threshold was required to enable Cerner to configure the EHR system for the Mann-Grandstaff VA Medical Center in anticipation of the system's initial implementation. According to VA's progress data, collectively, the 18 EHR councils met the requirement to make at least 70 percent of their total expected EHR system configuration decisions for capability set 1. Specifically, as of early November 2019, VA data for EHR configuration decisions needed for capability set 1 indicated that the EHRM program had developed: 877 of 966 (or 91 percent) of workflows; 1,397 of 1,412 (or 99 percent) of design decision matrices; and 1,364 of 1,610 (or 90 percent) of data collection workbooks. After the EHR councils collectively met VA's goal to make 70 percent of EHR system configuration decisions by October 18, 2019, efforts continued to make the remaining decisions for capability set 1. In March 2020, VA data indicated that, combined, the EHR councils had developed an additional: 9 percent of workflows—874 of 878 (or nearly 100 percent); 1 percent of design decision matrices—1,459 of 1,467 (or nearly 100 percent); and 10 percent of data collection workbooks—1,746 of 1,751 (or nearly 100 percent). (See Appendix I for additional details on specific changes from November 2019 to March 2020 by EHR councils.) As noted earlier, though the workshop process has concluded, a VA EHRM program official stated that they planned to hold virtual meetings—over teleconference or videoconference—to allow the EHR councils to make the remaining EHR system configuration decisions for capability set 1 at the Mann-Grandstaff VA Medical Center by March 2020. 
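The threshold check described above is simple arithmetic on counts of completed decision artifacts. A minimal sketch in Python (the helper name and data layout are our own illustration, not VA's tooling) of how progress against the 70 percent goal could be tallied from the November 2019 figures:

```python
# Illustrative tally of capability set 1 decision progress against VA's
# 70 percent threshold. Counts are the early-November 2019 figures cited
# above; percentages here are computed directly from the raw counts, so
# they may differ slightly from the rounded figures the report cites.

def completion_pct(done: int, expected: int) -> float:
    """Percent of expected decision artifacts completed."""
    return 100.0 * done / expected

THRESHOLD = 70.0  # percent of decisions VA required by October 18, 2019

# (completed, expected) pairs for each artifact type
artifacts = {
    "workflows": (877, 966),
    "design decision matrices": (1397, 1412),
    "data collection workbooks": (1364, 1610),
}

for name, (done, expected) in artifacts.items():
    pct = completion_pct(done, expected)
    status = "met" if pct >= THRESHOLD else "not met"
    print(f"{name}: {done} of {expected} ({status})")
```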
VA Has Formulated a Schedule for Capability Set 2 Configuration Decisions VA’s EHRM program has formulated a schedule for making EHR system configuration decisions for capability set 2, which are necessary to support the implementation of the Cerner EHR system at the VA Puget Sound Health Care System planned for the fall of 2020. Specifically, VA’s EHRM program is continuing to make EHR system configuration decisions outside of the workshop process, which concluded in October 2019. Currently, EHRM program officials have plans to hold smaller meetings, about a fourth of the size of the national workshops, to make EHR configuration decisions that require input from multiple councils for capability set 2. According to EHRM program officials, the program set a goal of developing capability set 2 workflows, design decision matrices, and data collection workbooks by May 2020 so that the EHR councils could start validating the system configuration decisions at that time. EHRM program officials anticipate that this schedule for capability set 2 gives Cerner enough time to configure the EHR system and establish a training environment to enable implementation of the EHR system at the VA Puget Sound Health Care System planned for the fall of 2020. According to program officials, capability set 2 is composed of about 90 percent of configuration decisions for capability set 1 and 10 percent of additional workflows and data collection workbooks. These officials also told us that, as part of the process of making capability set 2 configuration decisions, they would determine the effectiveness of these decisions based on the implementation of capability set 1 at the Mann-Grandstaff VA Medical Center and make any necessary changes. VA’s Decision-Making Procedures Were Generally Effective, but Key Stakeholders Were Not Always Included VA’s EHRM program established EHR council decision-making procedures that were generally effective. 
In addition, the councils included a wide range of stakeholders, in terms of geographic representation and representation from VA central office, VISNs, and medical facilities. However, according to EHR council participants, VA did not always ensure adequate representation at local workshops. VA’s EHRM Program’s Decision-Making Procedures for EHR Councils Were Generally Effective VA’s EHRM program’s decision-making procedures for the EHR councils were generally effective as demonstrated by adherence to applicable federal standards for internal control. According to these standards, management should establish an organizational structure, assign responsibility, and delegate authority to achieve the entity’s objectives. In addition, according to our leading collaboration practices, clarity can come from agencies working together to define and agree on their respective roles and responsibilities and participating agencies should document their agreement. VA’s EHRM program established the organizational structure, assigned responsibility, and delegated authority for system configuration decisions to the EHR councils. Specifically, the EHRM program developed a charter for the councils that outlined each council chair’s responsibility for managing council membership and ensuring it is consistent with guidelines for broad representation; outlined council member roles and responsibilities, such as participating in face-to-face meetings and conferences, providing subject matter expertise, and guiding EHR system configuration decisions; and delegated authority for EHR system configuration decisions from the EHRM Chief Medical Officer to the council chair and members. According to EHRM program documentation, VA established decision- making authority at the lowest level possible, beginning with the EHR councils, to ensure timely and appropriate decision-making. 
Based on our observations of national council workshop meetings, if a council had questions that involved coordination with another council, the Cerner consultant present would take note of the issue and coordinate a meeting with the relevant councils to discuss the issue. For example, participants from the Ambulatory Council met with participants from the Rehabilitation and Acute Clinical Ancillaries Council to discuss the EHR system configuration decisions for ordering glasses and contacts. Based on our review of the Functional Governance Board charter and meeting minutes, when a decision required coordination and could not be made at the EHR council level, it was identified and escalated to the Functional Governance Board. The Functional Governance Board provided guidance on addressing issues or, in turn, escalated unresolved issues to the higher-level Governance Integration Board, or if appropriate, to a joint VA and DOD coordination process. According to EHRM program officials, as of February 2020, there were no issues escalated from the Functional Governance Board to the Governance Integration Board because the council governance structure strived to make decisions at the lowest level possible. Figure 5 provides an overview of the EHRM program’s decision-making procedures. With respect to collaboration, because VA is using the same system as DOD, VA has had to coordinate with DOD on some decisions. Although both departments have procedures for configuring the Cerner EHR system for their individual needs, VA EHRM program officials noted the importance of coordinating to design a system that would allow sharing of information and tasks between VA and DOD. 
According to VA EHRM program officials, for example, VA and DOD coordination is necessary for workflows pertaining to durable orders for life-sustaining treatments—medical treatments intended to prolong the life of a patient who would die soon without the treatment (e.g., artificial nutrition and hydration, and mechanical ventilation). VA and DOD’s practices differed on how to address such treatment, and Cerner’s process did not accommodate VA’s need to maintain durable orders across patient encounters, so they would not need to be re-written every time a patient changed care setting or location. VA requested changes to the Cerner EHR system to allow it to continue to follow its current process for documenting life-sustaining treatments, but according to DOD officials, the proposed changes did not align with DOD’s position on such treatments, specifically resuscitation statuses. After multiple discussions between the VA and DOD clinicians, the two departments plan to adopt an interim solution. According to VA and DOD officials, VA and DOD’s joint decision-making body, the Functional Decision Group, has met weekly to address coordination issues since early 2019. These officials said that the joint Functional Decision Group determined whether it could make a decision, or whether additional information was needed and a team should be established to work on dispute resolution between the departments. VA EHRM program officials said that the coordination procedures for the joint Functional Decision Group would be formalized and that the roles and responsibilities for coordination between VA and DOD would be clearly defined, in response to a recommendation we made in a previous report. Specifically, VA and DOD have developed a charter for the joint Functional Decision Group, which was signed in April 2020. 
According to EHR council participants, VA and DOD had been developing their coordination procedures as system configuration decisions were made, and decisions that required input from both departments may not have been as timely as they could have been. According to EHRM program officials, the departments ultimately were able to address most decisions, and coordination on remaining decisions was ongoing as of March 2020. VA's EHRM Program Included a Wide Range of Participants at National and Local Workshops, but Did Not Always Ensure the Involvement of Key Stakeholders VA's EHRM Program Largely Met EHR Council Charter Goals for Representation VA generally included a wide range of stakeholders in its 18 EHR councils. Specifically, VA was largely in line with its EHR councils' charter goals to include about 60 percent of council members from the field, with the remainder from the central office, and to have representatives from a range of geographic locations and with sufficient experience and expertise: VA data show that EHR councils had about 58 percent (607 of 1,039) of their members representing the field and about 40 percent (415 of 1,039) representing VA's central office, roughly in line with VA's goals. The councils included participants from a variety of geographic regions, including each of VA's 18 VISNs, with the most participants representing VISN 20, which oversees the two medical facilities where the new EHR system is scheduled to be initially implemented. Participants primarily represented the most complex level of VA medical facilities. Specifically, VA data show that about 83 percent (861 of 1,039) of participants represented level 1 VA medical facilities, whereas about 3 percent (33 of 1,039) and 7 percent (75 of 1,039) represented medium (level 2) and low (level 3) complexity VA medical facilities, respectively. 
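The membership shares above are plain proportions of the 1,039 council participants. A short sketch (the names and structure are illustrative, not from VA's data systems) comparing the reported counts with the charter's roughly 60/40 field-to-central-office goal:

```python
# Illustrative computation of EHR council membership shares from the
# counts cited above. TOTAL_MEMBERS and the goal are from the report;
# the helper and variable names are our own.

TOTAL_MEMBERS = 1039  # EHR council participants, per VA data

def share_pct(count: int, total: int = TOTAL_MEMBERS) -> float:
    """Percent of total council membership."""
    return 100.0 * count / total

field = share_pct(607)           # field representatives (~58 percent)
central_office = share_pct(415)  # VA central office (~40 percent)
level_1 = share_pct(861)         # highest-complexity facilities (~83 percent)

# Within a few points of the charter's roughly 60/40 field-to-central split
print(f"field: {field:.0f}%, central office: {central_office:.0f}%, "
      f"level 1 facilities: {level_1:.0f}%")
```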
EHRM program officials said that the majority of participants represented higher-complexity facilities because participants were drawn from national experts and published authors, and often performed VA-specific processes. Furthermore, smaller medical centers had fewer resources, so clinicians were more likely to be needed to continue providing patient care at those facilities and less likely to be available to serve on councils. According to a voluntary questionnaire VA asked council participants to complete, about 37 percent of the 304 participants who completed it had at least 6 years of experience at VA; 29 percent had at least 16 years of experience; and 19 percent had more than 25 years of experience. In addition to participants from the VA, we observed that EHR council national workshop meetings included participants from outside of the department—such as clinicians from DOD sites and commercial health care systems that had already implemented Cerner's EHR system. These participants provided support for discussions and insight into industry best practices. While the EHR councils included a wide range of participants, in September and October 2019, council participants from both of the initial operating capability sites raised concerns that the councils did not include adequate representation from specialty areas at national workshop meetings. Specifically, these officials said that an insufficient number of specialty physicians, including pulmonologists and gastroenterologists, were included. In addition, VA's summary from the last workshop, national workshop 8, observed that additional subject matter experts representing medical specialties should be included in the EHR system configuration decision process to enhance collaboration and decision-making. 
EHRM program officials, including the Chief Medical Officer and Ambulatory Council chairs, said they had not included certain specialists or scheduled workshops on specialty areas, such as pulmonology and gastroenterology, because they decided to focus first on more foundational decisions, such as those for primary care. Starting in November 2019, following the completion of the eight national workshops, VA EHR councils continued to meet, as necessary, to complete capability set 1 and 2 configuration decisions and had begun to include clinicians from specialty areas in these meetings. VA plans to continue these meetings through September 2020. VA's approach of including clinicians from specialty areas in ongoing configuration decision meetings is generally consistent with our leading collaboration practice that agencies should ensure that all relevant participants be included in any collaborative effort they undertake. By including relevant participants, the program increases the likelihood that it has considered input from participants with unique knowledge, skills, and abilities. Further, including relevant participants increases the likelihood that when implemented, the EHR system will be properly configured to meet the needs of clinicians, and effectively support their efforts to deliver care. VA's EHRM Program Did Not Always Include Key Stakeholders at Its Local Workshops Local workshops at the Mann-Grandstaff VA Medical Center and VA Puget Sound Health Care System did not always include representation from relevant stakeholders, including facility clinicians and staff. Specifically, multiple participants in the local workshop meetings at these facilities, including clinicians and department leads, said that VA's EHRM program did not always effectively communicate information about local workshop meetings to facility clinicians and staff to facilitate the designation of staff to participate and ensure relevant representation at local workshops. 
Local workshop participants stated that they did not always know which local workshop meetings they needed to attend, because they did not receive adequate information about the session topics. This is inconsistent with key collaboration practices identified in our prior work to ensure that relevant participants be included in any collaborative effort and that participating entities have agreed on common terminology. Furthermore, standards for internal control in the federal government call for effective communication and information sharing. Local workshop participants, including clinicians and department leads from medical facilities, said that differences in the use of terminology between VA and Cerner sometimes made it challenging to identify the clinicians and staff that should attend local workshop meetings. For example, some officials reported that they did not believe that a meeting on "charge services" would be relevant to their work given that VA does not typically bill veterans for services. However, they later learned that the meeting actually covered topics beyond billing, such as capturing workload data that was relevant to their work. Because Cerner and VA did not always effectively communicate regarding workshop content for local workshops, local workshops did not always include all relevant stakeholders. As previously stated, VA plans to hold local workshops in advance of the Cerner EHR system implementation at future VA medical facilities. However, VA has not indicated how it will improve the ways in which it describes the topics of these workshops, including providing sufficient detail and defining key terms. If VA improves communication on workshop meeting topics, the EHRM program can increase the likelihood that it will obtain appropriate input from facility clinicians and staff at local workshops to consider in design decisions for the implementation of the EHR system. 
Conclusions VA met its schedule for making the needed system configuration decisions that would enable the department to implement its new EHR system at the first VA medical facility, which was planned for July 2020. In addition, VA has formulated a schedule for making the remaining EHR system configuration decisions before implementing the system at additional facilities planned for fall 2020. VA’s EHRM program was generally effective in establishing decision- making procedures that were consistent with applicable federal standards for internal control. However, VA did not always ensure the involvement of relevant stakeholders, including medical facility clinicians and staff, in the system configuration decisions. Specifically, VA did not always clarify terminology and include adequate detail in descriptions of local workshop sessions to medical facility clinicians and staff to ensure relevant representation at local workshop meetings. Participation of such stakeholders is critical to ensuring that the EHR system is configured to meet the needs of clinicians and support the delivery of clinical care. Recommendation for Executive Action We are making the following recommendation to VA: For implementation of the EHR system at future VA medical facilities, we recommend that the Secretary of VA direct the EHRM Executive Director to clarify terminology and include adequate detail in descriptions of local workshop sessions to facilitate the participation of all relevant stakeholders including medical facility clinicians and staff. (Recommendation 1) Agency Comments We provided a draft of this report to VA and DOD for comment. In its comments, reproduced in appendix II, VA concurred with our recommendation and described steps that it planned to take to address it. 
Specifically, VA noted that it planned and designed its workshops to enable collaboration between clinical and administrative experts and end- users so that the EHR system is designed, validated, and configured to promote interoperability and quality care for veterans. VA stated that it is further refining local workshop agendas and descriptions to facilitate VA subject matter expert identification and participation. VA also provided technical comments on the report, which we incorporated as appropriate. DOD provided technical comments on the report, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretaries of VA and DOD, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact Debra A. Draper at (202) 512-7114 or DraperD@gao.gov or Carol C. Harris at (202) 512-4456 or HarrisCC@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Appendix I: Status of Electronic Health Record System Configuration Decisions, as of November 2019 and March 2020 Data collection workbooks. All EHR councils completed at least 80 percent of expected data collection workbooks. Specifically, by November 2019, three of the 18 councils completed 100 percent of them and by March 2020, each of the councils had completed 100 percent of their data collection workbooks. Table 3 shows the number of data collection workbooks completed in comparison to the total expected for each of the 18 EHR councils based on data from November 13, 2019 and March 26, 2020. 
Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the individuals named above, Mark Bird (Assistant Director), Michael Zose (Assistant Director), Merry Woo (Analyst-in-Charge), Bianca Eugene, and Paris Hawkins made key contributions to this report. Also contributing were Jennie F. Apter, Giselle Hicks, Monica Perez-Nelson, and Ethiene Salgado-Rodriguez.
Why GAO Did This Study VA's existing EHR system is antiquated, costly to maintain, and does not fully support VA's need to exchange health records with other organizations, such as the Department of Defense. As a result, VA has undertaken a modernization effort to replace it. As VA prepares to transition from its existing EHR system to a commercial system, it has the opportunity to design standardized work processes to support the delivery of care and ensure information on veterans' care is consistently captured, regardless of site of care. GAO was asked to review VA's EHR system configuration process. This report examines, among other objectives: (1) how VA made EHR system configuration decisions and assessed the compatibility of the commercial EHR system with its work processes; and (2) the effectiveness of VA's decision-making procedures, including ensuring key stakeholder involvement. GAO observed national and local workshop meetings; visited planned initial implementation sites; reviewed documentation on the processes and schedule; and interviewed VA, DOD, and contractor officials. What GAO Found The Department of Veterans Affairs (VA) used a multi-step process to help ensure that its future commercial electronic health record (EHR) system is configured appropriately for, and is compatible with, its clinical work processes. To configure the EHR system, which VA planned to implement initially at the Mann-Grandstaff VA Medical Center, in Spokane, Washington, in July 2020, and at the Puget Sound Health Care System in the fall of 2020, VA established 18 EHR councils comprising VA clinicians, staff, and other experts in various clinical areas and held eight national workshops between November 2018 and October 2019. At these workshops, the councils decided how to design the functionality of the EHR software to help clinicians and other staff deliver care and complete tasks such as administering medication. 
VA also held eight local workshops at both medical centers to help ensure that the EHR configuration supported local practices. As of March 2020, the EHR councils were continuing to meet to complete configuration decisions. Furthermore, VA plans to hold local workshops in advance of the EHR system implementation at future VA medical facilities. In April 2020, the VA Secretary announced that the department had shifted priorities to focus on caring for veterans in response to the pandemic created by COVID-19. According to program officials, at that time, they paused the implementation of the EHR system and were assessing the impact of the COVID-19 pandemic on VA's planned implementation schedule. GAO found that VA's decision-making procedures were generally effective as demonstrated by adherence to applicable federal internal control standards for establishing structure, responsibility, and authority, and communicating internally and externally, but that VA did not always ensure key stakeholder involvement. Specifically, the councils included a wide range of stakeholders from various geographic regions. However, according to clinicians from the two initial medical facilities for implementation, VA did not always effectively communicate information to stakeholders, including medical facility clinicians and staff to ensure relevant representation at local workshop meetings. As a result, local workshops did not always include all relevant stakeholders. VA has not indicated how it plans to describe these future sessions and define key terms to ensure key stakeholder participation in local workshops. By ensuring that all relevant stakeholders are included, VA will increase the likelihood that it is obtaining input from a wide range of clinicians and staff who will use the EHR system and will increase the likelihood that when it is implemented, the EHR system will effectively support the delivery of care at VA medical centers. 
What GAO Recommends GAO is recommending that VA ensure the involvement of all relevant medical facility stakeholders in the EHR system configuration decision process. VA concurred with GAO's recommendation.
Background Overview of the FHLBanks The FHLBank System comprises 11 federally chartered banks. The FHLBanks represent 11 districts and are headquartered in Atlanta, Boston, Chicago, Cincinnati, Dallas, Des Moines, Indianapolis, New York City, Pittsburgh, San Francisco, and Topeka (see fig. 1). Each FHLBank is cooperatively owned by its members––such as commercial and community banks, thrifts, credit unions, and insurance companies. As of December 31, 2017, the number of member institutions in each district varied widely, as did the total amount of assets each FHLBank held (see table 1). FHLBank Board of Directors Each FHLBank has a board of directors made up of member directors and independent directors. As shown in figure 2, the Federal Home Loan Bank Act (as amended by HERA) and its regulations set forth a number of requirements for FHLBank board directors. As of October 2018, each FHLBank board had 14–24 directors, for a total of 194 directors (see table 2). Of the 194, 108 were member directors and 86 were independent directors, including 24 public interest directors. Each board elects a chair and vice chair who serve 2-year terms. As of October 2018, of the 11 board chairs, six were member directors and five were independent directors, including two public interest directors (see table 3). Each FHLBank has a president who reports to the bank’s board of directors, but no representatives from bank management may serve on the boards. FHFA’s Diversity-Related Requirements and Oversight of FHLBanks To implement requirements in HERA, in December 2010 FHFA issued the Minority and Women Inclusion rule to set forth minimum requirements for FHLBank diversity programs and reporting. 
Among other things, the 2010 rule required each bank to create its own Office of Minority and Women Inclusion (OMWI) or designate an office to perform duties related to the bank’s diversity efforts, and establish policies related to diversity and inclusion, including policies on nominating board directors. The 2010 rule also requires FHLBanks to submit an annual report to FHFA on their diversity efforts. FHFA also evaluates the quality of corporate governance by board directors as part of its on-site annual examinations and off-site monitoring of FHLBanks. For example, FHFA’s examination includes reviewing the bank boards’ responsibilities, board and committee meeting minutes, and the boards’ oversight of the banks’ operations and corporate culture. Our Previous Work on Diversity Our previous work on diversity includes reports on Federal Reserve Banks’ board diversity, FHLBank board governance, women on corporate boards, and diversity in the financial services sector. In 2011, we found limited diversity among the boards of the 12 Federal Reserve Banks. We recommended that the Board of Governors of the Federal Reserve System encourage all Reserve Banks to consider ways to help enhance the economic and demographic diversity of perspectives on boards, including by broadening potential candidate pools. The recommendation was implemented in December 2011. In a 2015 report on FHLBank board governance, we found that FHFA and FHLBanks had taken steps to increase board diversity, including creating regulations that encouraged the banks to consider diversity in board candidate selection and developing processes to identify and nominate independent directors. In a 2015 report on women on corporate boards, we found that while the share of women on boards of U.S. publicly traded companies had increased, reaching complete gender balance could take many years. 
We identified factors that might hinder women’s increased representation on boards, including boards not prioritizing recruiting diverse candidates and low turnover of board seats. In addition, in 2017 we reported that representation of women and minorities at the management level in the financial services sector showed marginal or no increase during 2007–2015. FHFA Has Taken Steps Since 2015 to Encourage Board Diversity at FHLBanks Since our 2015 report on FHLBank board governance, FHFA has taken additional actions to encourage diversity on FHLBank boards, including adding a requirement for the banks to report board demographics, clarifying expectations for board elections outreach, requesting the creation of a system-wide board diversity task force, and allowing some banks to add an independent director. FHFA has a limited role in overseeing FHLBanks’ board diversity, according to FHFA staff, because that is not part of the agency’s statutory responsibilities. While FHFA reviews the list of independent director nominees for FHLBank boards to ensure that the nominees meet all eligibility and qualification requirements, board directors are not FHLBank employees. Rather, they form the oversight body of each bank. In contrast, FHFA has a larger role in monitoring diversity efforts related to the workforce and suppliers of the banks. For example, the agency’s annual examination manual contains a section that covers such efforts. FHFA oversight of diversity efforts also includes reviewing the FHLBanks’ annual reports on diversity efforts, which the banks are required to submit under HERA. In adopting its Minority and Women Inclusion rule of 2010 to implement this requirement, FHFA stated that it would analyze and include information from the banks’ annual reports in the agency’s own annual report to Congress. The banks’ annual reports initially included data related to their workforce and supplier diversity efforts. 
In May 2015, FHFA amended the 2010 rule and added two reporting requirements for the annual reports: (1) data on gender and race/ethnicity of board directors (which the directors would voluntarily self-report), and (2) information on the banks’ outreach efforts (such as to promote diversity when nominating and soliciting director candidates). FHFA stated in its 2015 amendments that it intended to use the director data to analyze future trends in board diversity and the effectiveness of each bank’s policies and procedures to encourage board diversity. FHFA also clarified expectations on FHLBank diversity efforts in a 2016 amendment to its regulation related to bank board directors as well as in guidance and communications to FHLBanks. Clarifying scope of election outreach activities. According to FHFA staff, FHLBanks had inquired if the existing regulation would prohibit the banks from conducting outreach to or recruiting of diverse board candidates in the nomination or solicitation process. FHFA regulation restricts FHLBanks from advocating for a particular member director candidate or influencing the board election for member and independent directors. According to FHFA staff, to address these concerns, the agency amended the regulation in 2016 to clarify that the banks may conduct outreach to find diverse board director candidates. FHFA staff added that the regulation amendment also made clear that the banks may fulfill the regulatory requirement to encourage consideration of diversity in nominating or soliciting candidates for board director positions without violating restrictions on advocating for particular director candidates. Guidance. FHFA provided FHLBanks with guidance related to diversity, including board diversity. For example, the agency provided guidance on the roles and duties of the banks’ OMWI officers and the scope of diversity regulations. 
FHFA provided the banks a template to report newly required data on the gender and race/ethnicity of board directors. To help banks prepare their annual reports, in June 2018 FHFA also developed an annual report template that outlines and describes the contents of the required reporting elements. The template includes sections for individual FHLBanks to present data on board composition by diversity categories and to describe past and future outreach activities and strategies to promote board diversity and outcomes from the bank’s activities. Communications. FHFA has communicated guidance and discussed board diversity issues with FHLBank boards and with staff involved in the banks’ board diversity efforts. For example, FHFA staff gave presentations at meetings during which FHLBank board directors shared information on board diversity efforts. The staff noted FHFA’s OMWI director generally attends the semi-annual conferences of the banks’ OMWI officers, during which she discusses diversity issues such as the roles and responsibilities of these officers and the scope of the FHFA regulations. Furthermore, FHFA OMWI and other offices developed and implemented some strategies to help FHLBanks maintain or increase board diversity. In 2016, FHFA OMWI staff met with FHLBanks and requested that the banks create a Bank Presidents Conference Board Diversity Task Force to share practices to promote board diversity. The staff said that they act as facilitators and informal advisors and may provide technical assistance to the system-wide task force—for example, by developing a list of practices related to board diversity. Also, as encouraged by FHFA, starting in 2017, each bank has a representative (a board director or the bank president) on the task force. Also, based on FHFA’s 2016 annual FHLBank board analysis, the FHFA Director approved requests from three FHLBanks to add an independent director seat for their 2017 boards to help maintain or increase board diversity. 
FHFA extended the offer to the other banks (except Des Moines, as its board was undergoing restructuring after the merger with Seattle). FHFA staff said that, in preparation for their 2017 FHLBank board analysis, they informally monitored the gender and minority status of the additional independent director seats filled by the seven banks that accepted the offer. Six of the seats were filled by women (of whom two were minorities) and one seat was filled by a minority male, according to FHFA staff. FHFA staff also told us the FHFA Director has some discretion on the number of director seats based on an individual bank’s circumstances, including the request to maintain diversity. For example, in 2018, one FHLBank requested to retain its female board vice chair to help preserve diversity and institutional knowledge on its board. FHFA granted the bank’s request to keep the director for another year. FHFA staff told us that FHFA has considered issuing guidance in two areas, but that these areas do not represent immediate priorities for their diversity efforts. Specifically, FHFA OMWI staff stated that the office intended to develop an examination module on board diversity, but this is not a high priority for the office in 2019. As previously noted, FHFA’s current examination manual includes a section that covers FHLBanks’ workforce and supplier diversity efforts. But the manual does not consider board diversity-related issues in as much detail as the supplier and workforce section. For example, it covers FHFA’s review of the quality of corporate governance by board directors and only briefly mentions the consideration of diversity for potential board director candidates. Also, the 2015 rule amendments noted that the agency intended to develop guidance to further elaborate on its expectations related to outreach activities and strategies for the banks’ board directors.
FHFA staff told us that they would like to focus on ongoing diversity efforts and gather more information before starting new efforts.

FHLBank Boards Increased Share of Female Directors Since 2015, but Trends for Minority Directors Were Less Clear

Share of Female Board Directors Increased from 2015 to October 2018, and Varied by FHLBank

At the overall FHLBank board level, the share of female directors increased from 18 percent (34 directors) in 2015 to 23 percent (44 directors) in October 2018 (see fig. 3). This represented a continuation of an upward trend. For example, we previously reported a 16 percent share (31 female directors) in 2014. Each FHLBank had at least two female board directors in October 2018, but some boards had higher shares of female directors than others. As shown in figure 4, four banks—Chicago, Des Moines, Dallas, and Pittsburgh—had four or more female board directors (representing 22–38 percent of the boards). In comparison, seven banks had two or three female directors (representing 14–20 percent). Additionally, FHLBanks varied in how many female directors were added from 2015 to October 2018—one bank added two, six each added one, and four added none. For additional information on the number of board directors by bank and by gender from 2015 to October 2018, see appendix II. Women have some representation in board leadership positions. In October 2018, two FHLBanks—Des Moines and Pittsburgh—had female vice chairs of their respective boards. Another bank (San Francisco) had a female vice chair of its board in 2016 and 2017. In 2015, we reported that one bank (Atlanta) had a female board chair. Additionally, each bank’s board has committees (such as the Audit Committee and the Risk Management Committee) with committee chairs and vice chairs. Ten of the 11 banks had board committees with at least one female chair or vice chair in October 2018.
The share of women who chaired board committees was the same as the share of women on the overall FHLBank boards in October 2018—23 percent. We compared female representation on FHLBank boards to that of other corporate boards and that of senior management in the financial services sector. Women constituted 23 percent of FHLBank boards in October 2018 and 22 percent of boards of the companies in the Standard and Poor’s 500 in 2017, as reported by Institutional Shareholder Services. Our analysis of the most recently available EEOC data found that the share of women in senior management positions in the financial services industry in 2016 was 29 percent. The share of women on FHLBank boards was 19 percent in the same year. Senior management in the financial services sector represents a pool of comparable candidates that could provide directors for FHLBank boards.

FHLBank Data Showed the Share of Minority Directors Increased Since 2015, but Data Are Incomplete

The share of directors who self-identified as racial/ethnic minorities increased from 2015 to 2017, but the size of the increase is unclear due to the number of directors who did not report this information. Board directors voluntarily submit demographic information, including race/ethnicity. Some directors might have chosen not to self-identify their race/ethnicity.

Reported Data Showed Increases in Minority Directors

At the overall FHLBank board level, the share of directors who self-identified as racial/ethnic minorities increased from 2015 to 2017 (see fig. 5). Eleven percent (20 directors) of FHLBank board directors self-identified as racial/ethnic minorities in 2015 and 15 percent (30 directors) in 2017. Four percent (7 directors) did not self-identify in 2015 and 8 percent (15 directors) in 2017. The increase in the number of directors who identified as racial/ethnic minorities shows an upward trend from 10 percent (19 directors) in 2014, as we reported in 2015.
The number of directors who self-identified as racial/ethnic minorities varied by bank. As shown in figure 6, all 11 FHLBanks had at least one minority director on the board in 2017, and six banks had three or more minority directors. Ten of the 11 banks each added one minority director during 2015–2017. For additional information on the number of board directors by bank and by race/ethnicity in 2015–2017, see appendix II. More specifically, as seen in table 4, in 2017, 9 percent (18 directors) identified as African-American, 4 percent (8 directors) identified as Hispanic, 2 percent (3 directors) identified as Asian, and 1 percent (1 director) identified as “other.” Racial/ethnic minorities have limited representation in board leadership positions. As of October 2018, one FHLBank had a vice chair of its board who identified as a minority. In 2017, another bank had one vice chair of its board who identified as a minority. We compared the FHLBank boards’ share of racial/ethnic minorities to those of corporate boards and senior management in the financial services sector. In 2017, 15 percent of the FHLBank board directors identified as racial/ethnic minorities, as previously noted. This compares to 14 percent on boards of directors of companies in the Standard and Poor’s 500 in 2017, according to Institutional Shareholder Services, and 12 percent in senior management of the financial services industry in 2016, based on our analysis of EEOC data. In 2016, the share of minority directors on FHLBank boards was 13 percent.

Varying Collection Processes May Contribute to Data Gaps

Board demographic data collection processes vary by FHLBank, which may contribute to the differences in the number of directors who did not self-identify their gender, race/ethnicity, or both.
FHFA has not reviewed the banks’ varying processes to determine whether some processes were more effective, such as whether the practices allowed banks to more effectively identify and follow up with directors who may have forgotten to respond. All directors at three banks self-reported their gender and race/ethnicity in 2015–2017, but some directors at the other eight banks did not self-identify this information. However, we could not determine whether those directors deliberately chose not to self-report this information or inadvertently did not respond to the data collection forms or questions. As allowed by FHFA regulation, FHLBanks varied in the data collection forms they used, questions they asked, and methods they used to distribute forms to board directors to obtain self-reported gender and race/ethnicity information. For example, the three banks with complete data from all directors each used different data collection forms. One bank collected gender and race/ethnicity as a voluntary section of its annual board director skills assessment, which was filled out by each director. Two banks distributed a separate data collection form at a board director meeting or through an online survey, which might have included a mechanism for tracking which directors had not responded to the survey. The other eight banks, which had incomplete demographic data, also used varying data collection processes. Of these, four banks distributed their data collection forms during a board meeting or through an e-mail, and the other four banks used online surveys. Of the 11 banks, six included an option on their forms to mark “opt not to self-identify,” while five included similar language as part of the form indicating that completing the form is voluntary. 
Although some banks had similar approaches to data collection, such as using an online survey, it is unclear whether certain approaches helped some banks to obtain more complete data despite directors’ right to opt out of self-reporting demographic information. FHFA has implemented some efforts to improve the quality of the data FHLBanks report to the agency, but FHFA staff told us that such efforts have not included a review of how the banks collect board director demographic data. For example, FHFA created templates to help banks report board data and board-related content, and its data reporting manual focused on reporting data related to the banks’ workforce, supplier base, and financial transactions. However, none of these documents discussed processes for collecting board director demographic data. According to FHLBank staff, FHFA’s instructions on board director data collection are limited to what is stated in the regulation. That is, banks should collect data on their board directors’ gender and race/ethnicity using EEOC categories, and such data should be voluntarily provided by the directors without personally identifiable information. FHFA’s 2015 regulation amendments require FHLBanks to compare the board demographic data with the prior year’s data and provide a narrative of the analysis. FHFA also stated in the amendments that it intended to use the director data to establish a baseline to analyze future trends in board diversity. Additionally, federal internal control standards state that agency management should use quality information to achieve their objectives. Quality information would include complete and accurate information that management can use to make informed decisions in achieving key objectives. By obtaining a better understanding of the different processes FHLBanks use to collect board demographic data, FHFA and the banks could better determine which processes or practices could contribute to more complete data.
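As an illustration of what such a process-level difference might look like, a simple response tracker (purely hypothetical; no FHLBank system is described in this report) could flag directors who have not returned a voluntary form at all, as distinct from those who explicitly opted out:

```python
# Hypothetical sketch: track which board directors have not returned a
# voluntary demographics form. Names, record fields, and the
# "Opt not to self-identify" value are illustrative assumptions.

def outstanding_responses(directors, responses):
    """Return directors with no response at all.

    Directors may decline to self-identify; an explicit opt-out still
    counts as a response, so only truly missing forms are flagged for
    follow-up, without pressuring anyone to disclose.
    """
    return [d for d in directors if d not in responses]

board = ["Director A", "Director B", "Director C"]
responses = {
    "Director A": {"gender": "Female", "race_ethnicity": "Asian"},
    "Director B": {"gender": "Opt not to self-identify"},
}
print(outstanding_responses(board, responses))  # ['Director C']
```

A tracker along these lines would distinguish a deliberate opt-out from a missed form, which is precisely the ambiguity the banks’ varying processes leave unresolved.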
For example, there may be practices that could help banks more effectively follow up with directors who might have missed the data collection forms or questions. More complete board demographic data could help FHFA and the banks more effectively analyze data trends over time and demonstrate the banks’ efforts to maintain or increase board diversity.

FHLBanks Report Some Challenges, but Have Taken Steps to Increase Their Board Diversity

FHLBanks report some challenges that may slow or limit their efforts to increase board diversity, which include low levels of diversity in the financial sector; member institutions not prioritizing diversity; balancing the need for diversity with retaining institutional knowledge; and competition for women and minority candidates. Despite these challenges, the banks have taken several steps to help increase board diversity.

FHLBank Boards Report Some Ongoing Challenges in Their Efforts to Increase Diversity, Especially among Member Directors

According to FHLBank representatives, including board directors, the FHLBank boards face challenges that may slow or limit their efforts to increase diversity, including the following:

Low levels of diversity in the financial sector. Twelve representatives from nine FHLBanks told us that the pool of eligible women and minority board candidates is small in the banking and financial sector. For example, five representatives emphasized that the majority of member institutions have chief executive officers (CEO) who are white males. In particular, one director told us that out of the hundreds of member institutions affiliated with his FHLBank, he knew of only six female CEOs. Directors representing five banks also noted that the pool of eligible, diverse candidates in senior management positions in the financial services sector can be even smaller in certain geographic areas.
As a result, it can be particularly challenging for some banks to fill member director seats because, by statute, candidates for a given FHLBank board must come from member institutions in the geographic area that the board seat represents. For example, one director said that the pool of such candidates is especially small in rural areas. In 2015, FHFA told us that the overall low levels of diversity in the financial services sector, including at FHLBank member institutions, increased the challenges for improving board diversity. However, representatives of corporate governance organizations with whom we spoke told us that the financial services sector does not face unique challenges. Representatives also said that qualified women and minority candidates are present in the marketplace. Our analysis of 2016 EEOC data found that the representation of women in senior management in the financial services sector was within 1 percentage point of the share of women in senior management in the private sector overall, and minority representation was within 4 percentage points. Member institutions may not always prioritize diversity in director elections. As previously discussed, member institutions nominate member director candidates and vote for the member director and independent director candidates. Ten representatives from eight FHLBanks stated that member institutions may prioritize other considerations over diversity when they nominate and vote on board candidates, such as name recognition or a preference for candidates who are CEOs. One director told us that the member banks may not be as interested in diversity as the FHLBanks. Another director emphasized that FHLBanks are trying to change attitudes and embed diversity in the member institutions’ operations. He characterized this process as a marathon, not a sprint. 
Board directors with whom we spoke also stressed that FHFA regulations do not allow the FHLBank boards to exert influence over how member institutions vote. Board directors can emphasize the importance of diversity to member institutions but cannot in their official capacity campaign for specific candidates. Balancing the need for diversity with retaining institutional knowledge. Directors from five banks told us that they aim to balance bringing in new women or minority directors with retaining the valuable institutional knowledge of incumbent directors. One director added that new board directors face a steep learning curve. Thus, the directors at some banks will recruit new directors only after allowing incumbent directors to reach their maximum number of terms (which could translate to several years). As we reported in 2015, FHFA staff acknowledged that low turnover, term lengths, and the need to balance diversity with required skills posed challenges to the FHLBank board diversity. In our 2016 report on women on corporate boards, relevant stakeholders acknowledged this as a challenge because directors with longer tenure possess knowledge about a company that newer directors cannot be expected to possess. Competition for women and minority candidates. Board directors from five FHLBanks told us that they face competition as they seek to recruit women and minority candidates. For example, a director from one bank told us that his board encouraged a potential female candidate to run for a director seat. However, the candidate felt she could not accept the opportunity because of her existing responsibilities on the boards of two publicly traded companies. While these challenges can apply to member and independent directors, representatives from all 11 FHLBanks emphasized that it can be particularly challenging to find and elect female or minority member directors. 
Our analysis of FHLBank board director data confirmed that across 11 FHLBank boards, female representation was lower among member directors (13 directors or 12 percent) than among independent directors (31 directors or 36 percent) in October 2018. FHFA stated in this review, as it did in 2015, that it is aware of the potential difficulty of identifying diverse candidates for member directors and that greater board diversity likely would be achieved with independent directors.

FHLBanks Developed Practices and Strategies to Help Increase Board Diversity

Since 2015, FHLBanks have taken actions to help increase board diversity, including developing and implementing practices and strategies that target board diversity in general and member directors specifically. As previously discussed, at the request of FHFA, the banks established the Bank Presidents Conference Board Diversity Task Force. The purpose of the task force is to develop recommendations for advancing board diversity and to enhance collaboration and information sharing across FHLBank boards. Each bank is represented by a board director or the bank president. Representatives meet regularly to discuss challenges, recommend practices, and receive training. One task force representative told us that her participation on the task force has helped demonstrate to her board and bank that diversity matters. Others mentioned that the ability to share practices and learn from other banks was a great benefit. As part of its work, the task force developed a list of practices that FHLBanks have used or could use to improve board diversity (see text box). According to bank staff, the list was approved by the presidents of each bank and distributed to bank staff.
The practices can be generally summarized into three categories—emphasizing the importance of diversity; assessing skills diversity; and seeking new ways to find candidates—which are generally similar to the commonly cited practices for improving board diversity we identified in 2015.

Summary of Practices Developed by Bank Presidents Conference Board Diversity Task Force of the Federal Home Loan Banks

- Include references to diversity on the bank website, in appropriate publications, in presentations about the bank, and particularly in all election materials.
- Educate current board members on the business case for diversity.
- Educate member institutions on the business case for diversity through member meetings, newsletter articles, etc. to help develop a more diverse member base and help groom new leaders.
- Perform a skills assessment of current board skills and areas of expertise and determine skill sets and expertise needed.
- Review the term limits of current directors and determine the possible loss of continuity if multiple incumbent directors leave the board in a short period of time.
- Build a pool of diverse member and independent candidates.
- Conduct outreach to regional and national business organizations, such as trade associations, women and minority business groups, and professional organizations, to ask for referrals of possible candidates and form relationships prior to a board election.
- Seek an additional independent board seat from the Federal Housing Finance Agency.

Example of Diversity Statement in an Election Announcement for a Federal Home Loan Bank

The Federal Home Loan Bank of New York (FHLBNY) included the following statement in its 2017 director election announcement package: “The FHLBNY’s Board of Directors consists of a talented group of dedicated individuals that benefits from, among other things, demographic (including gender and racial) diversity, and we expect that this will continue in the future.
As you consider potential nominations for Member Directorships and give thought to persons who might be interested in Independent Directorships, please keep diversity in mind. Your participation in this year’s Director Election process is greatly appreciated, and will help continue to keep the Board and the FHLBNY diverse and strong.” Emphasizing the importance of diversity. All 11 FHLBanks included statements in their 2017 election announcements that encouraged voting member institutions to consider diversity during the board election process. Six banks expressly addressed gender, racial, and ethnic diversity in their announcements. One female director with whom we spoke said that she was encouraged to run for a board seat after reading an election announcement in 2013 that specifically called for candidates with diverse backgrounds. All 11 FHLBanks also referenced their commitment to diversity on their websites, including posting diversity and inclusion policies, describing diversity missions, or including board statements on diversity. Directors we interviewed from all 11 FHLBanks told us that their bank conducted or planned to conduct diversity training for board directors. The training sessions covered topics such as the business case for diversity and unconscious bias. Additionally, board directors from two banks discussed efforts to encourage member institutions to increase diversity, such as holding a panel on the importance of diversity at the annual member conference. In 2015, we found that demonstrating a commitment to diversity in ways similar to these is a first step towards addressing diversity in an organization. Assessing skills diversity. Nine FHLBanks performed board skills assessments annually or biennially. These assessments asked directors to evaluate their knowledge of specific topic areas. 
FHFA regulation allows each bank to annually conduct a skills and experience assessment and, if applicable, inform members before elections of particular qualifications that could benefit the board. In 2015, we found that conducting a skills assessment was a commonly cited practice for boards seeking to increase representation of women and minorities. The other two FHLBanks conducted board self-assessments annually, focused on board effectiveness and organization, but did not evaluate the skills of their individual directors. All 11 FHLBanks also reported regularly reviewing the remaining terms of current directors to determine the possible loss of continuity. Seeking new ways to find candidates. Representatives from 10 FHLBanks noted that their banks maintain a pool of diverse director candidates for future open positions. FHLBanks described using various methods to build these pools. All 11 banks described outreach to trade organizations, industry groups, universities, and nonprofit organizations when looking to identify women and minority candidates. For example, FHLBank of Pittsburgh identified 15 organizations in its district that actively promote diversity and the inclusion of women and minorities in business to specifically target in 2017. Directors from seven banks also reported hiring a search firm or consultant to help them identify women and minority candidates. These activities are consistent with commonly cited practices described in our 2015 work that boards can use to reach out beyond the typical pool of applicants. As previously mentioned, seven FHLBanks requested or were offered an additional independent director seat by FHFA. According to FHFA staff, four of the seats were filled by white females, two were filled by minority females, and one was filled by a minority male.

Example of a Diversity Practice Focused on Member Directors

In 2017, the Federal Home Loan Bank of San Francisco developed a Member Director Diversity Outreach Plan.
The plan included eight steps that provide timelines and specific assignments for directors and bank management. For example, steps include conducting early outreach to trade organizations where women and minority directors might participate, individual director outreach to potential candidates, and developing a list of prospective candidates in case of vacancy appointments. Following the implementation of this plan, member institutions elected one female director and one minority director to fill the vacant member director seats. Fill interim seats with women and minority candidates. FHLBanks can appoint women or minority candidates to fill interim member director seats. By regulation, when a director leaves the board in mid-term, the remaining board directors may elect a new director for the remaining portion of the term. For example, the FHLBank of Pittsburgh reported electing a minority director in 2017 to fill a vacant member director seat. One director told us that when a female or minority director is elected for an interim term, the election increases the likelihood of the director being elected by the member institutions for a following full term. Conduct mentoring and outreach. FHLBank board directors also can use their personal networks to conduct outreach and mentor potential candidates. Current directors can pledge to identify and encourage potential women and minority candidates to run for the board. For example, one director told us that his board emphasizes the need for directors to pay attention to potential women and minority candidates they meet. This director said he had personally contacted qualified potential candidates and asked them to run. Another director noted that women and minority directors are likely to know other qualified candidates with diverse backgrounds. These directors can identify and refer individuals in their networks. Another director emphasized the importance of member directors conducting outreach to member institutions.
Member directors have the most interaction with the leadership of member institutions and can engage and educate them on the importance of nominating and electing diverse member directors. Look beyond CEOs. Additionally, FHLBanks can search for women and minority candidates by looking beyond member bank CEOs. By regulation, member directors can be any officer or director of a member institution, but there is a tendency to favor CEOs for board positions, according to board directors, representatives of corporate governance organizations, and academic researchers with whom we spoke. The likelihood of identifying a woman or minority candidate increases when member institutions look beyond CEOs to other officers, such as chief financial officers or board directors. For example, the FHLBank of Des Moines expanded its outreach to women and minority candidates to include board directors at member institutions. In 2017, a female director who is a board member of her member institution was elected.

Conclusions

The Housing and Economic Recovery Act of 2008 emphasized the importance of diversity at the FHLBank System, and FHFA and FHLBanks have undertaken efforts to encourage diversity at the banks’ boards. In particular, FHFA plans to use data it collects on the gender and race/ethnicity of board directors as a baseline to analyze trends in board diversity. While FHFA regulation allows directors to choose not to report this information, the banks’ varying data collection processes did not always allow banks to accurately account for missing information (as in the case of directors forgetting to respond to the data questions or fill out forms). Reviewing the processes the banks use to collect the demographic data could help FHFA and the banks identify practices to produce data that would better allow FHFA to track trends in board diversity. FHFA could work with FHLBanks (potentially through the system-wide Board Diversity Task Force) to conduct such a review.
Recommendation for Executive Action

The Director of FHFA’s Office of Minority and Women Inclusion, in consultation with FHLBanks, should conduct a review on each bank’s processes for collecting gender and race/ethnicity data from boards of directors and communicate effective practices to FHLBanks. (Recommendation 1)

Agency Comments

We provided a draft of this report to FHFA and each of the 11 FHLBanks for review and comment. In its comments, reproduced in appendix III, FHFA agreed with our recommendation. FHFA commented that it intends to engage with FHLBanks’ leadership in 2019 to discuss the board data collection issue and address our recommendation. FHFA also stated that it plans to request that the Board Diversity Task Force explore the feasibility and practicability for FHLBanks to adopt processes that can lead to more complete data on board director demographics. In addition, four FHLBanks provided technical comments, which we incorporated as appropriate. The other seven FHLBanks did not have any comments. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Acting Director of FHFA, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or ortiza@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.
Appendix I: Objectives, Scope, and Methodology This report examines the (1) extent to which the Federal Housing Finance Agency (FHFA) has taken steps to encourage board diversity at the Federal Home Loan Banks (FHLBank); (2) trends in diversity composition (gender, race, and ethnicity) for the boards of individual FHLBanks; and (3) challenges FHLBanks face and practices they use in recruiting and maintaining a diverse board. While diversity has many dimensions, this report focuses on gender, race, and ethnicity. To understand the steps FHFA has taken to encourage FHLBank board diversity, we reviewed relevant laws and regulations related to FHLBank boards, including FHFA regulations on director elections and diversity reporting requirements. For example, we reviewed the relevant sections in the Housing and Economic Recovery Act of 2008 pertaining to FHFA and the banks and FHFA’s 2010 Minority and Women Inclusion rule and its 2015 amendments. We also reviewed other FHFA and bank documentation related to board director elections and diversity considerations. For example, we reviewed FHFA’s annual board director analysis for 2016–2018 to identify actions the agency took to help maintain or increase the number of female or minority directors at the FHLBank boards. Additionally, we interviewed FHFA staff to understand the agency’s role in overseeing FHLBank board diversity and the agency’s efforts in helping the banks maintain or increase board diversity. To describe trends in FHLBank board diversity, we analyzed gender and race/ethnicity data self-reported by board directors in FHLBanks’ annual reports to FHFA as of the end of 2015, 2016, and 2017. The banks’ annual reports use the gender and race/ethnicity classifications from the Employer Information Report (EEO-1) of the Equal Employment Opportunity Commission (EEOC). 
The EEO-1 report race/ethnicity categories are Hispanic or Latino, White, Black or African-American, Native Hawaiian or Other Pacific Islander, Asian, Native American or Alaska Native, and Two or More Races. The Hispanic or Latino category in EEO-1 incorporates Hispanics or Latinos of all races. For our report, we used the following categories: Hispanic, White, African-American, Asian, and “Other.” We included only non-Hispanic members under White, African-American, Asian, and “Other.” We included Asian and Native Hawaiian or Other Pacific Islander under the Asian category, and we included Native American or Alaska Native and Two or More Races under “Other.” To provide more recent data on gender composition, we also analyzed data on the gender of directors who were on boards as of October 17, 2018. Specifically, we compiled a list of board directors who started or continued their terms on the boards in 2018, based on board director information from the banks’ 2017 Form 10-K filings with the Securities and Exchange Commission (SEC). The filings include the names and brief biographies of board directors, which we used to derive the gender data for directors. For example, if directors were referred to as “Mr.” in the Form 10-Ks, we counted them as male. If they were referred to as “Ms.,” we counted them as female. We then confirmed with each FHLBank the compiled list of board directors, as of October 17, 2018. Because some directors did not self-identify their gender in 2015–2017 annual reports, we also used information in the banks’ 2014–2016 Form 10-Ks to derive data on the gender of the banks’ board directors. As a result, we were able to report the gender information for all FHLBank board directors from 2015 through October 2018. We separately requested the names of the chairs and vice chairs for the committees of each bank’s board as of October 26, 2018. 
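The two derivation rules described above — collapsing EEO-1 race/ethnicity categories into the report's five groups, and inferring gender from the honorific used in a Form 10-K biography — amount to simple lookups. The sketch below is illustrative only; the function and category names are ours, and GAO's analysis was not necessarily performed in code.

```python
# Illustrative sketch of the recoding rules described above (not GAO's code).

# EEO-1 race/ethnicity categories collapsed into the report's five groups.
# Per EEO-1 rules, "Hispanic or Latino" includes Hispanics or Latinos of all
# races; the remaining groups therefore contain only non-Hispanic members.
EEO1_TO_REPORT = {
    "Hispanic or Latino": "Hispanic",
    "White": "White",
    "Black or African-American": "African-American",
    "Asian": "Asian",
    "Native Hawaiian or Other Pacific Islander": "Asian",
    "Native American or Alaska Native": "Other",
    "Two or More Races": "Other",
}

def recode_race_ethnicity(eeo1_category):
    """Collapse an EEO-1 category into one of the report's five groups."""
    return EEO1_TO_REPORT[eeo1_category]

def derive_gender_from_honorific(honorific):
    """Infer gender from the honorific used in a Form 10-K biography."""
    if honorific == "Mr.":
        return "Male"
    if honorific == "Ms.":
        return "Female"
    return None  # gender not derivable from the filing alone
```

For example, a director reported as "Two or More Races" in a bank's annual report would fall under "Other," and a director referred to as "Ms." in a 10-K would be counted as female.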
We then derived the gender of the chairs and vice chairs for these committees based on the information in the banks’ Form 10-Ks. To analyze data on board director race/ethnicity, we relied on FHLBanks’ 2015–2017 annual reports. However, we were not able to use banks’ Form 10-Ks to derive data on race/ethnicity for board directors who did not self-identify race/ethnicity in the annual reports because the 10-Ks do not include such information. We also requested and analyzed data from each bank on the gender and race/ethnicity of their board chair and vice chair as of October 17, 2018. We assessed the reliability of the data from the banks’ annual reports and Form 10-Ks through electronic testing, a review of documentation, and interviews with knowledgeable agency staff, and we determined these data to be sufficiently reliable for describing the overall trends and composition of gender and race/ethnicity at the FHLBank boards, except the data for directors who did not self-identify their race/ethnicity, as discussed in the report. We also compared the most recently available demographic information on FHLBank board directors with the demographic composition of senior management in the financial services industry and the overall private sector (excluding financial services), based on data from the 2016 EEO-1 report from EEOC. Senior management in the financial services industry represents a pool of comparable candidates that could provide directors for FHLBank boards. The EEO-1 report data are annually submitted to EEOC by most private-sector firms with 100 or more employees. The data include gender and race/ethnicity of the employees by job category. We included workforce from all sites of multi-establishment companies (companies with multiple locations). Consequently, the analysis included in this report may not match the analysis found on EEOC’s website, which excludes workforce from sites of multi-establishment companies with fewer than 50 employees. 
In our analysis of senior management-level diversity in the financial services sector, we included companies in the finance and insurance industry categorized under code 52 of the North American Industry Classification System. We assessed the reliability of the data from the EEO-1 report through electronic testing, a review of documentation, and interviews with knowledgeable agency staff. We determined these data to be sufficiently reliable for comparing the composition of gender and race/ethnicity in the financial services sector and the overall private sector with that of the FHLBank boards. Furthermore, to provide a general comparison of FHLBank board diversity composition with corporate boards of U.S. companies, we reviewed research that discussed data related to diversity at corporate boards of U.S. companies in recent years. In addition, from each FHLBank, we requested and reviewed the instrument they used to collect gender and race/ethnicity information from their board directors. We also obtained and reviewed information on the methods the banks used to distribute and collect the data collection instruments, and any instructions FHFA provided to the banks or that the banks provided to the board directors on collecting this information. We reviewed relevant information from the banks’ annual reports and relevant regulations on collecting and submitting board directors’ gender and race/ethnicity information. We also compared the banks’ data collection processes with relevant federal internal control standards. To determine the challenges the FHLBanks face and practices they use to recruit and maintain a diverse board, we interviewed staff at FHLBanks and FHFA to learn about the Bank Presidents Conference Board Diversity Task Force and the list of diversity practices compiled by the task force. We reviewed and analyzed the banks’ 2017 annual reports to learn about the most recent practices the banks implemented. 
We also reviewed the banks’ websites and bank documents, such as election materials and skills assessments for all 11 banks. In addition, we conducted semi-structured interviews with 10 board directors and one bank president, who act as representatives on the system-wide board diversity task force. We also conducted semi-structured interviews with a nongeneralizable sample of FHLBank board chairs from six banks (Atlanta, Boston, Des Moines, Pittsburgh, San Francisco, and Topeka). We selected these banks to achieve variation in board diversity composition (share of women and minority directors), asset size, and geographic locations. In these interviews, we asked directors and staff about the challenges their banks faced as they sought to increase or maintain diverse boards. We also asked about their participation on the task force, the task force diversity practices, and any other practices their banks had implemented related to board diversity efforts. To determine if the task force diversity practices generally followed commonly cited practices used to improve board diversity, we compared the task force practices against commonly cited practices we identified in previous work in 2015. To verify that the practices we identified in 2015 were still relevant and useful, we interviewed three academics and representatives of four organizations that advocate for board diversity, including gender and racial/ethnic diversity. We selected these external stakeholders based on their research and experience related to increasing board diversity and referrals from others knowledgeable in the field. In our interviews with external stakeholders, we also asked about the challenges that financial organizations or other publicly traded companies may face as they work to increase or maintain board diversity. We compared these answers to the challenges that FHLBank representatives described. 
We conducted this performance audit from July 2018 to February 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Number of Board Directors at Federal Home Loan Banks, by Gender and by Race/Ethnicity Appendix III: Comments from the Federal Housing Finance Agency Appendix IV: GAO Contact and Staff Acknowledgments GAO Contact Anna Maria Ortiz, (202) 512-8678, ortiza@gao.gov. Staff Acknowledgments In addition to the individual named above, Kay Kuhlman (Assistant Director), Anna Chung (Analyst in Charge), Laurie Chin, Kaitlan Doying, Jill Lacey, Moon Parks, Barbara Roesmann, Jessica Sandler, and Jena Sinkfield made key contributions to this report.
Why GAO Did This Study The FHLBank System consists of 11 regionally based banks cooperatively owned by member institutions. In 2018, each FHLBank had a board of 14–24 directors. Member directors are nominated from member institutions and independent directors from outside the system. Member institutions vote on all directors. At least two independent directors on a board must represent consumer or community interests. FHFA is the regulator of the FHLBanks. GAO was asked to review FHLBanks' implementation of board diversity and inclusion matters. This report examines (1) steps FHFA took to encourage board diversity at FHLBanks; (2) trends in gender, race, and ethnicity on FHLBank boards; and (3) challenges FHLBanks face and practices they use to recruit and maintain diverse boards. GAO analyzed FHLBank data on board demographics, reviewed policies and regulations, and reviewed previous GAO work on diversity at FHLBanks and the financial services industry. GAO interviewed FHFA and FHLBank staff and a nongeneralizable sample of FHLBank board directors and external stakeholders knowledgeable about board diversity. What GAO Found The Federal Housing Finance Agency (FHFA) has taken formal and informal steps to encourage board diversity at Federal Home Loan Banks (FHLBank) since 2015. For example, FHFA required FHLBanks to add board demographic data to their annual reports; clarified how banks can conduct outreach to diverse board candidates; and allowed some banks to add an independent director. Since 2015, the share of women and minority directors on the boards of FHLBanks increased (see figure). The number of women directors increased from 34 in 2015 to 44 in October 2018, and the number of minority directors increased from 20 in 2015 to 30 in 2017, based on most recently available data. 
Trends for minority directors were less clear, because the banks' varying data collection processes did not always allow them to determine the extent to which directors opted out or forgot to complete data collection forms. FHFA stated that it planned to use board data to establish a baseline to analyze diversity trends. A review of the banks' data collection processes would help identify whether practices exist that could help improve the completeness of the data. FHLBanks reported they continued to face some challenges to their efforts to promote board diversity, especially among member director seats. The challenges include (1) balancing the addition of new women or minority directors with retaining the institutional knowledge of existing directors; and (2) competing with other organizations for qualified female and minority board candidates. Despite reported challenges, FHLBanks have taken measures to promote board diversity, such as establishing a task force to promote board diversity through information sharing and training. Individually, the FHLBanks emphasized the importance of diversity in election materials, built pools of diverse candidates, and conducted outreach to industry and trade groups. They also took actions to increase diversity specifically among member directors, including filling interim board seats with women and minority candidates and encouraging directors to personally reach out to potential women and minority candidates. What GAO Recommends GAO recommends that FHFA, in consultation with FHLBanks, review data collection processes for board demographic information and communicate effective practices to banks. FHFA agreed with GAO's recommendation.
gao_GAO-20-356
Background The Marine Corps’ Marine Helicopter Squadron One (HMX-1) currently uses a fleet of 23 helicopters to support the President in the national capital region, the continental United States and overseas. In April 2002, the Navy began developing a replacement helicopter later identified as the VH-71. Schedule delays, performance issues, and a doubling of estimated acquisition costs from $6.5 billion to $13 billion prompted the Navy to terminate the VH-71 program in 2009. Our prior work found that the VH-71 program’s failure to follow acquisition best practices was a critical factor in the program’s poor performance that led to its ultimate termination. In the case of the VH-71, the Navy had a faulty business case, did not perform appropriate systems engineering analysis to gain knowledge at the right times, and failed to make necessary trade-offs between resources and requirements even after years of development. Because of these failures, the program was unable to achieve a stable design and experienced significant cost growth and schedule delays. Although the prior replacement program was terminated, the need for a replacement helicopter remained. As a result, the Navy initiated a follow- on replacement program in 2010. In April 2012, the Secretary of Defense approved the Navy’s plan based on the modification of an in-production helicopter to meet Navy requirements. The VH-92A is expected to provide improved performance, survivability, and communications capabilities, while offering increased passenger capacity when compared to legacy helicopters. In May 2014, the Navy competitively awarded a contract to Sikorsky to develop the VH-92A, which included options for production. The $2.7 billion contract includes a fixed-price incentive (firm target) Engineering and Manufacturing Development (EMD) phase and a firm-fixed price production phase with options for three lots for 17 helicopters, spares and support equipment. 
Under the EMD phase, Sikorsky has delivered two development test helicopters which were used in an operational assessment that was completed in April 2019. Additionally, Sikorsky has delivered three of four System Demonstration Test Article (SDTA) production representative helicopters that are being used in developmental testing and that will also be used to evaluate the VH-92A’s operational effectiveness and suitability during the program’s Initial Operational Test and Evaluation (IOT&E). The fourth SDTA helicopter is to be delivered in May 2020 and will also be used to conduct IOT&E. In June 2019, the Assistant Secretary of the Navy, Research, Development and Acquisition (RD&A) approved the program to begin low-rate initial production of the helicopters and authorized the program to exercise the contract options for the first two low-rate production lots. Shortly thereafter, the Navy exercised the Lot I option for 6 helicopters, initial spares, and support equipment for $542 million. Those helicopters, initial spares and support equipment are all to be delivered in calendar year 2021. In February 2020, the Navy exercised the Lot II option for $471 million for 6 additional helicopters and associated support equipment. All of these helicopters and support equipment will be delivered in calendar year 2022. The Navy had planned for two years of low-rate initial production of 6 helicopters each year followed by one year of full-rate production for the remaining 5 helicopters. The Navy’s acquisition strategy in support of the production decision included a change in that plan with the re-designation of full-rate production as a third lot of low-rate production. A key reason for the change is that the planned full-rate production run of 5 helicopters was too small to achieve the potential cost benefits of full-rate production, which typically involves purchasing a sufficient number of helicopters to decrease unit cost. 
This revised strategy would also enable the Navy to award the third production lot seven months earlier than the originally planned May 2021. Before obligating the funding available for the second lot, the program office had to brief the Assistant Secretary of the Navy (RD&A) on various elements of the VH-92A’s performance. The program office is required to obtain approval from the Assistant Secretary of the Navy (RD&A) for the procurement of the last lot (Lot III) with a decision brief that includes, among other things, the status of IOT&E. Building a VH-92A helicopter involves work at three facilities. To begin the production process, Sikorsky takes an S-92A helicopter from its commercial production line in Coatesville, Pennsylvania and flies it to a dedicated VH-92A modification facility in Stratford, Connecticut. Once there, Sikorsky removes some components, such as circuit breaker panels, engines, and main and tail rotor blades and replaces them with VH-92A components. Additionally, Sikorsky modifies the helicopter to accommodate VH-92A specific subsystems, including racks and wiring for a Navy-developed mission communications system (MCS). Sikorsky then flies the helicopter to a dedicated facility in Owego, New York where it integrates the MCS, installs the executive cabin interior, paints the helicopter, and conducts final testing before delivering the helicopter to the government. See figure 1 for a depiction of modifications of the commercial S-92A helicopter to the VH-92A presidential helicopter. Prior GAO Work on VH-92A Acquisition We have reported annually on the Navy’s effort to replace the current fleet of presidential helicopters since 2011. Our reports highlighted, in part, the extent to which the Navy used the lessons learned from the failed VH-71 program—the need to balance requirements, costs, and schedule and the importance of establishing a knowledge-based program that is aligned with acquisition best practices—in its new effort. 
For example, our 2011 report found that while the replacement program was early in its development cycle, the Navy’s initial efforts appeared to reflect the intent to pursue a knowledge-based acquisition aligned with best practices. Following the program’s entry into the EMD phase of acquisition in April 2014, we found that the Navy’s reliance on mature technologies, selection of an existing helicopter for use in the program, and award of a fixed-price incentive type contract reduced risk. As to be expected with a major system development effort, however, we found the program still faced a number of technical challenges. In four reports issued from 2016 to 2019, we found that the Navy continued making progress in developing the VH-92A helicopter while managing design, integration, and technical challenges. Some key technical risks and challenges that we previously identified are summarized in table 1. We discuss the current status of the Navy’s efforts to address these challenges later in the report. Estimated Program Costs Have Decreased by 10 Percent In April 2019, the Navy estimated that the VH-92A would cost about $4.9 billion to develop and produce and about $15.6 billion to operate and support the helicopters through fiscal year 2062. Overall, the Navy’s $20.5 billion estimate reflects a 10-percent reduction from the program’s 2014 baseline estimate (see table 2). The Navy and contractor officials worked together to remain within the program’s April 2014 cost baseline, in part, by keeping requirements stable, limiting the design changes, and taking advantage of cost saving initiatives. For example, the Navy has not added any key performance requirements to the fixed-price incentive contract since it was awarded in 2014. The Navy has, however, implemented a small number of design changes to add an additional cockpit display and increase the height of the upper portion of the forward aircraft door. 
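The cost figures above can be checked with simple arithmetic. The sketch below sums the two components of the April 2019 estimate and backs out the approximate 2014 baseline implied by a 10-percent reduction; the baseline figure itself appears in the report's table 2, which is not reproduced in this excerpt, so the value here is an inference, not a quote.

```python
# Checking the April 2019 VH-92A cost figures cited above (billions of dollars).
develop_and_produce = 4.9    # estimated development and procurement cost
operate_and_support = 15.6   # estimated operations and support through FY 2062

total_2019 = develop_and_produce + operate_and_support
print(f"April 2019 estimate: ${total_2019:.1f} billion")  # $20.5 billion

# A 10-percent reduction from the 2014 baseline implies a baseline of
# roughly total_2019 / 0.90, i.e. about $22.8 billion (inferred, not
# stated in this excerpt).
implied_baseline = total_2019 / 0.90
print(f"Implied 2014 baseline: ~${implied_baseline:.1f} billion")
```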
Previously, we found that cost saving initiatives included leveraging the Federal Aviation Administration’s airworthiness certification process, optimizing work processes, and reducing the movement of helicopters between contractor sites. In addition, the Navy attributes the reduction in cost to support the VH-92A fleet to using a planned maintenance interval concept as the basis for its April 2019 cost estimate. Program officials explained that the April 2014 baseline estimate was based on the approach used to maintain the current fleet of VH-3D and VH-60N presidential helicopters. For these helicopters, the contractor carries out depot-level maintenance by disassembling, inspecting, and reassembling them at its maintenance depot. However, for the VH-92A, the Navy intends to perform depot-level maintenance itself through scheduled inspections at its own presidential helicopter support facility, which was designed to support this approach. As a result, the Navy expects to be able to support the VH-92A fleet in a more cost-effective manner while ensuring the availability of the helicopter to perform its mission. Upcoming Initial Operational Test and Evaluation Will Demonstrate Extent to Which Technical Issues Have Been Addressed as Program Approaches End of Development The program has made progress addressing technical risks and performance challenges we discussed in prior reports and deficiencies confirmed during the April 2019 operational assessment. According to program officials, solutions for these performance shortfalls, except for the landing zone suitability issue, have been developed and successfully tested during integrated testing and will be evaluated during the 3-month IOT&E test scheduled to begin in June 2020. The program is pursuing options to achieve landing zone suitability that include possible changes in operational procedures, helicopter design, and lawn surface treatments. 
If design modifications are required, they will not be implemented until after IOT&E. As a result, the Navy may not be able to fully demonstrate that the VH-92A helicopter meets all its key requirements until after the test program is complete. Further, IOT&E results may also identify additional issues that may require additional design or software changes. Depending on the severity of the issues, the Navy may need additional time to test and incorporate changes into the helicopter, including those helicopters currently in production. VH-92A Program Is Addressing Performance Shortfalls Previously Identified in Testing The program office has mitigated or reduced risk on some technical issues we discussed in prior reports. For example, according to program documents, the program has mitigated the risk in the following areas: helicopter start procedures, electromagnetic environment effects/electromagnetic pulse, and cybersecurity. The Navy assessed these capabilities during earlier developmental test and during the operational assessment, which concluded in April 2019; subsequently, the Navy approved the program to enter into production. However, the operational assessment confirmed other known performance shortfalls—specifically those associated with the MCS—that, if not corrected, could prevent the program from meeting certain operational requirements. The MCS replaces the communications suite currently used by the in-service fleet and provides VH-92A passengers, pilots, and crew with simultaneous short- and long-range secure and non-secure voice and data communications capabilities. As such, its performance is critical for the VH-92A to meet its mission. To conduct its operational assessment, the Navy used two development test helicopters and a developmental version of MCS software with known performance and capability limitations. The operational assessment confirmed these MCS-related performance limitations, including dropped communication connections. 
Navy officials noted that these and other MCS-related performance shortfalls could, if not addressed, reduce the helicopter’s availability to perform its transport mission and lower overall reliability, among other operational requirements. Overall, the operational assessment confirmed 24 MCS-related performance limitations. According to program officials, they have incorporated or identified fixes to 22 of the 24 issues, which they are now testing on SDTA helicopters. In turn, these fixes are expected to be incorporated into MCS software that will be tested during IOT&E. According to program officials, the remaining two MCS issues are related to bandwidth and an unreliable off-aircraft network configuration affecting on-aircraft system performance. According to those officials, the VH-92A is already equipped with a wide-band line-of-sight system that provides high bandwidth, though with coverage limitations. The program is conducting market research on how to provide the helicopter with increased bandwidth with increased coverage. The remaining two issues were assessed earlier as having a serious (but not critical) impact to mission accomplishment. In addition to the MCS deficiencies, the helicopter experienced problems with other components during the April 2019 operational assessment. For example, the mission and maintenance data computer repeatedly sent out false warning alarms/notifications, which affected the reliability and required the aircrew to spend extra time troubleshooting or switch to a backup helicopter. A software update to help address this issue is planned for the computer prior to IOT&E. The program is also still working to demonstrate the ability of the helicopter to meet a key system capability requirement to land the helicopter without damaging landing zones (including the White House South Lawn). For landing zone suitability, the program’s objective has been to assess the downwash and exhaust effects on the landing zone. 
In a September 2018 training event, the Navy found that VH-92A’s exhaust damaged a landing zone. Program officials stated that the training event did not represent a typical operational scenario since the lawn was exposed to the helicopter’s exhaust for a longer period than it would be under normal operating conditions. The program is studying solutions to minimize risk of landing zone damage including possible changes in operational procedures, helicopter design, and lawn surface treatments. For example, the contractor developed a prototype design change to the helicopter’s auxiliary power unit to deflect exhaust. Flight testing of the prototype design change was conducted in March 2020 with analysis of the results expected in April 2020. Navy officials stated the contractor is also conducting testing to determine if changes in helicopter and/or engine operating procedures can mitigate the risk of landing zone damage. According to both program officials and contractor representatives, a decision on potential solutions will be made prior to IOT&E. If design modifications are required, they will not be implemented until after IOT&E. Program Schedule Has Slipped Further but Remains within the Original Approved Schedule Thresholds Initial operational testing of the VH-92A, which will be used to evaluate operational effectiveness and suitability of the helicopter, training system, support equipment, upgraded MCS software and other changes implemented to address previously identified issues, is now scheduled to be conducted between June and September 2020. As such, IOT&E will be conducted about 3 months later than we reported in 2019, but is expected to be completed by the threshold (latest acceptable) date in the Navy’s April 2014 baseline. Program officials attributed the 3-month delay to the need to develop MCS hardware and software changes that are currently being tested. 
Should IOT&E demonstrate that efforts to address the MCS performance issues or other previously identified issues are insufficient—or if the testing identifies new issues that result in the program being unable to meet its operational requirements—then the program may need to identify, test, and incorporate changes into the VH-92A’s design and into the helicopters already in production, further delaying the program and increasing associated costs. As previously noted, the first delivery of the helicopters ordered under the first production option is scheduled to begin in April 2021. As a result of the revised IOT&E test schedule, the program office has also delayed the initial operational capability (IOC) milestone, which clears the helicopter to enter service, by 3 months to January 2021. This new date represents a total delay of 6 months from the original date but still remains within the IOC threshold date established in April 2014. Figure 2 compares the program’s 2019 schedule with the 2014 baseline schedule and the 2018 schedule we reported on last year. Program officials acknowledged that if there is a delay in the program that results in the program breaching a schedule threshold established in its acquisition baseline, they would need to submit a program deviation report to the Assistant Secretary of the Navy (RD&A). In turn, the program may need to keep certain staff in place longer than originally planned, potentially increasing program costs. However, program officials told us that the program can cover any additional costs with existing funding. Further, Navy officials stated that should IOC be delayed, the Navy will continue to use its existing fleet of presidential helicopters as the VH-92A transitions into the HMX-1 fleet. Navy officials indicated that the transition process will be gradual, and that the existing fleet is sufficiently funded until HMX-1 completes the transition. Agency Comments We are not making any recommendations in this report. 
We provided DOD with a draft of this report. DOD provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense and the Secretary of the Navy. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or DiNapoliT@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix I. Appendix I: GAO Contact and Staff Acknowledgments GAO Contacts Staff Acknowledgments In addition to the contact above, Bruce H. Thomas, Assistant Director; Marvin E. Bonner; Bonita J.P. Oden; Alexander Webb; Peter Anderson; Robin Wilson; and Marie Ahearn made key contributions to this report. Related GAO Products Presidential Helicopter: Program Continues to Make Development Progress While Addressing Challenges. GAO-19-329. Washington, D.C.: April 11, 2019.* Presidential Helicopter: VH-92A Program Is Stable and Making Progress While Facing Challenges. GAO-18-359. Washington, D.C.: April 30, 2018.* Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-17-333SP. Washington, D.C.: March 30, 2017.* Presidential Helicopter: Program Progressing Largely as Planned. GAO-16-395. Washington, D.C.: April 14, 2016.* Presidential Helicopter Acquisition: Program Established Knowledge-Based Business Case and Entered System Development with Plans for Managing Challenges. GAO-15-392R. Washington, D.C.: April 14, 2015.* Presidential Helicopter Acquisition: Update on Program’s Progress toward Development Start. GAO-14-358R. Washington, D.C.: April 10, 2014. Department of Defense’s Waiver of Competitive Prototyping Requirement for the VXX Presidential Helicopter Replacement Program. GAO-13-826R. 
Washington, D.C.: September 6, 2013. Presidential Helicopter Acquisition: Program Makes Progress in Balancing Requirements, Costs, and Schedule. GAO-13-257. Washington, D.C.: April 9, 2013. Presidential Helicopter Acquisition: Effort Delayed as DOD Adopts New Approach to Balance Requirements, Costs, and Schedule. GAO-12-381R. Washington, D.C.: February 27, 2012. Defense Acquisitions: Application of Lessons Learned and Best Practices in the Presidential Helicopter Program. GAO-11-380R. Washington, D.C.: March 25, 2011. *GAO issued these reports on the VH-92A program in response to a provision in the National Defense Authorization Act of 2014.
Why GAO Did This Study The mission of the presidential helicopter fleet is to provide safe, reliable, and timely transportation in support of the President. The Navy plans to acquire a fleet of 23 VH-92A helicopters to replace the current Marine Corps fleet, which has been in use for more than 40 years. Delivery of production VH-92A helicopters is scheduled to begin in April 2021 and be completed in January 2023. The National Defense Authorization Act of 2014 included a provision for GAO to report annually on the acquisition of the VH-92A helicopter. This report, GAO's sixth related to the provision, examines (1) the extent to which the program is meeting cost goals and (2) performance and schedule challenges that the program has experienced. To conduct this work, GAO compared the Navy's April 2019 cost estimates for acquiring and maintaining the new helicopters and October 2019 program schedule information to its April 2014 acquisition baseline. GAO reviewed development test results and status reports from the program. GAO also interviewed officials from the program office, Navy test organizations, and the contractor. GAO is not making any recommendations in this report. What GAO Found The Navy estimates the cost to develop, procure, and maintain the VH-92A® over its 40-year operational life to be just over $20.5 billion, or about 10 percent less than the Navy's 2014 baseline estimate (see table). Navy and contractor officials worked to remain within the program's April 2014 cost baseline estimate, in part, by keeping program requirements stable, limiting design changes, and taking advantage of cost saving initiatives. The Navy also plans to use Navy personnel and facilities to perform depot-level maintenance for the VH-92A fleet, rather than sending the helicopters back to the contractor as is currently done.
The program has made progress addressing technical risks and performance challenges GAO discussed in prior reports; however, an April 2019 operational assessment confirmed several other risks that could affect the helicopter's ability to meet its reliability and availability requirements. For example, Navy officials stated that the assessment confirmed known limitations with the mission communications system. Upgraded software intended to address those limitations is to be evaluated during the initial operational test and evaluation scheduled to be conducted between June and September 2020. The results of that testing could impact the Navy's planned January 2021 decision to begin using the helicopters as part of the presidential helicopter fleet.
Background Indian Tribes and Tribal Land Types As of May 2019, the federal government recognized 573 Indian tribes as distinct, independent political communities with certain powers of sovereignty and self-government, including power over their territory and members. The tribes can vary greatly in terms of their culture, language, population size, land base, location, and economic status. As of the 2010 U.S. Census, about 21 percent, or 1.1 million, of all American Indians lived on tribal lands. Tribal lands include many land types (see table 1). According to BIA, the federal government holds about 46 million acres in trust for tribes (tribal trust land) and more than 10 million acres in trust for individual Indians (individual trust land). Some tribes also have reservations. According to BIA, there are approximately 326 Indian land areas in the United States administered as federal Indian reservations (including reservations, pueblos, rancherias, missions, villages, and communities). The land within the reservation may include a mixture of tribal trust land, individual trust land, restricted fee land, allotments, and land without trust or restricted status (that is, fee-simple land), which may be owned by tribes, individual Indians, or non-Indians. Agricultural Activity on Tribal Lands Agricultural producers (farmers, ranchers, or producers or harvesters of aquatic products) on tribal lands can be individual tribal members, the tribe itself, or non-Indians who lease the land from the tribe or Indian owner. According to USDA's 2012 Census of Agriculture, about 75 percent of farms and ranches on 76 selected Indian reservations were operated by agricultural producers that identified as American Indian or Alaska Native (see table 2). On these reservations, Indian producers held 61 percent of total farm and ranch acreage.
However, the total market value of agricultural products sold from Indian-operated farms and ranches was just over a tenth of that of non-Indian operated farms and ranches on the 76 selected reservations. In 2011, USDA, which operates several agricultural programs targeted to traditionally underserved populations, settled a class action lawsuit brought by Native American farmers and ranchers for $760 million (Keepseagle v. Vilsack). The lawsuit alleged that USDA discriminated against Native Americans in its farm loan and farm loan servicing programs. In 2018, $266 million of the remaining settlement proceeds were used to establish the Native American Agriculture Fund. The Fund will begin awarding grants in 2019 to fund the provision of business assistance, agricultural education, technical support, and advocacy services to Native American farmers and ranchers. Agricultural Credit and the Farm Credit System Like other businesses, agricultural producers generally require financing to acquire, maintain, or expand their farms, ranches, or agribusinesses. Types of agricultural loans as categorized by their purpose or maturity may vary by lender but generally include the following: Short-term loans. These loans are used for operating expenses and match the length and anticipated production value of the operating or production cycle. They are typically secured by the product (crops or livestock). Intermediate-term loans. These loans are typically used to finance depreciable assets such as equipment, which serves as the loan collateral. The loan terms usually range from 18 months to 10 years. Long-term loans. These loans are used to acquire, construct, and develop land and buildings with terms longer than 10 years. They are secured by real estate and may be called real estate loans. Several types of lenders provide credit to U.S. agricultural producers. 
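The three maturity classes above differ mainly in how long the debt amortizes, which drives the size of the annual payment. As a rough illustration (the loan amounts, the 6 percent rate, and the `annual_payment` helper below are hypothetical, not figures from this report), the standard level-payment annuity formula shows why a long-term real estate loan carries a much smaller annual payment per dollar borrowed than an intermediate-term equipment loan:

```python
def annual_payment(principal, annual_rate, years):
    """Level annual payment for a fully amortizing loan, using the
    standard annuity formula (illustrative only; actual farm-loan
    terms vary by lender and product)."""
    if annual_rate == 0:
        return principal / years
    return principal * annual_rate / (1 - (1 + annual_rate) ** -years)

# Hypothetical loans, both at a 6 percent annual rate:
equipment_pmt = annual_payment(100_000, 0.06, 7)     # intermediate-term
real_estate_pmt = annual_payment(500_000, 0.06, 30)  # long-term

# Per dollar borrowed, the 30-year real estate loan costs far less
# each year than the 7-year equipment loan:
per_dollar_short = equipment_pmt / 100_000
per_dollar_long = real_estate_pmt / 500_000
```

This is one way to see what is at stake when real estate cannot be pledged: the borrower loses access to the longest, cheapest-per-year amortization schedules.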
According to USDA’s Economic Research Service, in 2017, FCS and commercial banks provided most agricultural credit in the United States, with respective market shares of 40 and 41 percent. USDA’s Farm Service Agency—a lender that focuses on assistance to beginning and underserved farmers and ranchers and also guarantees the repayment of loans made by other lenders—provided 3 percent, and the remainder was provided by individuals, life insurance companies, and other lenders. FCS is a government-sponsored enterprise, established in 1916 to provide sound, adequate, and constructive credit to American farmers and ranchers. FCS is regulated by FCA, an independent federal agency. FCS’s statutory mission includes being responsive to the needs of all types of creditworthy agricultural producers, and in particular, young, beginning, and small farmers and ranchers. According to FCA, FCS is not statutorily mandated to focus on providing financial opportunities to any other group. FCS lends money to eligible agricultural producers primarily through its 69 lending associations (FCS associations), which are funded by its four banks (FCS banks). All are cooperatives, meaning that FCS borrowers have ownership and control over the organizations. As of 2017, FCS had approximately $259 billion in loans outstanding, of which 46 percent were long-term real estate-based loans; 20 percent were short- and intermediate-term loans (such as for farm equipment or advance purchases of production inputs); and 16 percent were for agribusiness activities, such as agricultural processing and marketing. FCS associations are not evaluated under the Community Reinvestment Act, which requires certain federal banking regulators to assess whether financial institutions they supervise are meeting the credit needs of the local communities. FCS receives certain tax exemptions at the federal, state, and local level. 
Limited Data Are Available on Agricultural Credit Needs of Indian Tribes and Their Members Data on Agricultural Credit Needs for Tribes and Their Members Are Limited Little data exists on the credit needs of tribes and their members. One measure of unmet credit needs is the difference between the amount applied for and the amount received. However, we could not determine the amount of agricultural credit that Indian tribes and their members applied for or received. These data were limited in part because federal regulations historically have prohibited lenders from asking about the race of applicants for nonresidential loans, including agricultural loans. Additionally, even if data were available, the unmet need could be greater than that indicated by information on those who may have applied for and did not receive credit. Four tribal stakeholders and experts told us that tribal members may choose not to apply for agricultural credit because they were directly discouraged by loan officers, had problems completing paperwork, or had heard of other tribal members being denied loans. Two tribal agricultural experts told us that on some level, the agricultural credit needs of Indian tribes and their members are the same as other agricultural producers’ credit needs. In particular, tribal stakeholders and experts told us that the tribal members need short-term loans for operating expenses and intermediate-term loans for equipment. One difference between the agricultural credit needs of tribal members and other producers is that tribal members may have a greater unmet need for long-term loans, which are typically secured by real estate, because of difficulties in using tribal lands as collateral, as discussed later in this report. Credit needs vary based on the type of operation or borrower. Type of operation. 
Some tribal stakeholders we interviewed told us that members of their tribes were more likely to participate in ranching than farming, partly because farming has higher start-up costs. For example, one tribal agricultural expert told us a rancher can start with a few head of cattle and grow the herd over time, but a beginning farmer may need to purchase equipment. Additionally, several tribal stakeholders told us that land on their reservations was more suitable for ranching than farming. Type of borrower. Some tribes have agricultural businesses, which have credit needs different from those of individual tribal members, according to experts and BIA officials we interviewed. For example, their needs may be greater or more complex. According to an expert and a tribal stakeholder, established agricultural businesses likely would be able to receive credit from commercial lenders because they have more resources to pledge as collateral or stronger credit histories. Additionally, if a tribe has other profitable businesses, it likely will have less difficulty obtaining credit or financing agriculture with those other resources than a tribe without such resources. According to tribal stakeholders, experts, and BIA officials we interviewed, tribal members who obtain agricultural credit likely receive it from USDA's Farm Service Agency, other USDA programs, or Native CDFIs. Some tribal members receive agricultural credit from local private lenders, but these borrowers are typically larger and more established. One expert told us that tribal members who are smaller or beginning agricultural producers and cannot access commercial banks instead may borrow money from family members. A 2017 report found that Native business owners were less likely than other business owners to obtain start-up capital from banks. Some experts we interviewed cited Native CDFIs as growing providers of agricultural credit to tribal members.
A 2014 survey of 41 Native CDFIs—credit unions, community banks, and loan funds—found more than 40 percent provided credit and training to farmers and ranchers. In total, these CDFIs made almost $6 million in agricultural loans annually. However, Native CDFIs are limited in how much agricultural credit they can provide. In the 2014 survey, 56 percent of the Native CDFIs that made agricultural loans reported not having enough capital for such loans, with a total unmet need of at least $3 million in the previous year. One Native CDFI we interviewed said its agricultural loans averaged about $100,000 per borrower, and another said its operating loans were about $50,000–$75,000 and its intermediate-term loans about $100,000. Stakeholders See Potential for Growth of Agricultural Activity on Tribal Lands That Could Require Access to Credit Selected literature we reviewed and interviews with some tribal stakeholders found that tribes have a growing interest in agriculture, motivated by concerns over tribal members' access to food, health, and employment opportunities. Food access. A 2014 USDA study found that about 26 percent of individuals in tribal areas lived within 1 mile of a supermarket, compared to about 59 percent of all Americans. Health. According to the Centers for Disease Control and Prevention, American Indians and Alaska Natives have higher rates of obesity and diabetes than white Americans. Employment. A 2014 Interior report found that, on average, only about 50 percent of Native American adults in tribal statistical areas were employed either full or part-time. Two commissioned reports on tribal agriculture say that Indian tribes' vast land base represents an untapped opportunity for tribes to increase agricultural production, including growing their own healthful foods and pursuing economic development.
But, as previously discussed, for reservations featured in USDA’s 2012 Census of Agriculture, non-Indian producers received a large share of the agricultural revenue. Additionally, the agricultural products grown on tribal lands typically do not feed tribal members and instead are sold into the general agriculture commodity system. Furthermore, these reports and experts we interviewed noted that the growth of agriculture on tribal lands could require access to credit. For example, one tribal agriculture expert told us some tribes are interested in transitioning to “value-added” agriculture, which aims to help the community that produces raw agricultural materials capture the value of the products as they progress through the food supply chain (for example, by processing crops they grow or transitioning to more profitable products, such as organic). Value-added agriculture initiatives might require building facilities or acquiring more expensive inputs, and tribes likely would need financing to support these initiatives. According to some experts and a study we reviewed, if tribes and their members cannot access affordable credit, it could limit the growth of these initiatives. Stakeholders Reported That Tribes and Their Members Face Multiple Barriers to Obtaining Agricultural Credit on Tribal Lands Tribes and their members face several barriers to obtaining agricultural credit, including land tenure issues, administrative challenges, lenders’ legal concerns, and loan readiness issues. As a result, there is limited commercial lending on tribal lands. Land Tenure Issues May Present Hurdles to Obtaining Agricultural Credit Ten tribal stakeholders and experts we interviewed cited difficulties in using tribal lands as collateral as a barrier to obtaining credit because of federal laws or other constraints. Tribal trust and restricted fee lands. Federal law generally prohibits lenders from obtaining an ownership interest in tribal trust and restricted fee lands. 
As a result, tribes are not able to use their 46 million acres of tribal trust or restricted fee lands as collateral for a loan. However, tribes can lease such lands to other parties, including a tribal business or tribal member who wishes to use the land for agricultural purposes (lessees). These lessees can then pledge their “leasehold interest” in the lands as collateral for a loan, but may face challenges in doing so. For example, in general, leases of tribal trust and restricted fee lands must be approved by BIA and comply with its leasing regulations, which stipulate that agricultural leases generally have a maximum term of 10 years. While BIA generally allows leased tribal trust and restricted fee lands to be subject to a leasehold mortgage, three tribal stakeholders and experts we interviewed said that BIA’s maximum term for agricultural leases often was insufficient for obtaining an agricultural loan. Individual trust and restricted fee lands. Unlike tribal trust and restricted fee lands, the owners of individual trust and restricted fee lands can use these lands as collateral for a loan with permission of the Secretary of the Interior. However, many tracts of individual trust and restricted fee lands are allotments with fractionated ownership. According to nine tribal stakeholders and experts we interviewed, fractionated land is a barrier to agricultural activity and obtaining credit. Fractionated land occurs when an allottee dies without a will and ownership is divided among all the heirs, but the land is not physically divided. Thus, multiple owners (in some cases thousands) can have an ownership interest in the land and may have different ideas about how the land should be used. Interior estimated that out of the 92,000 fractionated tracts (representing more than 10 million acres), more than half generated no income in 2006–2011. 
For agricultural leases and leasehold mortgages on fractionated lands, BIA regulations require consent from owners of a majority interest in such lands. However, according to Interior, some allotments have thousands of co-owners, some of whose whereabouts are unknown, which could make it difficult to obtain their permission for an agricultural lease or a leasehold mortgage. Additionally, as a result of allotment, many Indian reservations contain different land ownership types, creating a "checkerboard" pattern of lands that can make the establishment and financing of large-scale agricultural projects difficult. For example, in addition to tribal and individual trust and restricted fee lands, reservations also may include lands that passed out of trust during the allotment period and were bought by non-Indians. Thus, multiple tracts within a large-scale agricultural project may need to be leased and financed separately because they have different owners and may be subject to different laws. This can also make legal jurisdiction unclear, which is a concern for private lenders financing projects on such lands, as discussed below. Experts and tribal stakeholders we interviewed reported that the barriers to collateralizing various types of tribal lands make it difficult for tribes and tribal members to access different types of agricultural loans. Most long-term loans—typically used for larger projects—generally need to be secured by real estate, making these loans inaccessible to tribes and tribal members who do not have land that can be encumbered. For example, an Indian agricultural producer who operates on trust land and wants to build an agricultural facility for a value-added operation may not be able to obtain a long-term loan unless he or she has other unrestricted land to pledge as collateral.
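The arithmetic behind fractionation helps explain why consent gathering becomes impractical. A minimal sketch (the family sizes below are hypothetical, and actual Indian probate rules are far more complex than equal division among heirs) shows how undivided interests multiply across generations while the tract itself stays whole:

```python
from fractions import Fraction
import math

def divide_among_heirs(interests, heirs_per_owner):
    """Split each owner's undivided interest equally among that
    owner's heirs. The tract is never physically divided; only the
    ownership shares multiply. (Hypothetical model for illustration.)"""
    next_gen = []
    for share in interests:
        next_gen.extend([share / heirs_per_owner] * heirs_per_owner)
    return next_gen

# One original allottee, then three generations of 4 heirs each:
interests = [Fraction(1)]
for _ in range(3):
    interests = divide_among_heirs(interests, 4)

co_owners = len(interests)       # 4**3 = 64 co-owners of one tract
smallest_share = min(interests)  # each holds a 1/64 undivided interest
shares_sum = sum(interests)      # the shares still sum to one whole tract

# BIA regulations generally require consent from owners of a majority
# interest for an agricultural lease or leasehold mortgage; with equal
# 1/64 shares, that means locating and obtaining signatures from a
# majority of the co-owners.
consents_needed = math.floor(co_owners / 2) + 1
```

After only three generations in this toy model, a lease requires tracking down dozens of co-owners; Interior reports that real allotments can have thousands, some of whose whereabouts are unknown.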
In addition, according to the former Executive Director of the Intertribal Agriculture Council, when most agricultural producers face economic distress, they can pledge land as security and receive an extended period of time (20–40 years) to pay off the debt. Tribal members may not have that option, making it difficult to obtain credit in an emergency (such as adverse weather). In addition, according to a tribal agriculture expert and three tribal stakeholders, tribal trust land is not counted as an asset on balance sheets, which may affect an agricultural lender’s assessment of a borrower’s creditworthiness for various types of loans. Administrative Process Delays May Deter Lenders and Borrowers Processes at Interior—particularly at BIA—can increase the amount of time it takes to obtain a loan, which can discourage both lenders and borrowers, according to tribal stakeholders and experts. Most of the tribal stakeholders and experts we interviewed told us that tribal members often encounter delays when seeking necessary documentation from BIA. For example, for loans involving trust or restricted fee lands, BIA needs to provide a title status report to the lender that identifies the type of land ownership and current owners. Two tribal stakeholders told us that BIA takes months to produce a certified title status report. By that time, the growing season could be over. A representative from a Native CDFI serving a tribe in the Great Plains said it can take years to receive these reports. BIA reported that in fiscal year 2017, it certified 95 percent of land titles within 48 hours. However, BIA’s performance on this measure has varied considerably over the last several years, and BIA officials told us that it can take significantly longer to process title status reports for complicated cases. Tribal members also can encounter administrative challenges at other points in the process. 
One Native CDFI representative told us she discovered, when the CDFI attempted to foreclose on a loan, that BIA had not recorded the leasehold mortgage, which almost prevented the CDFI from recovering the loan collateral. In other cases, Interior's Appraisal and Valuation Services Office might need to conduct an appraisal, such as for an agricultural lease. According to Interior policy, these appraisals should be completed within 60 days, but one tribal economic development expert said they routinely take much longer. Lenders Reported Having Legal Concerns about Recovering Collateral Involving Tribal Lands As a result of the unique legal status of tribes, some lenders, including FCS associations, reported concerns about their ability to recover loan collateral if the borrower defaulted on a loan involving tribal lands. Seven of the 11 FCS associations we contacted told us that they had legal concerns of this nature, and six of the associations said they had experienced the issues themselves. These concerns primarily arise from the following issues: Tribal sovereign immunity. Tribes are distinct, independent political communities with certain inherent powers of self-government and, as a result of this sovereignty, have immunity from lawsuits. A lender cannot sue to enforce the terms of a loan agreement with a tribe unless the tribe waives its sovereign immunity in connection with the agreement. Private lenders therefore might be hesitant to make a loan because they would not be able to sue the tribe if any disputes arose. We previously reported that tribes may waive sovereign immunity in agreements or contracts on a case-by-case basis, and that some tribes have formed separate companies, not immune from lawsuits, to conduct business. However, tribal government officials may decide that waiving the tribe's sovereign immunity for purposes of enforcing the loan agreement is not in the tribe's best interest.
Additionally, tribal sovereign immunity would not bar lenders from seeking to foreclose on loans made to individual tribal members. Legal jurisdiction. Loans made to Indian tribes or their members and secured by tribal lands or collateral located on tribal lands may be subject to tribal laws, rather than state laws. In addition, it is sometimes unclear whether federal, state, or tribal courts would have jurisdiction in the event of a default or foreclosure. If tribal laws govern but do not adequately provide for the lender's foreclosure, or if there is not a legal forum to hear the foreclosure lawsuit, lenders may be unable to recover the loan collateral. To address these types of concerns, some tribes have adopted secured transaction codes modeled after the Uniform Commercial Code, which can help to assure lenders of their ability to recover collateral in the event of default. Unfamiliarity with tribal laws. Laws and court systems vary among the nation's 573 tribes, making it more difficult and costly for lenders to learn tribal laws. For example, one FCS association noted that it has many federally recognized tribes in its region, each of which may have different laws. If lenders have concerns regarding their ability to recover loan collateral in the event of a default, they may not make loans involving tribal lands due to concerns that the loan would not meet safety and soundness requirements. Potential Borrowers May Need Assistance with Loan Readiness Five tribal stakeholders we interviewed said some tribal members may need assistance—such as credit repair and technical assistance for loan applications—to become ready for agricultural loans. Some tribal members have no credit history, which can be a barrier to obtaining a loan. One study found that compared to off-reservation counterparts, reservation residents were more likely to have no credit history, and when credit scores were available, they were lower on average.
Many Native CDFIs provide credit builder or credit repair products to help tribal members qualify for larger loans, such as small business loans. Four tribal stakeholders we interviewed said members of their tribes sometimes need technical assistance to complete the paperwork required for agricultural loans, such as a business plan. One tribal member who owns a ranch told us that the first time he tried to apply for a loan, he had trouble completing the required paperwork and ultimately chose not to apply. He felt tribal members seeking credit would benefit from assistance in completing loan applications. One Native CDFI representative told us that her organization provides technical assistance to its borrowers to help them complete loan paperwork but noted that commercial lenders often did not provide these services. Barriers Have Limited Commercial Lending on Tribal Lands We and others have noted that the barriers described above have depressed commercial lending on tribal lands. In 2010, we found that banks were reluctant to do business on tribal lands because of the cumbersome procedures and their lack of experience. More recently, a report for the Department of Housing and Urban Development surveying lenders found that BIA processing times were a major challenge in making mortgage loans involving tribal lands. A Native CDFI representative told us that lenders have little incentive to engage in a lengthy underwriting process, particularly if the loan is for a small amount and if other potential borrowers have less complicated circumstances. Some experts have described tribal lands as “credit deserts.” For example, one study of three different areas of tribal lands found that few financial institutions or automated teller machines were located on these reservations. One Native CDFI representative told us that in her experience, many people on her reservation never had a bank account. 
She noted that when people do not have a bank account, it can be challenging for them to see themselves as potential borrowers. Similarly, our analysis found that the land tenure issues, administrative process delays, lenders’ legal concerns, and loan readiness issues can make agricultural loans involving tribal lands more time-consuming and costly to underwrite. For example, one FCS association told us that loans involving tribal lands require specialized legal analysis, which may be an additional expense that it would not incur for otherwise comparable loans. These same issues can increase a lender’s exposure to the risks inherent in agricultural lending because they can affect the borrower’s ability to repay the loan, the adequacy of the collateral to secure the loan, and the lender’s ability to recover the collateral in the event of a default. According to FCA, consistent with the purposes of the Farm Credit Act of 1971, the ability of a lender to collect loans is an important element of the institution’s safety and soundness, and the continued availability of credit. Finally, some stakeholders said they believe that discrimination also contributes to the lack of commercial lending on tribal lands. Four experts, a tribal stakeholder, and a BIA representative told us that they believe that some commercial lenders do not want to make loans involving tribal lands because of bias. As previously discussed, the plaintiffs in the Keepseagle case that USDA settled for $760 million alleged that USDA discriminated against Native American farmers and ranchers in certain programs. According to a tribal economic development expert, tribal members who face discrimination or other negative experiences with commercial lenders may share these experiences with other tribal members and deter them from applying for credit. 
FCS Laws Allow for Lending on Tribal Lands, and Some FCS Associations Reported Lending to Tribes or Tribal Members We found that FCS generally has authority to make loans involving tribal lands. Of the 11 FCS associations we contacted with tribal lands in their territories, some reported that they had recently made loans to Indian tribes or their members, and their outreach to these populations included support for agricultural education. FCS Laws Allow for Lending on Tribal Lands Generally, FCS has authority to provide a broad range of credit services to eligible agricultural producers, which may include tribes, tribal businesses, and individual tribal members operating on various types of tribal lands. However, borrowers must meet various eligibility and underwriting criteria that are required by law. For example, applicants for agricultural loans must be determined to be eligible borrowers, which means they must own agricultural land or be engaged in the production of agricultural products, including aquatic products. Also, long-term real estate loans (which have terms of up to 40 years) made by FCS institutions must be secured by a first-position lien on interests in real estate, thus enabling FCS to obtain ownership or control of the land in the event of default. FCA has determined that this statutory requirement can be satisfied, for example, with leasehold interests in real estate—such as that held by a tribal member leasing reservation land from a tribe—provided that the lease grants the borrower significant rights to the land, and the loan is made on a safe and sound basis. As noted earlier, BIA regulations often limit agricultural leases of tribal lands to a term of up to 10 years. In such cases, FCS associations similarly may limit the term of the related loan (to less than 10 years). 
According to FCA, when loans are for shorter terms than the leases, the FCS association’s first lien is preserved, as required by law, and the loan is prudent from a safety and soundness perspective. FCA has not issued written guidance indicating whether interests in other types of tribal lands—such as individual trust or restricted fee lands—also satisfy the requirement for a first-position lien on interests in real estate. However, FCA has the authority to determine what types of interests in real estate will satisfy this requirement. Also, according to FCA, there is no statutory requirement that short- and intermediate-term loans be secured with interests in real estate; such loans instead can be secured by other collateral, such as equipment, crops, livestock, and business revenues. In addition to making direct loans to agricultural producers, FCS has authority to lend to non-FCS institutions, such as commercial banks and credit unions, which in turn make agricultural loans to FCS-eligible borrowers. These other financing institutions are known as OFIs. According to FCA, the OFI lending authority allows FCS banks to fulfill their mission as a government-sponsored enterprise by enhancing the liquidity of OFIs, thereby lowering the cost of agricultural credit. As noted earlier, FCS is required to establish programs to serve young, beginning, and small farmers and ranchers, but it is not statutorily mandated to focus on providing financial opportunities to any other group of eligible agricultural producers. Notwithstanding the authorities described above, FCS must comply with other applicable laws and requirements. For example, FCS institutions are subject to safety and soundness oversight by FCA, including with respect to loan underwriting. FCS institutions also must comply with applicable federal, state, and tribal laws governing any tribal lands or property thereon used as loan collateral. 
FCS associations may obtain Farm Service Agency guarantees on loans to borrowers who otherwise may not meet FCS underwriting requirements. However, by law, loans made by FCS associations are not eligible for a similar BIA loan guarantee program.

Some FCS Associations Reported Lending to Indian Tribes or Their Members, and Selected Associations' Outreach to These Populations Included Education

Lending

Based on information from selected FCS associations located near tribal lands, some FCS associations have lent to Indian tribes or their members in the last 2 years. Of the 11 FCS associations we contacted with tribal lands in their territories, representatives of eight told us they had loaned to tribes or their members in the last 2 years—primarily to individual tribal members. We made the following observations based on the associations' responses:

Limited data on lending amounts. Representatives of 10 of the 11 FCS associations we queried stated that they either do not collect or do not maintain data on lending to specific racial populations, thus making it difficult to provide more detailed information on lending to Indian tribes and their members. However, four representatives provided estimates of their recent lending to this population on tribal lands. One association cited more than $25 million in total loans outstanding to a small number of tribes and tribal entities. Another association reported making about $5.5 million in new loans to tribes or their members on tribal lands in the last 2 years. A third reported a $3 million revolving line of credit to a family farm, and the fourth said it had made approximately $150,000 in five separate loans to two tribal members.

Loan purposes. Seven associations reported on the type of credit they extended to Indian tribes and their members on tribal lands. In general, they made short-term operating loans and short- and intermediate-term loans for the purchase or refinance of items such as machinery and equipment, livestock, vehicles, or buildings and improvements. Two associations also reported making long-term real estate loans. The other association that reported lending to tribes or their members did not report on the types of loans it made.

Type of collateral. Representatives of the eight associations that reported lending to tribes or their members all indicated that the associations secured loans with personal property, such as crops, livestock, or equipment. In addition, the associations that reported making real estate loans said they secured the loans with fee-simple land.

Representatives of three FCS associations said they had not loaned to Indian tribes in the past 2 years. One association had not received any credit applications from tribal members, and another could not say if it had served tribal members because of a lack of racial data on borrowers. The third association had not provided loans to tribal members in the past 2 years, but the representative stated that it provided several letters of credit to guarantee the payments of BIA leases on tribal land.

Although the FCS associations we contacted stated they have the resources to lend to tribes and their members on tribal lands, a few key factors affect their lending decisions. Representatives of all 11 FCS associations stated their associations had adequate financial capacity and resources to make potentially more complicated or time-consuming loans, such as those involving tribal lands. In general, they stated that the factors they consider in deciding whether to loan to Indian tribes or their members on tribal lands are the same as for any comparable loan—for example, creditworthiness, loan purpose, and the ability to secure a lien on collateral.
However, as described earlier, some FCS association representatives described challenges related to tribal law, jurisdiction, tribal sovereign immunity, and recovery of collateral as complicating the lending process to Indian tribes and their members on tribal lands. Although three of the 11 FCS associations we queried reported making loans to tribes that had waived their sovereign immunity for those contracts, most loans the associations reported were to individual tribal members and secured by personal property or fee-simple land. According to two tribal stakeholders we interviewed, Indian tribes or tribal members who received loans from FCS or other commercial lenders may have larger agricultural operations, a longer credit history, and property that can be more easily used as collateral. For example, an established rancher may be able to secure operating loans with his or her cattle herd or interests in fee-simple land, thus preventing the need to rely on trust land as collateral.

Outreach

At the national level, FCS—through its trade association, the Farm Credit Council—conducts and facilitates outreach to tribes and tribal stakeholder groups. According to a representative of the Farm Credit Council, the Council and representatives of associations with tribal lands in their territories participate in an informal FCS working group focused on outreach and lending on tribal lands. One association representative described the group as sharing examples of lending success or reasons for missed opportunities; local, regional or national sponsorship opportunities; local or regional agricultural education events; and relevant legal proceedings, such as the Keepseagle settlement.

At the institution level, FCS associations must prepare annual marketing plans describing, among other things, how they will be responsive to the credit needs of all eligible and creditworthy agricultural producers in their respective territories, with a focus on diversity and inclusion.
The marketing plan must detail strategies and actions to market their products and services to potential borrowers who may not have been considered previously for reasons other than eligibility or creditworthiness. However, FCS associations are not required to achieve specific outcomes or quantifiable results. Our nongeneralizable review of the marketing plans of the 11 selected FCS associations with tribal lands in their territories and our analysis of their written responses to our queries for additional information found that outreach to tribes and their members focused on educational and charitable initiatives and direct marketing about agricultural lending, or did not directly target tribal populations. Seven of the 11 associations discussed actual or planned outreach to Indian tribes or their members in their marketing plans or written responses. Four of those seven associations cited financial support of specific agricultural education activities for tribes and their members. Two associations reported making charitable donations that benefited tribal members. Four of the seven associations reported direct marketing to potential tribal borrowers. However, in one case, the marketing was a one-time conversation with a tribe regarding financing for a new facility. The other three associations reported that they called potential Indian borrowers, sought referrals from existing tribal member customers, or conducted meetings with tribal government officials. In general, the four remaining associations, in their marketing plans and written responses, addressed outreach to minority producers through broader methods, such as participation in ethnic group organizations or through inclusion in the association’s overall outreach and marketing efforts. In addition, five of the 11 associations discussed outreach to minority producers in conjunction with their statutorily-mandated outreach to young, beginning, and small farmers. 
According to FCA officials, FCA's guidance on providing credit to young, beginning, and small farmers, as well as to local food producers, would be broadly applicable to socially disadvantaged or minority populations that fall within the program definitions.

Most of the tribal stakeholders with whom we spoke either were not familiar with FCS or did not know of the tribe or any of its members receiving FCS loans. One Native CDFI representative noted that although he was not familiar with any members of his tribe receiving FCS loans, he thought other nearby tribes or their members had worked with FCS.

FCA also encouraged FCS associations to develop underwriting procedures to facilitate lending on Indian reservations. FCA identified one FCS association that developed such procedures, and another of the associations we queried noted that it had such procedures. The first association provided an overview of its procedures, which identified links to information on borrower and collateral eligibility and actions that require BIA approval, among other topics. According to representatives of the second association, its procedure manual directs loan officers to treat tribal members' applications for loans secured by personal property the same as any other applications. In addition, they said the manual contains instructions for working with BIA for real estate loans to tribal members on trust land and for making direct loans to tribes.

Stakeholders Discussed Lender Partnerships, Loan Guarantees, and Other Options to Improve Agricultural Credit Access on Tribal Lands

Our review of literature and interviews with experts, tribal stakeholders, FCS associations, Farm Credit Council representatives, and FCA officials identified the following options for improving access to agricultural credit on tribal lands.

Partnerships with local lenders.
Tribal economic development experts and tribal stakeholders cited the importance of commercial or government lenders partnering with Native CDFIs and other Indian-owned lenders, which are the most capable of navigating the challenges related to Indian agricultural credit. According to these experts and stakeholders, if larger commercial or government lenders worked with Native CDFIs or other tribal lenders (such as tribal banks or economic development corporations) to provide funds or conduct outreach, the tribal organizations could more efficiently reach Indian tribes and their members. They noted these organizations are familiar with tribal members and the administrative processes for obtaining loans on tribal land. Partnership with tribal lenders and other tribal businesses also could support tribes' efforts to improve members' loan readiness, according to literature we reviewed and a tribal economic development expert and a Native CDFI representative we interviewed. Commercial and government lenders may need to clarify whether tribal lenders with which they might partner meet their lending requirements. For example, although FCS banks have authority to lend to OFIs, which in turn can lend to FCS-eligible borrowers, only certain types of CDFIs may qualify as OFIs. In addition, this authority does not extend to long-term funding, and thus cannot be used to fund agricultural real estate loans made by OFIs. One FCS bank that commented on a 2004 FCA rule noted the latter statutory limitation as a major impediment to OFI program expansion.

Flexibility with collateral requirements. As noted earlier, multiple stakeholders we interviewed discussed the challenges related to collateralizing trust land. In addition, FCA officials cited the need for a statutory change or clarification of the requirement that long-term loans made by FCS be secured by a first lien on interests in real estate.
They said that by removing or clarifying this requirement, lenders would have authority to provide larger, longer-term loans to creditworthy tribes or tribal members who cannot mortgage their tribal lands.

Guarantees. Some stakeholders we interviewed mentioned loan guarantees as an option to improve access to agricultural credit on tribal lands. For instance, FCA officials and Farm Credit Council representatives told us they had spoken with leadership of the Native American Agriculture Fund (created as part of the Keepseagle settlement) regarding the potential establishment of a loan guarantee fund, such as a first-loss fund, which would step in to purchase a loan in default (thus substantially reducing credit risk to the lender). In addition, three of the 11 FCS associations we queried identified guarantees as a possible way to increase FCS lending to Indian tribes and their members on tribal lands.

FCS associations still face challenges in using guarantees. With regard to the first-loss loan guarantee fund, FCS associations still must adhere to the FCS statutory requirement for a first-position lien on interests in real estate for long-term loans. According to an FCA official, although the first-loss loan guarantee fund could mitigate repayment risk, a statutory change or clarification would be necessary for FCS associations to accept guarantees in lieu of real estate for long-term loans. And as noted earlier, FCS loans are statutorily ineligible for BIA's loan guarantee program. Two FCS associations noted that removal of this restriction could increase FCS lending on tribal lands. Finally, FCA officials stated that challenges FCS associations face in making loans involving tribal lands also can extend to Farm Service Agency guarantees on those loans. In other words, to obtain such guarantees, FCS associations must navigate issues around land tenure, legal jurisdiction, and tribal laws.

Tribal options.
In addition, stakeholders discussed the following tribal actions that could increase credit access for tribes and their members:

Representatives of two FCS associations noted that waivers of sovereign immunity (limited to specific contracts) by tribes may increase lending involving tribal lands, as doing so helps enable lenders to enforce the terms of loans made to tribes. According to the Office of the Comptroller of the Currency, some banks have negotiated limited waivers of sovereign immunity (restricted to a specific transaction). As noted earlier, tribes may decide that waiving sovereign immunity is not in their best interest.

In addition to the limited waivers of sovereign immunity, representatives of three FCS institutions stated that increased adoption of uniform commercial laws (such as the Uniform Commercial Code) by tribes could increase lending involving tribal lands.

One tribal economic development expert told us that tribes that adopted their own leasing regulations under the HEARTH Act have seen substantially increased economic development. As noted earlier, the HEARTH Act provides tribes with greater flexibility to enter into leases for agriculture or other purposes. Once a tribe's leasing regulations have been approved by the Secretary of the Interior, tribes may negotiate and enter into agricultural leases with 25-year terms without further approval by the Secretary. The combination of longer lease terms and the ability to conduct business outside of the BIA approval process can expedite the process of obtaining a leasehold mortgage on tribal trust and restricted fee land. As of May 1, 2019, the Secretary had approved agricultural leasing regulations for seven tribes under the HEARTH Act.

Agency Comments

We provided a draft of this report to FCA, Interior, and USDA for review and comment. FCA and USDA provided technical comments, which we incorporated as appropriate.
In comments provided in an email, Interior officials noted that efforts to simplify the Secretary of the Interior's approval process could provide faster mortgage determinations and thus may result in expanded lending and production opportunities for Indian agricultural producers.

We are sending copies of this report to the appropriate congressional committees, the Chairman and Chief Executive Officer of the Farm Credit Administration, the Secretary of the Interior, and the Secretary of Agriculture. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or cackleya@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II.

Appendix I: Objectives, Scope, and Methodology

Our objectives in the report were to describe (1) what is known about the agricultural credit needs of Indian tribes and their members on tribal lands, (2) the barriers stakeholders and experts identified that Indian tribes and their members on tribal lands face in obtaining agricultural credit to meet their needs, (3) the Farm Credit System's (FCS) lending authority and lending and outreach activities on tribal land, and (4) suggestions stakeholders have discussed to improve access to agricultural credit on tribal lands. For the purpose of this report, we use the term "tribal lands" to refer to reservations (including all land within the reservations' boundaries), trust land, allotments, and restricted fee land. In general, our report focuses on the agricultural credit needs of tribes and their members in the lower 48 states.
To describe what is known about the agricultural credit needs of Indian tribes and their members on tribal lands, we explored various potential data sources on agricultural loans that Indian tribes and their members applied for or received. We reviewed available data from the Consumer Financial Protection Bureau and Department of Agriculture (USDA). For example, we obtained borrower-reported loan data from USDA's Agricultural Resource Management Survey, but for several data fields related to Indian producers on tribal lands, sample sizes were too small or the coefficients of variation were too high to produce reliable estimates. We also reviewed provisions of the Equal Credit Opportunity Act, federal regulations, and other legal documentation pertaining to collection of data regarding the personal characteristics of applicants for nonresidential loans.

To describe what is known about Indian tribes' and their members' agricultural credit needs and the barriers they face in obtaining agricultural credit, we conducted a literature review. We conducted searches of various databases, such as EBSCO, ProQuest, Google Scholar, and Westlaw, to identify sources such as peer-reviewed academic studies; law review articles; trade and industry articles; reports from government agencies, nonprofits, and think tanks; and Congressional transcripts related to tribal agriculture, barriers to accessing credit on tribal lands, and FCS. We identified additional materials through citations in literature we reviewed. In addition, we reviewed statutes and the Department of the Interior's Bureau of Indian Affairs' (BIA) regulations related to use and ownership of tribal lands, including leasing.

To describe FCS's authority and lending and outreach activities on tribal lands, we reviewed statutes and regulations governing FCS, as well as written guidance issued by the Farm Credit Administration (FCA).
We also reviewed the marketing plans of a nongeneralizable sample of 11 FCS associations (16 percent of the 69 FCS associations that lend directly to agricultural producers) whose territories included large tribal land areas with high levels of agricultural activity, including the tribes we interviewed (described below). We selected an additional FCS association but on closer review realized it did not have a significant amount of tribal land in its territory; we therefore excluded this association from our analysis. For comparison purposes, we also reviewed three marketing plans from FCS associations that did not have significant tribal populations in their territories. In addition to reviewing the marketing plans, we sent the 11 FCS associations a questionnaire about their lending and outreach to tribes and their members and any challenges in making loans involving tribal lands. We also asked these associations about any suggestions to improve access to agricultural credit on tribal lands. We received responses from all 11 FCS associations, and followed up with some associations to clarify information they provided. While the sample allowed us to learn about many important aspects of FCS associations' lending and outreach to tribes and their members on tribal lands, it was designed to provide anecdotal information, not findings that would be representative of all 69 FCS lending associations.

To address all four objectives, we attempted to interview representatives of six tribes. First, we selected these tribes to represent five regions (Great Plains, Rocky Mountain, Northwest, Southwest) and a state (Oklahoma) that—according to experts we interviewed—have tribes engaged in agricultural activity. Within these regions, we generally selected large tribal land areas that have high levels of agricultural activity, as indicated by the USDA 2012 Census of Agriculture data.
Specifically, we selected tribes based on number of farms, land in farms, and market value of agricultural products. In addition, we selected one of the six tribes because two experts recommended that we speak with them. For the six tribes, we contacted tribal government leaders and employees of the relevant government offices, such as the agriculture or tribal lands departments. For two of the six tribes, we interviewed employees of the tribal agriculture department. One of these interviews also included representatives of the Native Community Development Financial Institution (Native CDFI) that serves the reservation. For the third tribe, we received written responses from a tribal farm. For the fourth tribe, we interviewed a representative of the Native CDFI that serves the reservation. For this series of interviews, we only received information relating to four tribes. We did not obtain meetings with relevant tribal government officials for the last two tribes. We also contacted farms or Native CDFIs associated with an additional three tribes based on USDA data or recommendations from experts we interviewed. For one of these tribes, we interviewed a tribal farm employee and a representative of the tribe’s community development corporation. For the second tribe, we interviewed a tribal farm employee. For the third tribe, we interviewed a representative of the Native CDFI that serves the reservation. In summary, we interviewed employees of two tribal agriculture departments, employees of three tribal farms, and representatives of three Native CDFIs and one tribal community development corporation. 
Throughout this report, we refer to tribal government employees, tribal farm employees, or representatives of Native CDFIs or community development corporations serving a tribe as "tribal stakeholders." Although the information we obtained from the tribal agriculture employees allowed us to provide anecdotal tribal perspectives, it is not generalizable to the 573 federally recognized Indian tribes. In addition, the views of tribal farm employees and Native CDFI and community development corporation representatives cannot be generalized to tribes but illustrate views on needs, barriers, and other issues from the perspectives of the organizations.

In addition, for all four objectives, we interviewed the following:

Experts on agricultural and economic development on tribal lands. We interviewed subject matter experts on tribal agriculture and economic development from various organizations, including advocacy and academia. Specifically, we interviewed representatives of the following organizations: the Center for Indian Country Development at the Federal Reserve Bank of Minneapolis, First Nations Oweesta Corporation, the Indian Land Tenure Foundation, the Indigenous Food and Agriculture Initiative at the University of Arkansas, the Intertribal Agriculture Council, and the Native American Agriculture Fund. We selected these organizations based on relevant publications, testimonies before Congress, or recommendations from other experts. These organizations work with a number of tribes and thus could speak to general trends or commonalities in tribal agriculture and economic development. Throughout the report, we refer to the representatives of these organizations as "experts."

Agency and trade group representatives. We interviewed officials from FCA, USDA (including the Farm Service Agency, Economic Research Service, and National Agricultural Statistics Service), and BIA.
We also interviewed representatives of the Farm Credit Council, the national trade association for the Farm Credit System.

We conducted this performance audit from December 2018 to May 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Karen Tremba (Assistant Director), Lisa Reynolds (Analyst in Charge), Miranda Berry, Tom Cook, Anne-Marie Fennell, John Karikari, Marc Molino, Kirsten Noethen, Barbara Roesmann, Jeanette Soares, and Farrah Stone made significant contributions to this report.
Why GAO Did This Study

About 46 million of the 56 million acres of the land that the federal government holds in trust for the benefit of Indian tribes and their members has an agricultural purpose. However, tribal agriculture and economic development experts have noted that Indian tribes and their members may need improved access to agricultural credit. Congress included a provision in statute for GAO to review the ability of FCS to meet the agricultural credit needs of Indian tribes and their members on tribal lands.

This report describes (1) what is known about the agricultural credit needs of Indian tribes and their members, (2) barriers stakeholders identified to agricultural credit on tribal lands, (3) FCS authority and actions to meet those agricultural credit needs, and (4) stakeholder suggestions for improving Indians' access to agricultural credit on tribal lands.

GAO explored potential data sources on Indians' agricultural credit needs, conducted a literature review, and reviewed statutes and regulations governing tribal lands and FCS. GAO also reviewed the marketing plans and written responses of a nongeneralizable sample of 11 FCS associations whose territories included tribal lands with high levels of agricultural activity. GAO interviewed stakeholders from a sample of seven tribes (generally selected based on tribal region and agricultural activity), experts in tribal agriculture and economic development (selected based on relevant publications, Congressional testimonies, and others' recommendations), and representatives from FCS and its regulator, the Farm Credit Administration, and other relevant government agencies.

What GAO Found

Limited data are available on the needs of Indian tribes and their members for agricultural credit, such as operating or equipment loans, to develop and expand agricultural businesses on tribal lands.
Federal regulations have generally prohibited lenders from inquiring about the personal characteristics, such as race, of applicants on nonresidential loans. Some tribal stakeholders and experts said that tribal members may not have applied for agricultural credit because they heard of other tribal members being denied loans. They said that tribal members likely obtain agricultural credit from Department of Agriculture programs or tribal lenders. Another potential source of agricultural credit is the Farm Credit System (FCS), a government-sponsored enterprise that includes 69 associations that lend to farmers and ranchers.

Tribal stakeholders and experts reported a general lack of commercial credit on tribal lands due to the following factors:

Land use restrictions. Most tribal lands only can be used as loan collateral in certain circumstances or with federal permission.

Administrative process delays. Tribal members reported often encountering delays obtaining necessary federal loan documents.

Legal challenges. Lenders reported concerns about their ability to recover loan collateral due to the unique legal status of tribes.

Loan readiness. Tribal members may have no or poor credit histories and be unfamiliar with the paperwork required for an agricultural loan, such as a business plan.

FCS is authorized to provide a range of credit services to eligible agricultural producers, which may include Indian tribes, tribal businesses, and tribal members. FCS associations must obtain land as collateral for long-term real estate loans, but are not required to do so for shorter-term loans, such as for operating costs or equipment purchases. Some FCS associations GAO contacted reported making loans to Indian tribes or their members. In a sample of 11 FCS associations with tribal lands in their territory, eight said they have loaned to tribes or their members in the past 2 years.
GAO's review of these 11 associations' marketing plans and written responses to GAO follow-up questions found that seven noted outreach—such as support for agricultural education activities—targeted to tribes and their members. The other four reported broad and general outreach efforts that also included minority groups. To improve access to agricultural credit on tribal lands, stakeholders discussed several options. For example, some stakeholders discussed the potential for partnerships between commercial or government lenders and tribal lenders (such as Native Community Development Financial Institutions) and increased use of loan guarantees. Some stakeholders also discussed actions tribes could take to ease barriers to lending, such as adopting their own leasing procedures to reduce administrative processing time with federal agencies for certain loans.
Background

Black Lung Benefits

Black lung benefits include both cash assistance and medical benefits. Maximum cash assistance payments ranged from about $660 to $1,320 per month in 2018, depending on a beneficiary's number of dependents. Miners receiving cash assistance are also eligible for medical benefits that cover the treatment of their black lung-related conditions, which may include hospital and nursing care, rehabilitation services, and drug and equipment charges, according to DOL documentation. DOL estimates that the average annual cost for medical treatment in fiscal year 2018 was approximately $9,667 per miner. There were about 25,600 total beneficiaries (primary and dependents) receiving black lung benefits during fiscal year 2018 (see fig. 1). The number of beneficiaries has decreased over time as a result of declining coal mining employment and an aging beneficiary population, according to DOL officials. Black lung beneficiaries could increase in the near term due to the increased occurrence of black lung disease and its most severe form, progressive massive fibrosis, particularly among Appalachian coal miners, according to National Institute for Occupational Safety and Health (NIOSH) officials.

Benefit Adjudication Process

Black lung claims are processed by the Division of Coal Mine Workers' Compensation in the Office of Workers' Compensation Programs (OWCP) within DOL. Contested claims are adjudicated by DOL's Office of Administrative Law Judges (OALJ), which issues decisions that can be appealed to DOL's Benefits Review Board (BRB). Claimants and mine operators may further appeal these agency decisions to the federal courts. If an award is contested, claimants can receive interim benefits, which are generally paid from the Trust Fund according to DOL officials.
Final awards are either funded by mine operators—who are identified as the responsible employers of claimants—or the Trust Fund, when responsible employers cannot be identified or do not pay. In fiscal year 2018, black lung claims had an approval rate of about 34 percent, according to DOL data.

In 2009, we reported on the benefits adjudication process and made several recommendations for DOL that could improve miners' ability to pursue claims. An April 2015 DOL Inspector General (IG) report followed up on DOL's progress on our recommendations and found continuing problems and raised new concerns about the black lung claims and appeals process. For instance, the IG reported that OALJ needed to address staff shortages, improve communication between its headquarters and district offices, and upgrade the training provided to judges and law clerks. To further expedite claim adjudication, the IG recommended, among other things, that OALJ begin hearing more cases remotely using video or telephone hearings to reduce judges' travel costs and time. In fiscal year 2018, OWCP reported that it took about 335 days on average to issue a decision on a claim. This is an increase from the average of 235 days that OWCP had reported to the DOL IG for fiscal year 2014.

Trust Fund Revenue and Expenditures

Trust Fund revenue is primarily obtained from mine operators through the coal tax. The current coal tax rates, which took effect in 2019, are $0.50 per ton of underground-mined coal and $0.25 per ton of surface-mined coal, up to 2 percent of the sales price. Coal tax revenue is collected from mine operators by Treasury's Internal Revenue Service and then transferred to the Trust Fund where it is then used by DOL to pay black lung benefits and the costs of administering the program. Trust Fund expenditures include, among other things, black lung benefit payments, certain administrative costs incurred by DOL and Treasury to administer the black lung benefits program, and debt repayments.
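The coal tax described above is a per-ton levy capped at 2 percent of the sales price, so the effective tax depends on both tonnage and price. The sketch below illustrates that calculation; the function name and example figures are hypothetical, chosen only to show when the 2-percent cap binds.

```python
def coal_excise_tax(tons, price_per_ton, underground=True):
    """Excise tax owed on a coal sale under the post-2019 rates:
    $0.50/ton (underground) or $0.25/ton (surface),
    capped at 2 percent of the sales price."""
    per_ton_rate = 0.50 if underground else 0.25
    tonnage_tax = tons * per_ton_rate
    cap = 0.02 * (tons * price_per_ton)  # 2 percent of total sales price
    return min(tonnage_tax, cap)

# Underground coal at $40/ton: the $0.50/ton rate is below the 2% cap ($0.80/ton).
print(coal_excise_tax(1_000, 40.00, underground=True))   # 500.0
# Cheap surface coal at $10/ton: the 2% cap ($0.20/ton) binds instead of $0.25/ton.
print(coal_excise_tax(1_000, 10.00, underground=False))  # 200.0
```

As the second case shows, low coal prices reduce the effective per-ton tax, which is one reason Trust Fund revenue tracks both coal production and prices.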
When necessary to make expenditures required under federal law, the Trust Fund borrows from Treasury’s general fund. When this occurs, the federal government is essentially borrowing from itself—and hence from the taxpayer—to fund its benefit payments and other expenditures.

Trust Fund Borrowing Will Likely Continue to Increase through 2050

As we reported in 2018, Trust Fund expenditures have consistently exceeded revenue. The Trust Fund has borrowed from Treasury’s general fund in almost every year since 1979, its first complete fiscal year. We noted in our 2018 report that Trust Fund borrowing would continue to increase through 2050 due, in part, to the planned coal tax rate decrease of about 55 percent that took effect in 2019 and declining coal production. We simulated the effects of the tax rate decrease on Trust Fund finances through 2050, and reported the results of a moderate case set of assumptions related to future coal production and prices and the number of new black lung beneficiaries. These simulations were not predictions of what will happen, but rather models of what could happen given certain assumptions. Our moderate case simulation suggested that Trust Fund revenue may decrease from about $485 million in fiscal year 2018 to about $298 million in fiscal year 2019, due, in part, to the approximate 55 percent decrease in the coal tax rate. Our simulation, which incorporated EIA data on future expected coal production, also showed that annual Trust Fund revenue would likely continue to decrease beyond fiscal year 2019 due, in part, to declining coal production. Domestic coal production declined from about 1.2 billion tons in 2008 to about 775 million tons in 2017, according to EIA. Based on these projections, our moderate simulation showed that annual Trust Fund revenue may continue to decrease from about $298 million in fiscal year 2019 to about $197 million in fiscal year 2050.
Future simulated Trust Fund revenue would likely be insufficient to cover combined black lung benefit payments and administrative costs, according to our moderate case simulation. Specifically, revenue may not be sufficient to cover beneficiary payments and administrative costs from fiscal years 2020 through 2050 (see fig. 2). For instance, in fiscal year 2029, simulated benefit payments and administrative costs would likely exceed simulated revenue by about $99 million. These annual deficits could decrease over time to about $4 million by fiscal year 2050 due, in part, to the assumed continued net decline in total black lung beneficiaries. If Trust Fund spending on benefit payments and administrative costs continues to exceed revenues each year, then the Trust Fund would need to continue borrowing from Treasury’s general fund to cover those costs, as well as borrowing to cover debt repayment. Our moderate simulation suggested that the Trust Fund’s outstanding debt could increase from about $4.2 billion in fiscal year 2019 to about $15.4 billion in fiscal year 2050 (see fig. 3). While our moderate case simulated a $15.4 billion Trust Fund debt in 2050, the amount could vary from about $6 billion to about $27 billion depending, in part, on future coal production and the number of new beneficiaries. Even if the Congress were to completely eliminate black lung benefits as of fiscal year 2019, the Trust Fund’s outstanding debt in fiscal year 2050 could still exceed $6.4 billion, according to our simulation. Eliminating black lung benefits, however, would generally mean that coal tax revenue would be collected solely to fund the repayment of Trust Fund debt. As we reported in 2018, other options such as adjusting the coal tax and forgiving interest or debt, could also reduce future borrowing and improve the Trust Fund’s financial position (see GAO-18-351). 
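The borrowing dynamic described above can be sketched as a simple year-by-year simulation: when revenue falls short of benefit payments, administrative costs, and interest on outstanding debt, the shortfall is borrowed and added to the debt. This is a minimal illustration of the mechanism only; the interest rate and the linear revenue and cost paths below are assumptions for illustration, not GAO's actual simulation model.

```python
def simulate_debt(debt, years, revenue, costs, interest_rate=0.03):
    """Accumulate Trust Fund debt when annual outlays exceed revenue.

    revenue and costs are functions of the year (in $ millions); any
    shortfall, including interest on prior debt, is borrowed and added
    to outstanding debt.
    """
    for year in years:
        shortfall = costs(year) + interest_rate * debt - revenue(year)
        debt += max(shortfall, 0.0)
    return debt

# Hypothetical paths (in $ millions): revenue declines from ~298 in fiscal
# year 2019 toward ~197 in 2050, while benefit payments and administrative
# costs decline more slowly. Interest on past debt drives most of the growth.
revenue = lambda y: 298 - (298 - 197) * (y - 2019) / 31
costs = lambda y: 380 - (380 - 201) * (y - 2019) / 31

debt_2050 = simulate_debt(4200.0, range(2019, 2050), revenue, costs)
```

Even when the annual operating deficit shrinks toward zero, accrued interest on the outstanding balance keeps total debt growing, which is why repayment of past debt and interest dominates the long-run projections.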
Preliminary Observations Raise Concerns About DOL’s Oversight of Coal Mine Operator Insurance

Federal law generally requires that coal operators secure their black lung benefit liability. Operators can purchase commercial insurance for this purpose or may self-insure if they meet certain DOL conditions. For example, self-insurers must obtain collateral in the form of an indemnity bond, deposit or trust, or letter of credit in an amount deemed necessary and sufficient by DOL to secure their liability. DOL officials said that the collateral they required from the five self-insured operators that filed for bankruptcy between 2014 and 2016 was inadequate to cover their benefit liabilities. For example, the collateral DOL required from Alpha Natural Resources was about 6 percent of its estimated benefit liability. As a result, approximately $185 million of estimated benefit liability was transferred to the Trust Fund, according to DOL data. We reviewed DOL documentation related to the five operator bankruptcies. Table 1 shows the bankrupt operators; the amount of collateral each operator had at the time of bankruptcy; estimated benefit liability at the time of bankruptcy; and estimated benefit liability and number of beneficiaries that transferred to the Trust Fund, if applicable. Overall, three of these bankruptcies affected the Trust Fund, and two did not, according to DOL. DOL officials told us that the bankruptcies of Arch Coal and Peabody Energy did not affect the Trust Fund because their benefit liabilities were assumed by the reorganized companies after emerging from bankruptcy. As of June 2019, there are 22 operators that are self-insured and actively mining coal, according to DOL officials.
To ensure that the collateral they required from these operators was adequate to protect the Trust Fund, DOL officials said that they periodically reauthorized them, which entailed, among other things, reviewing their most recent audited financial statements and claims information. DOL officials said that they prepared memos documenting these reviews and communicated with coal operators about whether their financial circumstances warranted increasing or decreasing their collateral. Table 2 provides information on the 22 self-insured operators, including the date of each operator’s most recent DOL reauthorization; the amount of DOL-required collateral; and the operator’s most recent estimated black lung benefit liability. Should any of these operators file for bankruptcy, they could also affect the Trust Fund because the amount of an operator’s benefit liability that is not covered by collateral could become the responsibility of the Trust Fund. Preliminary analysis from our ongoing work indicates that DOL did not regularly monitor self-insured operators. Agency regulations state that DOL may adjust the amount of collateral required from self-insured operators when experience or changed conditions warrant. We reviewed DOL’s most recent reauthorization memos for each of the 22 operators. While some of these operators had been reauthorized more recently, we found that others had not been reauthorized by DOL in decades. One operator in particular had not been reauthorized by DOL since 1988. Additionally, for most of these operators, DOL either did not have estimates of their benefit liabilities, or the estimates were out of date (see table 2). Beginning in summer 2015, DOL officials said that they stopped permitting any new coal mine operators to self-insure as the agency worked with auditors, economists, and actuaries to develop new procedures for self-insurance. At the same time, DOL generally stopped reauthorizing the 22 self-insured operators.
Earlier this year, two of these operators—Westmoreland Coal Company and Cloud Peak Energy—filed for bankruptcy, according to DOL officials. Additionally, due to deteriorating financial conditions, DOL recommended revoking another operator’s self-insurance authority (Murray Energy). However, Murray appealed this decision, and DOL postponed responding to the appeal until its new self-insurance procedures are implemented, according to DOL officials. DOL’s new self-insurance procedures are currently being reviewed by OMB, and DOL officials said they did not know when they would likely be implemented. Until such procedures are implemented, DOL cannot ensure that the collateral it has required from self-insured operators is adequate to protect the Trust Fund should these operators become insolvent.

Chairwoman Adams, Ranking Member Byrne, and Members of the Subcommittee, this concludes my prepared statement. I would be happy to respond to any questions you may have at this time. If you or your staff have any questions concerning this testimony, please contact me at (202) 512-7215. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. In addition to the contact named above, Blake Ainsworth (Assistant Director), Justin Dunleavy (Analyst in Charge), Angeline Bickner, Alex Galuten, Courtney LaFountain, Rosemary Torres Lerma, Kate van Gelder, Catherine Roark, and Almeta Spencer made key contributions to the testimony. Other staff who made key contributions to the reports cited in the testimony are identified in the source products.

Related GAO Products

Black Lung Benefits Program: Options to Improve Trust Fund Finances, GAO-18-351 (Washington, D.C.: May 30, 2018).

Mine Safety: Basis for Proposed Exposure Limit on Respirable Coal Mine Dust and Possible Approaches for Lowering Dust Levels, GAO-14-345 (Washington, D.C.: April 9, 2014).
Black Lung Benefits Program: Administrative and Structural Changes Could Improve Miners’ Ability to Pursue Claims, GAO-10-7 (Washington, D.C.: October 30, 2009).

Federal Compensation Programs: Perspectives on Four Programs for Individuals Injured by Exposure to Harmful Substances, GAO-08-628T (Washington, D.C.: April 1, 2008).

Mine Safety: Additional Guidance and Oversight of Mines’ Emergency Response Plans Would Improve the Safety of Underground Coal Miners, GAO-08-424 (Washington, D.C.: April 8, 2008).

Mine Safety: Better Oversight and Coordination by MSHA and Other Federal Agencies Could Improve Safety for Underground Coal Miners, GAO-07-622 (Washington, D.C.: May 16, 2007).

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study

Since 2009, GAO has produced a body of work on the Black Lung Benefits Program. In 2018, for instance, GAO reported that the Trust Fund, which pays benefits to certain coal miners, faced financial challenges due, in part, to the coal tax rate decrease that took effect in 2019 and declining coal production. Trust Fund finances could be further strained by coal mine operator bankruptcies, as they can lead to benefit liabilities being transferred to the Trust Fund. This testimony describes Trust Fund finances through 2050 and provides preliminary observations from ongoing work for this committee regarding the Department of Labor's (DOL) oversight of coal mine operator insurance. To describe Trust Fund finances, in its 2018 report GAO developed simulations through 2050 based on various assumptions related to future coal production and the number of future black lung beneficiaries. To develop preliminary observations from its ongoing work, GAO analyzed DOL documentation and data on black lung beneficiaries and coal mine operators. GAO also reviewed relevant federal laws, regulations, policies, and guidance and interviewed DOL officials, insurance carriers, and coal mine operators, among others.

What GAO Found

GAO reported in 2018 that Black Lung Disability Trust Fund (Trust Fund) expenditures have consistently exceeded revenue. The Trust Fund has borrowed from the Department of the Treasury's (Treasury) general fund—and hence from the taxpayer—in almost every year since 1979, its first complete fiscal year, causing debt and interest to accumulate. Federal law does not limit the amount the Trust Fund may borrow as needed to cover its expenditures. Trust Fund revenue will be further limited by the coal tax rate decrease of about 55 percent that took effect in 2019, and declining coal production, according to GAO's simulation.
Specifically, Trust Fund revenue may not be sufficient to cover beneficiary payments and administrative costs from fiscal years 2020 through 2050. Therefore, the Trust Fund could need to continue borrowing to cover its expenditures—including the repayment of past debt and interest—and the Trust Fund's simulated outstanding debt could exceed $15 billion by 2050 (see figure). However, as GAO reported in 2018, various options, such as adjusting the coal tax and forgiving debt, could improve the Trust Fund's financial position. GAO's preliminary observations indicate that Trust Fund finances will be further strained by coal operator bankruptcies. Since 2014, an estimated black lung benefit liability of over $310 million has been transferred to the Trust Fund from insolvent self-insured coal mine operators, according to DOL data. Federal law generally requires that operators secure their black lung benefit liability. To do so, operators can self-insure if they meet certain DOL conditions. As of June 2019, there are 22 operators that are self-insured and actively mining coal, according to DOL officials. GAO's preliminary analysis indicates that DOL did not regularly review these operators so that it could adjust collateral as needed to protect the Trust Fund. As a result, the amount of collateral DOL required from some of these operators is tens of millions of dollars less than their most recent estimated black lung benefit liability.

What GAO Recommends

GAO will be considering recommendations, as appropriate, when ongoing work is finished.
Background

ONDCP’s Responsibilities

ONDCP was established by the Anti-Drug Abuse Act of 1988 as a component of the Executive Office of the President, and its Director is to assist the President in the establishment of policies, goals, objectives, and priorities for the National Drug Control Program. ONDCP is responsible for (1) leading the national drug control effort, (2) coordinating and overseeing the implementation of national drug control policy, (3) assessing and certifying the adequacy of National Drug Control Programs and the budget for those programs, and (4) evaluating the effectiveness of national drug control policy efforts. About a dozen National Drug Control Program agencies, as identified by ONDCP, have responsibilities for drug prevention, treatment, and law enforcement activities.

Developing the National Drug Control Strategy

Among other responsibilities, the Director of ONDCP is required to develop and promulgate the National Drug Control Strategy. The National Drug Control Strategy is to set forth a comprehensive plan to reduce illicit drug use and the consequences of such illicit drug use in the United States by limiting the availability of and reducing the demand for illegal drugs. Many of the SUPPORT Act’s requirements for the National Drug Control Strategy are the same as, or similar to, those that applied under the ONDCP Reauthorization Act of 2006. For example, both laws require the National Drug Control Strategy to include a 5-year projection for the National Drug Control Program and budget priorities. However, there are certain differences, and the SUPPORT Act includes a wide range of detailed new requirements that were not included under the ONDCP Reauthorization Act of 2006.
One of these is that the National Drug Control Strategy include a description of how each comprehensive, research-based, long-range quantifiable goal established in the Strategy for reducing illicit drug use and the consequences of illicit drug use in the United States will be achieved. Other examples of new requirements include creating plans to increase data collection and expand treatment of substance use disorders. The SUPPORT Act also requires the Director to release a statement of drug control policy priorities in the calendar year of a presidential inauguration (but not later than April 1). The President is then required to submit to Congress a National Drug Control Strategy not later than the first Monday in February following the year in which the term of the President commences, and every two years thereafter.

Certifying Agency Drug Control Budgets

The Director of ONDCP is also responsible for developing a consolidated National Drug Control Program budget proposal for each fiscal year, which is designed to implement the National Drug Control Strategy and inform Congress and the public about total federal spending on drug control activities. As part of this effort, the Director of ONDCP is required to assess and certify National Drug Control Program agencies’ drug control budgets on an annual basis to determine if they are adequate to meet the goals and objectives of the National Drug Control Strategy. Figure 1 illustrates ONDCP’s budget certification process.

ONDCP Did Not Fully Address Selected Statutory Requirements Related to the National Drug Control Strategy in 2017, 2018, or 2019

For 2017 and 2018, ONDCP Did Not Issue a National Drug Control Strategy

ONDCP did not issue a National Drug Control Strategy for 2017 or 2018.
Pursuant to the ONDCP Reauthorization Act of 2006, the Director of ONDCP was required to promulgate the National Drug Control Strategy annually and the President was to submit the National Drug Control Strategy to Congress by February 1 of each year. According to ONDCP officials, ONDCP did not issue a National Drug Control Strategy for these years because (1) ONDCP did not have a Senate-confirmed Director during those years; and (2) 2017 was the administration’s inaugural year, and previous administrations also did not issue a Strategy during their first years. By statute, in the absence of a Director, the Deputy Director of ONDCP is to perform the functions and duties of the Director temporarily in an acting capacity. ONDCP had officials serving as Acting Director beginning in January 2017. The current Director of ONDCP was appointed Deputy Director beginning in February 2018 and served as Acting Director from February 2018 until April 2018. As of April 2018, the current Director continued in his role as Deputy Director until he was confirmed by the Senate as Director of ONDCP in January 2019. The previous administration also did not issue a National Drug Control Strategy in its inaugural year—2009—but it did issue a National Drug Control Strategy in its second year, as shown in table 1. On January 31, 2019, ONDCP issued its National Drug Control Strategy for 2019, which we discuss in more detail later in the report.

Without a National Drug Control Strategy, ONDCP Could Not Complete the Drug Control Budget Certification Process in Accordance with Statutory Requirements in 2017 and 2018

The ONDCP Reauthorization Act of 2006 required the Director of ONDCP to issue drug control funding guidance to the heads of departments and agencies with responsibilities under the National Drug Control Program by July 1 of each year. ONDCP is to issue funding guidance for agency budget proposals for the fiscal year two years in the future.
For example, ONDCP was to issue funding guidance to agencies in 2017 for development of the 2019 budget, and issue funding guidance in 2018 for development of the 2020 budget. Such funding guidance was required to address funding priorities developed in the National Drug Control Strategy. National Drug Control Program agencies are to submit their budget requests to ONDCP in the summer of each year (before submission to the Office of Management and Budget) and in the fall of each year (at the same time as submission to the Office of Management and Budget). The Director of ONDCP then determines whether National Drug Control Program agencies’ summer budget requests are adequate to meet the goals of the National Drug Control Strategy and certifies whether fall budget submissions include the funding levels and initiatives identified during the summer budget review. Since ONDCP did not issue a Strategy in 2017 or 2018, ONDCP could not develop and issue funding guidance, nor could it review and certify budget requests and submissions of National Drug Control Program agencies, in accordance with the statutory requirement. ONDCP officials stated that—in lieu of a Strategy—they used other sources to formulate the administration’s priorities, which served as the basis for drug control funding guidance in 2017 and 2018. For example, for the development of the fiscal year 2019 drug control budget in calendar year 2017, ONDCP officials stated that they relied upon the following sources for drug policy guidance: initial development of the President’s Initiative to Stop Opioid Abuse and Reduce Drug Supply and Demand; draft recommendations from the President’s Commission on Combating Drug Addiction and the Opioid Crisis; policy statements made by the President as a candidate; and policy priorities identified in the fiscal year 2018 President’s Budget.
Additionally, for the development of the fiscal year 2020 funding guidance in calendar year 2018, ONDCP officials stated that they relied upon the following sources for drug policy priorities: the interim and final Report of the President’s Commission on Combating Drug Addiction and the Opioid Crisis; the President’s Initiative to Stop Opioid Abuse and Reduce Drug Supply and Demand; the draft National Security Council Strategic Framework; and a draft 2018 National Drug Control Strategy that ONDCP officials told us they drafted but did not issue. These sources may have provided ONDCP officials with some information about policy priorities and actions. However, ONDCP officials stated they did not consider these documents to be the National Drug Control Strategy, and none of the sources fulfill the statutory requirements under the ONDCP Reauthorization Act of 2006, which require funding guidance to address priorities from the National Drug Control Strategy. ONDCP officials told us that they provided drug control funding guidance to the heads of departments and agencies with responsibilities under the National Drug Control Program in 2017 and 2018. As described by ONDCP officials, drug control funding guidance identifies key program goals and the programs and activities that require agency funding to achieve the objectives of the National Drug Control Strategy. ONDCP has since issued the 2019 National Drug Control Strategy, which states that it establishes the administration’s drug control priorities. The Strategy also states that the priorities provide federal drug control departments and agencies strategic guidance for developing their own drug control plans and strategies, and that the Strategy is intended to ensure federal drug control budget dollars are allocated in a manner consistent with the administration’s priorities.
ONDCP officials told us that the agency intends to issue the next National Drug Control Strategy in February 2020 in accordance with the SUPPORT Act.

ONDCP Issued a 2019 National Drug Control Strategy that Addresses Some, But Not All, Selected Requirements

The 2019 National Drug Control Strategy and companion documents include information to address some but not all selected requirements under the ONDCP Reauthorization Act of 2006. ONDCP issued multiple documents that together were intended to address the requirements for the National Drug Control Strategy. The first document, the 2019 National Drug Control Strategy, was issued January 31, 2019, with three companion documents issued later in April and May 2019. These companion documents were the 2019 Data Supplement, the 2019 Performance Reporting System, and the 2019 Budget and Performance Summary. In our March 2019 testimony, we reported that the first document—the National Drug Control Strategy, which was the only one of the four documents available at the time of our testimony—did not include certain information required under the ONDCP Reauthorization Act of 2006. These selected requirements included: annual quantifiable and measurable objectives and specific targets; a 5-year projection for program and budget priorities; specific drug trend assessments; and a description of a performance measurement system. Following our March 2019 testimony, we reviewed the three companion documents and found that while they provide some additional information to address these same selected requirements, they do not completely address the requirements. As stated earlier, we based our analysis of the 2019 National Drug Control Strategy and companion documents on the ONDCP Reauthorization Act of 2006, which was the applicable law at the time ONDCP began drafting the Strategy.
Current law is reflected in the SUPPORT Act, which includes some of the same requirements from the ONDCP Reauthorization Act of 2006 and some new or different requirements. In the paragraphs below, we identify which selected requirements from the ONDCP Reauthorization Act of 2006 were retained under the SUPPORT Act, and therefore represent current law, and which selected requirements were not retained. For those selected requirements that were not retained, we identify comparable current requirements in the SUPPORT Act.

Annual quantifiable and measurable objectives and specific targets. Pursuant to the ONDCP Reauthorization Act of 2006, the National Drug Control Strategy was required to include “annual quantifiable and measurable objectives and specific targets to accomplish long-term quantifiable goals that the Director determines may be achieved during each year beginning on the date on which the National Drug Control Strategy is submitted.” The SUPPORT Act retained this requirement. We testified in March 2019 that while the 2019 National Drug Control Strategy lists seven items it designates as measures of performance or effectiveness, the document did not indicate how these would be quantified or measured. The document also did not include targets to be achieved each year. Our subsequent analysis of the three companion documents showed that one additional document provided more information related to this requirement. The 2019 Performance Reporting System includes 9 goals and 17 quantifiable and measurable objectives with specific targets for certain years.
Specifically, the goals and objectives identified in the 2019 Performance Reporting System included educating the public about the dangers of drug use; expanding access to evidence-based treatment; decreasing the over-prescribing of opioid medications; and reducing the availability of illicit drugs in the United States through reduced production, increased seizure trends, and increased prices and reduced drug purity, among other things. The document states that each goal “is accompanied by aggressive, but achievable, objectives with two- and five-year targets from a baseline of 2017.” However, the 2019 Strategy does not meet the statutory requirement because it does not have annual targets that may be achieved each year. Instead, the Performance Reporting System states that 16 of the 17 objectives in the Strategy have 2-year targets to be achieved in 2019, and 14 of the 17 objectives have 5-year targets to be achieved in 2022. The objectives do not include annual targets for the other intervening years—2018, 2020, and 2021—as required. The Performance Reporting System states that while ONDCP assumes a linear progression from the baseline year—2017, in most cases—to the 2022 target, the trajectory may not actually be linear, “but rather it may occur at varying rates over the 5-year period due to multiple factors which influence the ability to achieve each of the stated goals and objectives.” In contrast, other information ONDCP provided to us stated that annual targets can readily be determined from the linear paths between the 2- and 5-year targets. Without identifying annual targets, the 2019 National Drug Control Strategy and companion documents do not meet the statutory requirement. Further, annual targets would better position ONDCP to monitor progress in intervening years and make any needed changes to achieve its goals and objectives.
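ONDCP's stated assumption of a linear progression from the 2017 baseline to the 2022 target implies that annual targets for the intervening years follow directly from simple interpolation. The sketch below illustrates that arithmetic; the baseline and target values used are hypothetical, not ONDCP's actual figures.

```python
def interpolated_target(baseline, target, base_year, target_year, year):
    """Linearly interpolate an annual target between a baseline year and a target year."""
    fraction = (year - base_year) / (target_year - base_year)
    return baseline + (target - baseline) * fraction

# Hypothetical objective: reduce a rate from 20.0 (2017 baseline) to 17.0
# (5-year target in 2022). Implied annual targets for the intervening years:
# 2018: 19.4, 2019: 18.8, 2020: 18.2, 2021: 17.6
annual = {y: interpolated_target(20.0, 17.0, 2017, 2022, y) for y in range(2018, 2022)}
```

Under a linear assumption, publishing the intervening annual targets would be straightforward; the Performance Reporting System's caveat is that actual progress may not follow this linear path.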
The SUPPORT Act continues to require ONDCP to establish annual quantifiable and measurable objectives and specific targets in future Strategy iterations. By taking steps to address this requirement, ONDCP could further demonstrate whether it is making meaningful progress every year toward the targets it sets.

A 5-year projection for program and budget priorities. Pursuant to the ONDCP Reauthorization Act of 2006, the National Drug Control Strategy was required to include “a 5-year projection for program and budget priorities.” The SUPPORT Act retained this requirement. As we testified in March 2019, the 2019 National Drug Control Strategy did not include this information. Our subsequent analysis of the three companion documents showed that one document—the 2019 Performance Reporting System—provided more information about ONDCP’s program priorities but not ONDCP’s budget priorities. Specifically, 14 of the 17 objectives ONDCP included in the 2019 Performance Reporting System contain various 5-year targets, such as to reduce the rates of illicit drug and opioid use among youth by 15 percent. According to ONDCP officials, the objectives and targets in the 2019 Performance Reporting System satisfy the requirement for 5-year program and budget priorities. However, the document does not include how these objectives and targets relate to 5-year budget priorities. The SUPPORT Act continues to require ONDCP to include a 5-year projection of program and budget priorities in future Strategy iterations. By taking steps to address this requirement, ONDCP and National Drug Control Program agencies will be better positioned to plan for the resources needed to achieve the efforts that will have the greatest impact.

Specific drug trend assessments.
Pursuant to the ONDCP Reauthorization Act of 2006, the National Drug Control Strategy was required to include assessments of the reduction of the consequences of illicit drug use and availability and the reduction of illicit drug availability. We testified in March 2019 that the 2019 National Drug Control Strategy did not include these assessments. Our subsequent analysis of the three companion documents showed that the 2019 Data Supplement provided more information to address the required assessments but did not address all of the requirements. For example, the assessment of the reduction of the consequences of illicit drug use and availability was to include, among other things, the annual national health care cost of illicit drug use. However, the most recent national health care cost data in the 2019 Data Supplement are from 2007, and ONDCP did not indicate in the supplement whether more recent data were available. In another example, the assessment of the reduction of illicit drug availability was to be measured by, among other things, the number of illicit drug manufacturing laboratories seized and destroyed and the number of hectares of marijuana, poppy, and coca cultivated and destroyed domestically and in other countries. The 2019 Data Supplement provided data for marijuana and poppy through 2016 and for the quantity of coca eradicated through 2015. The SUPPORT Act no longer requires these specific assessments.
However, the SUPPORT Act does include a new requirement that the National Drug Control Strategy provide “a description of the current prevalence of illicit drug use in the United States, including both the availability of illicit drugs and the prevalence of substance use disorders.” The SUPPORT Act also contains a new requirement—which we describe later in this report—for ONDCP to describe how each comprehensive, research-based, long-range quantifiable goal in the National Drug Control Strategy was determined, including data, research, or other information used to inform the determination. We address ONDCP’s implementation of this new requirement under the SUPPORT Act later in the report.

A description of a performance measurement system. Pursuant to the ONDCP Reauthorization Act of 2006, the National Drug Control Strategy was required to include a “description of a national drug control performance measurement system” that:

develops 2-year and 5-year performance measures and targets;

describes the sources of information and data that will be used;

identifies major programs and activities of the National Drug Control Program agencies that support the goals and annual objectives of the National Drug Control Strategy;

evaluates the contribution of demand reduction and supply reduction activities implemented by each National Drug Control Program agency in support of the Strategy;

monitors consistency between the drug-related goals and objectives of the National Drug Control Program agencies and ensures that each agency’s goals and budgets support and are fully consistent with the National Drug Control Strategy, among others; and

coordinates the development and implementation of national drug control data collection and reporting systems to support policy formulation and performance measurement, including certain assessments.
We testified in March 2019 that the 2019 National Drug Control Strategy did not include a description of a performance measurement system pursuant to the ONDCP Reauthorization Act of 2006. Our subsequent analysis of the three companion documents showed that the 2019 Performance Reporting System provides information about some of the elements the performance measurement system is required to accomplish. For example, the 2019 Performance Reporting System includes 2-year and 5-year targets for many of its objectives and describes some of the sources of data that will be used to measure each target. However, it does not include a description of the system that will accomplish each of the requirements in the ONDCP Reauthorization Act of 2006. For example, it does not describe a performance measurement system that identifies major programs and activities of the National Drug Control Program agencies that support the goals and annual objectives of the National Drug Control Strategy. Such programs and activities could indicate how ONDCP expects to achieve these objectives, such as how to educate the public about the dangers of drug use, or how to expand access to evidence-based treatment. Additionally, it does not describe how the performance measurement system monitors consistency between the drug-related goals and objectives of the National Drug Control Program agencies and ensures that each agency’s goals and budgets support and are fully consistent with the National Drug Control Strategy. ONDCP officials stated they believe the 2019 Performance Reporting System meets the statutory requirement for a description of a performance measurement system. The SUPPORT Act, as originally enacted in October 2018, no longer required a description of a performance measurement system. However, the ONDCP Technical Corrections Act of 2019, enacted in November 2019, amended the SUPPORT Act to reinstate the requirement for a description of a performance measurement system.
Therefore, this requirement will apply to the 2020 National Drug Control Strategy and future Strategy iterations.

ONDCP Has Met Some SUPPORT Act Requirements That GAO Reviewed but Its Approach to Meeting Others Does Not Incorporate Key Planning Elements

ONDCP Has Addressed Requirements for New Coordinator Positions

As of August 2019, ONDCP filled all five coordinator positions described in the SUPPORT Act, two of which are substantively new positions. Specifically, ONDCP officials stated that they have designated officials for the new positions of performance budget coordinator and emerging and continuing threats coordinator. By filling each of these positions, ONDCP is better positioned to fulfill the responsibilities for which each position is accountable, as described in figure 2 below.

ONDCP’s Approach to Meeting Selected New Requirements for the National Drug Control Strategy and the Drug Control Data Dashboard Does Not Incorporate Key Planning Elements

As of October 2019, ONDCP officials could not provide in writing or otherwise describe key planning elements to ensure ONDCP can meet selected new requirements in the SUPPORT Act related to the development of the 2020 and future National Drug Control Strategy iterations, and related to the development and implementation of the Drug Control Data Dashboard. Figure 3 outlines the selected requirements for the Strategy, which were effective upon enactment of the SUPPORT Act in October 2018. Each of the four selected SUPPORT Act requirements described in figure 3 requires ONDCP to include specific information in the 2020 and future National Drug Control Strategy iterations. For example, for each comprehensive, research-based, long-range, quantifiable goal, the National Drug Control Strategy must contain (1) a description of how each goal will be achieved; (2) a performance evaluation plan for each goal; and (3) a description of how each goal was determined.
The National Drug Control Strategy must also include a plan to expand treatment for substance use disorders. Officials from ONDCP and selected agencies told us that in spring 2019 ONDCP requested that the National Drug Control Program agencies determine how their existing programs and activities align with the 2019 National Drug Control Strategy, including the goals and objectives articulated in the 2019 Performance Reporting System. In October 2019, ONDCP officials told us that the 2020 Strategy would be issued in accordance with the SUPPORT Act, by the first Monday in February (February 3, 2020). ONDCP also provided us with two documents to describe its approach for meeting this deadline. One document includes a table that lists SUPPORT Act requirements along with the ONDCP component(s) responsible for implementation and the deadline. The other document provides a high-level summary of the National Drug Control Strategy development and interagency review process. For example, the plan to monitor progress on the drafting of components’ sections of the Strategy notes that it is to occur through “as-needed (but frequent)” meetings with the deputy chief of staff and the components and their heads. The extensive nature of the new SUPPORT Act requirements, as described above, indicates that significant implementation steps may be necessary, such as a description of the specific steps necessary to accomplish this overarching task, identification of who will be responsible for each step, and a schedule of interim milestones. However, neither of these documents describes such critical implementation steps. Further, neither specifies what resources or processes, for example, would be needed and by what specific milestone date ONDCP would accomplish any particular step to complete the overall work in a timely manner.
For example, the document that includes the table indicates that the deadline for all requirements related to the National Drug Control Strategy is February 2020. However, some requirements associated with the development of the Strategy, such as consultation requirements, would need to be completed before the Strategy’s due date—February 2020. According to Standards for Internal Control in the Federal Government under Internal Control Principle 6, to achieve an entity’s mission, management should define objectives in specific terms so they are understood at all levels of the entity. This involves clearly defining what is to be achieved, who is to achieve it, how it will be achieved, and the time frames for achievement—in other words, key planning elements. Standards for project management also state that managing a project involves developing a plan with specific actions and milestone dates. Defining these key planning elements will help provide assurance that ONDCP’s efforts will result in a National Drug Control Strategy—for 2020 and future years—that fully addresses the requirements of the SUPPORT Act. In addition, developing and documenting these planning elements would help ONDCP structure its planning efforts through consideration of resource investments, time frames, and any necessary processes, policies, roles, and responsibilities to address each requirement. Furthermore, implementing these planning elements will help ensure that ONDCP follows a routine planning process going forward, and that future iterations of the National Drug Control Strategy that ONDCP develops are consistent with the law. Additionally, as of December 2019, ONDCP has not documented key planning elements to ensure it will meet the SUPPORT Act’s requirements for the Drug Control Data Dashboard, to make timely information publicly available on the scope and complexity of drug use and drug control activities.
The SUPPORT Act includes requirements for what data are to be included in the Drug Control Data Dashboard as well as its functionality, to ensure it is searchable and sortable. Figure 4 outlines the requirements for the Drug Control Data Dashboard, which were effective upon enactment of the SUPPORT Act in October 2018. In August 2019, ONDCP posted a public version of the Drug Control Data Dashboard that included information from the 2019 Data Supplement in spreadsheet format, but did not provide all of the data required by the SUPPORT Act. For example, the Drug Control Data Dashboard does not include required data on the extent of the unmet need for substance use disorder treatment. ONDCP officials shared information regarding potential data sources they may use to fulfill the additional required data elements. In addition, ONDCP officials told us that some data requirements listed in the statute do not exist at this time. For example, ONDCP officials stated that data do not exist regarding the known and estimated flow of substances into the United States for the current calendar year and each of the three previous years. ONDCP officials stated that there was more work necessary to ensure all the required data are incorporated into the Drug Control Data Dashboard. At that time, they also stated that they expected to address all required elements by the end of 2019. However, we found that they do not have key planning elements, such as a specific timeline with interim milestones or documented plans for when and how they would complete this work. ONDCP subsequently posted an updated version of the Drug Control Data Dashboard, which we reviewed in December 2019. While the updated Drug Control Data Dashboard identifies required data elements that are unavailable, ONDCP has not addressed how or when it plans to provide them, such as by identifying alternative data sources or identifying additional resources that may be necessary for enhanced data collection efforts.
The SUPPORT Act also requires the Drug Control Data Dashboard to be machine-readable and searchable by year, agency, drug, and location, to the extent practicable. Officials stated in September 2019 they planned to add this functionality to the Drug Control Data Dashboard in the fall of 2019. In written comments on a draft of this report in December 2019, ONDCP indicated that the data have been posted in a machine-readable, sortable, and searchable format. However, as of December 2019, we found that the Drug Control Data Dashboard is still not fully searchable by year, agency, drug, and location. We have previously reported on key practices for agencies to follow when reporting government data. These practices describe, for example, that agencies should ensure their website’s data search functions and overall interface are intuitive to users. While effective implementation of such functions can be a significant undertaking, ONDCP does not have plans to account for timing, content, functionality, or any additional resources required to fully implement this requirement. ONDCP officials stated in September 2019 they may need to consult Congress about additional resources to fulfill all of the requirements related to the Drug Control Data Dashboard, but stated that they do not have specific plans for what resources they may request. Internal control standards call for agencies to define key planning elements, including how a task will be accomplished and associated time frames. Developing and documenting key planning elements—including resource investments, time frames, and any necessary processes, policies, roles, and responsibilities—will better position ONDCP to fully implement all of the law’s requirements for the Drug Control Data Dashboard. Once implemented, the Drug Control Data Dashboard will help enable ONDCP to capitalize on available data to better understand the scope and nature of the drug crisis.
Conclusions

ONDCP is responsible for leading the nation’s fight against a persistent drug epidemic that continues to devastate Americans’ lives. However, the 2019 National Drug Control Strategy does not fully comply with the law, and the agency has not developed key planning elements to help ensure it will meet its significant additional responsibilities under the SUPPORT Act. These responsibilities include issuing the National Drug Control Strategy in accordance with statutory requirements to help prioritize and measure key efforts to address the drug epidemic and creating a Drug Control Data Dashboard that contains timely information about the scope and complexity of the drug epidemic. These responsibilities also extend beyond the upcoming 2020 Strategy, with requirements to complete future Strategy iterations on a regular basis. Developing and documenting key planning elements—such as resource investments, time frames, and any necessary processes, policies, roles, and responsibilities—will help ONDCP structure its ongoing efforts. Implementing this approach will then better position ONDCP to meet statutory requirements for the next Strategy, due in February 2020, and satisfy all requirements related to the Drug Control Data Dashboard. Implementing this approach over time will also help ONDCP ensure it is meeting statutory requirements for future iterations of the National Drug Control Strategy.

Recommendations for Executive Action

We are making 4 recommendations to ONDCP. The Director of ONDCP should develop and document key planning elements to help the agency meet the SUPPORT Act requirements for the 2020 National Drug Control Strategy and future Strategy iterations. These planning elements should include descriptions of resource investments, time frames, and any processes, policies, roles, and responsibilities needed to address each requirement.
(Recommendation 1)

The Director of ONDCP should—after developing and documenting key planning elements to meet the SUPPORT Act requirements—routinely implement an approach, based on these planning elements, to meet the requirements for the 2020 National Drug Control Strategy and future Strategy iterations. (Recommendation 2)

The Director of ONDCP should develop and document key planning elements to help the agency meet the SUPPORT Act requirements to establish a Drug Control Data Dashboard. These planning elements should include descriptions of resource investments, time frames, and any processes, policies, roles, and responsibilities needed to address this requirement. (Recommendation 3)

The Director of ONDCP should—after developing and documenting key planning elements—implement an approach, based on these planning elements, to meet the SUPPORT Act requirements to establish a Drug Control Data Dashboard. (Recommendation 4)

Agency Comments and Our Evaluation

We provided a draft of this report for review and comment to ONDCP, DHS, DOJ, and HHS. ONDCP provided written comments, which are summarized below and reproduced in appendix I. ONDCP, DHS, and DOJ also provided technical comments, which we incorporated, as appropriate. In an email, an HHS official stated that HHS did not have any comments on the report. In its written comments, ONDCP stated that it accepted the first two recommendations regarding the need for a robust internal planning process for National Drug Control Strategies. Specifically, the first recommendation is for ONDCP to develop and document key planning elements to help the agency meet the SUPPORT Act requirements for the 2020 National Drug Control Strategy and future Strategy iterations. The second recommendation is for ONDCP to routinely implement an approach to meet these requirements.
In particular, ONDCP agreed to implement key planning elements for future Strategies that will include detailed descriptions of planned steps, identifying which ONDCP component will be responsible for each step, resource investments, interim milestones, and overall time frames. If implemented as planned, these actions would address the intent of these recommendations. Regarding the third and fourth recommendations related to the Drug Control Data Dashboard, ONDCP noted that these recommendations have been rendered moot because the agency has already fully complied with posting the Data Dashboard to its website. ONDCP also stated that it has posted to the Data Dashboard all of the drug-related data required by ONDCP’s statute that currently exists. Further, ONDCP stated that the data has been posted in machine-readable, sortable, and searchable format as required and it will be updated on a continuous basis throughout the year as new data become available. While ONDCP has included additional information on the Dashboard, the two recommendations are to develop, document, and implement key planning elements for the Dashboard to fully meet the law’s requirements, which ONDCP has not yet done. For example, ONDCP identifies in the Dashboard which of the required data elements are unavailable, such as required data on the extent of the unmet need for substance use disorder treatment. However, as stated in the report, ONDCP has not documented key planning elements for how it will address these missing data. Such planning elements could include approaches for collecting the missing data, such as articulating a plan to work with Congress to identify alternative data sources or to identify additional resources that may be necessary for enhanced data collection efforts. Furthermore, ONDCP has not developed or implemented key planning elements to ensure the Drug Control Data Dashboard has the search features noted in the statute. 
In its current format, the Dashboard is not fully searchable by year, agency, drug, and location. While the statute indicates that search features should have been implemented “to the extent practicable,” ONDCP did not explain why it was not practicable to implement them. Therefore, we continue to believe that developing, documenting, and implementing key planning elements for the Dashboard to fully meet the law’s requirements will help enable ONDCP to capitalize on available data to better understand the scope and nature of the drug crisis. ONDCP also noted several points related to our specific findings, as discussed below. First, ONDCP noted that it did issue robust drug budget guidance to National Drug Control Program agencies during 2017 and 2018. The report acknowledges that ONDCP provided this guidance. However, as explained in the report, the guidance is statutorily required to address funding priorities developed in the National Drug Control Strategy. Since ONDCP did not issue a Strategy in 2017 or 2018, it could not meet this statutory requirement. In addition, ONDCP stated that it maintains that the 2019 National Drug Control Strategy met all statutory requirements, and therefore does not agree with our analysis of its adherence to those requirements. ONDCP also noted that the four requirements we assessed constitute only a small portion of the many requirements for the 2019 National Drug Control Strategy and that the report gives the misleading impression that ONDCP did not comply with some significant number of requirements. We recognize that there are a number of requirements for the Strategy; however, as stated in the report, our review focused on these four provisions because we determined them to be significant to ONDCP’s role in setting a strategic direction to oversee and coordinate national drug control policy, and because they are critical to ensuring a framework for measuring results.
Specifically, these provisions require the Strategy to include annual quantifiable and measurable objectives and specific targets; a 5-year projection for program and budget priorities; specific drug trend assessments; and a description of a performance measurement system. As detailed in the report, we found that the 2019 Strategy addressed some—but not all—of these four statutory requirements. For example, we found that the Strategy did not include a 5-year projection for budget priorities and included only some information related to specific drug trend assessments. In its written comments, ONDCP provided additional explanation for why it did not agree with our characterization of the requirements. For example, ONDCP stated that it is not able to provide quantitative fiscal year projections for future years because this would go against long-standing Office of Management and Budget policy. Related to drug trend assessments, ONDCP noted that it reports data generated by other government agencies, and that policy research funding for ONDCP has not been appropriated since fiscal year 2011. We made recommendations, which ONDCP agreed to implement, focused on developing and implementing key planning elements such as descriptions of resource investments; time frames; and processes, policies, and responsibilities needed to address each requirement. Implementing these planning elements could, for example, help ensure that ONDCP addresses any policy considerations or additional resources needed to help ensure that future iterations of the Strategy fully meet all statutory requirements. We are sending copies of this report to the appropriate congressional committees, the Director of the Office of National Drug Control Policy, the Secretary of the Department of Health and Human Services, the Acting Secretary of the Department of Homeland Security, the Attorney General, and other interested parties.
In addition, this report is available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact Triana McNeil at (202) 512-8777 or McNeilT@gao.gov, or Mary Denigan-Macauley at (202) 512-7114. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II.

Appendix I: Comments from the Office of National Drug Control Policy

Appendix II: GAO Contacts and Staff Acknowledgments

GAO Contacts

Staff Acknowledgments

In addition to the contacts named above, Joy Booth (Assistant Director), Will Simerl (Assistant Director), Michelle Loutoo Wilson (Analyst-in-Charge), Billy Commons, Wendy Dye, Jane Eyre, Kaitlin Farquharson, Susan Hsu Michael, Amanda Miller, and Jan Montgomery made key contributions to this report.
Why GAO Did This Study

Almost 70,000 people died from drug overdoses in 2018, according to the latest Centers for Disease Control and Prevention data. The 2018 SUPPORT Act reauthorized ONDCP and imposed new requirements. GAO noted in its March 2019 High Risk report that the federal effort to prevent drug misuse is an emerging issue requiring close attention. Pursuant to 21 U.S.C. § 1708a(b), GAO has periodically assessed ONDCP's programs and operations. This report assesses the extent to which ONDCP (1) met selected statutory requirements related to the National Drug Control Strategy in 2017, 2018, and 2019, and (2) has planned or implemented actions to meet selected new requirements in the SUPPORT Act. GAO assessed the 2019 Strategy and companion documents against four key statutory requirements that were consistent with or similar to ONDCP's ongoing responsibilities under the SUPPORT Act. GAO also assessed ONDCP's progress in addressing seven new SUPPORT Act requirements, and interviewed ONDCP officials.

What GAO Found

The Office of National Drug Control Policy (ONDCP) is responsible for overseeing and coordinating the development and implementation of U.S. drug control policy across the federal government. However, ONDCP did not issue a National Drug Control Strategy for either 2017 or 2018, as required by statute. ONDCP was also required to assess and certify federal agencies' drug control budgets to determine if they were adequate to meet Strategy goals and objectives. Without a Strategy in 2017 and 2018, ONDCP could not complete this process according to statutory requirements. ONDCP issued a 2019 Strategy and companion documents that addressed some but not all of the selected statutory requirements GAO reviewed. For example, the Strategy and companion documents did not include the required 5-year projection for budget priorities.
The October 2018 Substance Use-Disorder Prevention that Promotes Opioid Recovery and Treatment for Patients and Communities Act (SUPPORT Act) retained some requirements and introduced new ones for ONDCP. ONDCP met some SUPPORT Act requirements GAO reviewed. For example, ONDCP filled all five coordinator positions described in the SUPPORT Act. However, its approach to meeting other requirements does not incorporate key planning elements. For example, the SUPPORT Act requires that future iterations of the Strategy include a description of how each goal will be achieved, performance evaluation plans, and a plan for expanding treatment of substance use disorders. ONDCP could not provide in writing or otherwise describe its planned steps, interim milestones, resource investments, or overall timeframes—all key planning elements—that would provide assurance it can meet these requirements by the deadline for the next Strategy—February 2020. The SUPPORT Act also required ONDCP to publish an online searchable Data Dashboard of drug control data, with information including quantities of drugs and frequency of their use. While ONDCP published (and later updated) a public version of this resource on its website, as of December 2019, it was not complete (e.g., lacked required data on the unmet need for substance use disorder treatment). Further, ONDCP officials had no information on next steps for fully meeting the requirements. Developing, documenting, and implementing key planning elements to meet these requirements—including resource investments, time frames, and any processes, policies, roles, and responsibilities—would be consistent with key principles for achieving an entity's objective and standards for project management. Importantly, doing so would help ONDCP structure its planning efforts and comply with the law. 
What GAO Recommends

GAO is making 4 recommendations to ONDCP to develop, document, and implement key planning elements to meet certain requirements in the SUPPORT Act. ONDCP agreed to implement 2 recommendations related to the Strategy, but disagreed with 2 related to the Drug Control Data Dashboard, noting that recent updates satisfy the law. GAO maintains that they do not fully do so, and that implementing key planning elements would help address the law, as discussed in the report.
Background

Medicare beneficiaries with behavioral health conditions have a diverse range of conditions, of different severity, requiring different types of care. Beneficiaries with mild behavioral health conditions—such as mild depression—may require less complex care than beneficiaries with serious behavioral health conditions—such as schizophrenia—or with multiple interacting behavioral or physical health conditions. Subpopulations of Medicare beneficiaries also may face different behavioral health challenges. For example, dual-eligible beneficiaries—individuals eligible for both Medicare and Medicaid—are three times more likely to have been diagnosed with a major psychiatric disorder than non-dual beneficiaries.

Medicare Services and Providers

Medicare covers services for the diagnosis and treatment of behavioral health conditions, which include the inpatient care covered by Part A and the physician services and outpatient care covered by Part B. Key behavioral health services in Medicare Part B include:

- visits with a physician or other covered provider;
- partial hospitalization program services;
- annual depression screening;
- alcohol misuse screening and counseling;
- psychotherapy;
- screening, brief intervention, and referral to treatment services; and
- behavioral health integration services.

Dual-eligible beneficiaries may be able to access additional behavioral health services through Medicaid that are not available through Medicare. Medicare covers behavioral health services delivered by a range of providers, including psychiatrists and physicians, clinical psychologists, licensed clinical social workers (LCSW), nurse practitioners, physician assistants, and clinical nurse specialists. In order to bill for services provided to Medicare beneficiaries, providers must enroll with CMS. Providers who do not want to enroll in the Medicare program may “opt out” of Medicare.
Behavioral health providers have among the highest opt-out rates, with over 7,000 psychiatrists, psychologists, and LCSWs opting out of Medicare, representing nearly one-third of all providers who opted out of Medicare in 2017. Beneficiaries may still see these providers but must enter into a private contract with them. Medicare will not pay for any services furnished by providers who have opted out, so in these cases, beneficiaries must pay the provider’s entire charge out of pocket. According to researchers, psychiatrists have low participation rates across all forms of insurance, including Medicare, which may be explained, in part, by the reimbursement rates for time-intensive treatments, low supply and high demand for psychiatry services, and high administrative burdens for solo practitioners to participate in insurance programs.

Provision of Information to Medicare Beneficiaries

CMS is required by law to provide information annually to Medicare beneficiaries about their coverage, including benefits and limitations on payment. Various factors affect how beneficiaries receive and process information about behavioral health conditions and their coverage options for behavioral health services. According to HHS, low health literacy is a key barrier that impacts individuals’ ability to comprehend health-related information. Moreover, researchers have found that low health literacy is associated with poor physical and mental health. More specific challenges facing individuals with behavioral health conditions include the stigma surrounding behavioral health conditions that may discourage individuals from seeking help or treatment. According to advocates for Medicare beneficiaries and individuals with behavioral health conditions, some individuals may have caregivers or other support for finding information and engaging in decision-making about their behavioral health care.
Medicare Advantage Plans

According to CMS, one-third (36 percent) of Medicare beneficiaries in 2019 were enrolled in MA plans, which CMS pays on a monthly capitated basis to deliver all covered services needed by an enrollee. MA plans contract with provider networks to deliver care to Medicare beneficiaries and must meet CMS’s network adequacy standards. MA plans may employ care management and utilization management strategies. Care management may include case managers or care coordinators who work with enrollees and providers to manage the care of complex or high-risk enrollees, including those with behavioral health conditions. According to the MA plan officials we interviewed, prior authorization—a utilization management strategy—may be employed for high-cost treatments. Officials from all five MA plans told us that they may have difficulty recruiting behavioral health providers to participate in their network. One study found access to psychiatrists to be more limited than any other physician specialty in MA plan networks, with 23 percent of psychiatrists in a county included in network on average, compared to 46 percent of physicians in a county across all physician specialties in 2015.

Nearly One in Seven Medicare Beneficiaries Used Behavioral Health Services in 2018; Most Services Were Provided by Psychiatrists, Social Workers, and Psychologists

Fourteen Percent of Medicare Beneficiaries Used Behavioral Health Services in 2018, Totaling More than $3 Billion in Spending

Our analysis of Medicare claims data shows that in 2018 approximately 5 million beneficiaries used behavioral health services through Medicare Part B. This represented about 14 percent of the more than 36 million fee-for-service (traditional) Medicare beneficiaries, and CMS paid providers about $3.3 billion for approximately 39.3 million behavioral health services in 2018. (See fig. 1.)
Our analysis of claims data also shows that among Medicare beneficiaries who used behavioral health services in 2018, utilization of the services varied significantly. (See fig. 2.) The average number of services used by Medicare beneficiaries who used behavioral health services in 2018 was eight, while the median was three. Nearly half of all such beneficiaries used between two and seven behavioral health services during the year, and nearly one-third (30 percent) used a single behavioral health service during the year. The 11 percent of beneficiaries who were the highest behavioral health service users used 19 or more behavioral health services (at or above the 90th percentile) during 2018, and accounted for about half of all Medicare expenditures on behavioral health services. Our analysis also found that the services beneficiaries received largely fell into two broad categories in 2018: general patient consultations (53 percent of services) and psychiatry services, including psychotherapy (43 percent of services). Other services, such as central nervous system assessments and drugs administered by providers, accounted for about 5 percent of services. Beneficiaries receiving behavioral health care were largely diagnosed with a condition in at least one of five behavioral health condition categories, each of which contains multiple specific diagnoses. In 2018, 96 percent of all behavioral health services were for a primary diagnosis within one of these five categories. For example, the mood disorder category, which includes diagnoses such as depression and bipolar disorder, accounted for 42 percent of services provided. (See fig. 3.) Medicare claims data for 2018 show that some Medicare beneficiaries used behavioral health services to obtain treatment for SUDs: 7 percent of the behavioral health services in 2018 were for SUDs.
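To make the summary statistics above concrete, the sketch below computes the same kinds of measures (mean, median, and the share of services accounted for by the heaviest users) from a small hypothetical set of per-beneficiary service counts. The numbers and the high-use threshold are illustrative only, not GAO's claims data:

```python
from statistics import mean, median

# Hypothetical counts of behavioral health services used per beneficiary
# in one year (GAO's actual analysis used 2018 Medicare Part B claims).
services = [1, 1, 1, 2, 3, 3, 4, 5, 7, 19, 25, 40]

avg = mean(services)       # skewed upward by a few heavy users
med = median(services)     # the typical beneficiary uses far fewer services

# Share of all services accounted for by beneficiaries at or above a
# high-use threshold (19 services, mirroring the 90th-percentile cutoff).
heavy = sum(n for n in services if n >= 19)
heavy_share = heavy / sum(services)

print(avg, med, round(heavy_share, 2))   # 9.25 3.5 0.76
```

Even in this toy example, the mean sits well above the median and a handful of heavy users account for most services, the same skew the claims data show.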
Moreover, Medicare beneficiaries with SUDs represented 11 percent of beneficiaries using behavioral health services. On average, Medicare beneficiaries with SUDs used five behavioral health services in 2018, fewer than the average used by all beneficiaries with a behavioral health diagnosis. Overall, beneficiaries under age 65 and dual-eligible beneficiaries were disproportionately represented among users of behavioral health services relative to their share of the Medicare population. (See fig. 4.) In 2018, while beneficiaries under age 65 constituted 16 percent of all Medicare beneficiaries, they represented 34 percent of the Medicare beneficiaries who used behavioral health services and accounted for 42 percent of all behavioral health services paid for by Medicare that year. Similarly, while dual-eligible beneficiaries, many of whom are under age 65, constituted 20 percent of all Medicare beneficiaries, they represented 39 percent of the Medicare beneficiaries who used behavioral health services in 2018 and accounted for 45 percent of all behavioral health services paid for by Medicare. Finally, women constituted about 55 percent of all Medicare beneficiaries in 2018 and represented 62 percent of the beneficiaries who used behavioral health services that year.

Two-Thirds of Behavioral Health Services Were Provided by Psychiatrists, Licensed Clinical Social Workers, and Psychologists in 2018

Our analysis of Medicare Part B claims shows that in 2018 two-thirds of behavioral health services (67 percent) were delivered to Medicare beneficiaries by behavioral health specialists: psychiatrists, psychologists, and licensed clinical social workers (LCSW). (See fig. 5.) Psychiatrists provided the most behavioral health services (31 percent), followed by LCSWs (19 percent) and psychologists (17 percent).
A range of other providers delivered the remaining one-third of behavioral health services, including advanced practice providers (16 percent), primary care physicians (11 percent), other physicians (5 percent), and other providers (1 percent). As figure 5 shows, beneficiaries who were relatively high users of behavioral health services received a greater share of services from behavioral health specialists compared to all Medicare beneficiaries who used behavioral health services. More than three-quarters of services (78 percent) provided to the highest service users (those in the 90th percentile with 19 or more services per year) were delivered by behavioral health specialists: psychiatrists (31 percent), LCSWs (25 percent), and psychologists (22 percent). However, this pattern did not hold for Medicare beneficiaries with SUDs. Our analysis showed that beneficiaries with SUDs received 20 percent of their behavioral health services from a behavioral health specialist; the other 80 percent of services were delivered by providers who did not specialize in behavioral health. See appendix I for additional information on Medicare behavioral health utilization.

CMS Uses Various Approaches to Provide Coverage Information to Beneficiaries, but Annual Mailing Does Not Include Explicit Information on SUD Treatment Coverage

CMS Uses Various Communication Approaches to Provide Information to Medicare Beneficiaries on Coverage for Behavioral Health Services

According to CMS officials, the agency's overall strategy for providing information to beneficiaries about coverage of behavioral health services involves a variety of communication and outreach approaches. For example, CMS disseminates information to beneficiaries through written and online publications, Medicare.gov, scripted answers to questions through 1-800-MEDICARE, and social media. CMS is required by law to provide information to beneficiaries about coverage under Medicare.
CMS annually mails the Medicare & You handbook to all beneficiaries; according to CMS officials, it mailed the handbook to 42.6 million households in 2018. The information provided in the publication includes descriptions of benefits and services, a summary of cost sharing, and the types of providers Medicare covers. According to CMS officials, Medicare.gov also includes information about covered benefits and a provider directory, although some listed providers may not be accepting new Medicare patients. According to CMS officials, the most comprehensive source of information on coverage for behavioral health services is the publication Medicare & Your Mental Health Benefits, which is also available at Medicare.gov. We obtained statistics from CMS officials on the frequency with which Medicare beneficiaries requested copies of Medicare & You or the agency's other publications or accessed the agency's web-based tools to obtain information on Medicare coverage, including coverage for behavioral health services. The most frequently accessed in 2018 were Medicare & You, Medicare.gov, and the "What's Covered?" smartphone application. (See table 1.) These sources cover Medicare broadly and describe Medicare benefits in general, rather than dealing specifically with behavioral health. Like CMS, MA plans use different approaches to provide information to their enrollees, including publications, phone calls, and websites. According to officials from the five MA plans in our review, MA plans use multiple modes of communication to meet the preferences of their enrolled populations. MA plans are required to provide information to each enrollee at the time of enrollment and annually thereafter; for example, MA plans must share information about providers reasonably available to enrollees.
MA plans are also required to provide marketing materials to CMS for review to ensure the adequacy and accuracy of the information in the materials. Two of the MA plans in our review offer digital health tools to their enrollees. One plan offers a tool that allows enrollees to communicate with case managers, and another plan provides enrollees access to test results, the ability to refill prescriptions and schedule appointments, and resources for patient education. According to CMS officials, the agency also uses other strategies for providing information to beneficiaries about coverage of behavioral health benefits. CMS officials stated that the agency partners with stakeholders to assist beneficiaries and caregivers seeking help with behavioral health conditions. For example, CMS officials described webinars and workshops the agency conducts to educate partners and stakeholders who, in turn, educate and counsel Medicare beneficiaries. According to agency officials, the webinars cover a range of topics related to Medicare benefits and coverage, including behavioral health. The officials also told us that CMS partners with state health insurance programs to provide information about Medicare, including information to help Medicare beneficiaries understand their coverage. CMS officials also stated that the agency conducts public awareness and outreach campaigns to provide information to beneficiaries.

CMS's Annual Mailing to Beneficiaries Does Not Include Explicit Information on Medicare Coverage for SUD Treatment

Medicare & You—the most widely disseminated source of information on Medicare benefits and coverage—does not provide explicit information about coverage of services for beneficiaries with SUDs, although HHS and CMS have identified addressing SUDs as a top priority.
We reviewed the fall 2019 edition of the Medicare & You publication and found that, while it does provide information on Medicare coverage for behavioral health services, it does not contain an explicit description of the services that may be covered for treatment of SUDs. CMS officials noted that printing the almost 43 million hard copies of the fall 2019 edition of Medicare & You started in July 2019, several months before the rule implementing expanded OUD coverage under Medicare was finalized. In December 2019, CMS updated the 2020 edition of Medicare & You to include information on the expanded OUD treatment benefits authorized by the SUPPORT Act, which were finalized in November 2019 and became effective in January 2020. According to CMS officials, as of December 2019, this updated version was available on Medicare.gov, and it will be sent to all individuals who become eligible for Medicare throughout calendar year 2020. We reviewed the updated 2020 web version of Medicare & You and found that a reference to opioid treatment was included; however, explicit information about Medicare's coverage for other SUDs was not added. Although information on Medicare's coverage for treating OUD is important, OUD represents only a subset of the SUDs for which Medicare beneficiaries may need treatment. Further, several of the advocates we interviewed noted that Medicare beneficiaries would benefit from clearer and more specific information about SUD coverage. According to data from SAMHSA, about one in 10 SUD cases is related to an OUD, while the rest are related to non-opioid substances. We asked CMS officials why the additions to Medicare & You relate only to OUDs, and they explained that it is the only distinct Medicare benefit category for substance abuse treatment.
Officials also stated that while there is no category in Medicare & You for other SUDs specifically, the publication does note some related benefits, such as counseling and services for behavioral issues, alcohol misuse screening, and behavioral health integrative services. However, the alcohol misuse screening benefit is specifically for beneficiaries who do not meet the criteria for alcohol dependency and covers brief counseling in a primary care setting. The description of behavioral health does not specify that SUDs are a behavioral health condition. The absence of information on Medicare's coverage for SUDs in Medicare & You is inconsistent with HHS and CMS strategic priorities related to treatment for SUDs. The Department of Health and Human Services' Fiscal Year 2018-2022 Strategic Plan includes among its strategic objectives reducing the impact of SUDs through treatment. Additionally, CMS has made addressing SUDs a top priority and has a stated commitment to treat SUDs, including OUDs. Beneficiaries lacking information on coverage of SUDs may be less likely to seek treatment.

Conclusions

HHS and CMS have made addressing SUDs a priority. However, in its most widely disseminated publication on Medicare coverage and benefits, CMS does not provide explicit information about the program's coverage for SUD treatment services. As a result, beneficiaries with SUDs may not be aware of this coverage and may not seek needed treatment.

Recommendation for Executive Action

We are making the following recommendation to CMS: The Administrator of CMS should ensure that the Medicare & You publication includes explicit information on the services covered by the Medicare program for beneficiaries with a SUD. (Recommendation 1)

Agency Comments and Our Evaluation

We provided a draft of this report to HHS for review. HHS concurred with our recommendation and provided written comments that are reproduced in app.
II, and technical comments, which we have incorporated as appropriate. In its written comments, HHS stated it would explore opportunities to modify the Medicare & You handbook to ensure beneficiaries with substance use disorders are aware of the services covered by Medicare. HHS also reiterated some of the situations under which substance use disorder treatment may be covered under Medicare, as well as its communication strategies and tools to ensure that beneficiaries and providers are aware of all of the services available under Medicare. We are sending copies of this report to the appropriate congressional committees, the Secretary of Health and Human Services, the Administrator of CMS, and other interested parties. In addition, the report will be available at no charge on GAO's website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or at CosgroveJ@gao.gov. Contact points for our Office of Congressional Relations and Office of Public Affairs can be found on the last page of this report. Other major contributors to this report are listed in appendix III.

Appendix I: Additional Tables on Behavioral Health Utilization among Medicare Beneficiaries, 2018

To produce the tables below describing the utilization of behavioral health services by Medicare beneficiaries and the providers furnishing these services, we analyzed the 2018 Medicare Part B claims file, the most recent data available at the time of analysis. Our analysis only includes Medicare beneficiaries in fee-for-service Medicare because similar reliable information was not available for beneficiaries enrolled in Medicare Advantage.

Appendix II: Comments from the Department of Health and Human Services

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

Acknowledgments

In addition to the contact named above, Lori Achman (Assistant Director), N.
Rotimi Adebonojo (Analyst in Charge), Todd Anderson, Sauravi Chakrabarty, Kelly Krinn, Rich Lipinski, Drew Long, Diona Martyn, Vikki Porter, and Caitlin Scoville made key contributions to this report.
Why GAO Did This Study

Behavioral health disorders often go untreated, potentially leading to negative health consequences. Behavioral health disorders include substance use or mental health disorders. Medicare provides coverage for behavioral health services. The Substance Use-Disorder Prevention that Promotes Opioid Recovery and Treatment for Patients and Communities Act, enacted in 2018, included a provision for GAO to examine Medicare behavioral health services and how beneficiaries are informed of coverage and treatment options. This report (1) describes the utilization of behavioral health services by Medicare beneficiaries and the types of providers furnishing these services, and (2) examines how CMS provides information to beneficiaries about their coverage for behavioral health services. To describe service utilization and provider types, GAO analyzed 2018 Medicare claims data, the most recent data available. To examine how CMS shares information with beneficiaries, GAO reviewed CMS requirements for providing coverage information to beneficiaries, reviewed CMS publications, and interviewed CMS officials.

What GAO Found

GAO's analysis of Medicare claims data shows that in 2018 almost 5 million beneficiaries used behavioral health services—services for mental and substance use disorders. This represented about 14 percent of the more than 36 million fee-for-service (traditional) Medicare beneficiaries and reflects about $3.3 billion in spending. Additionally, about 96 percent of all behavioral health services accessed by Medicare beneficiaries in 2018, the latest data available, were for a primary diagnosis in one of five behavioral health disorder categories. (See figure.) Mood disorders, such as depression and bipolar disorders, accounted for 42 percent of services. SUD services accounted for about 7 percent of all services accessed by beneficiaries.
Further, two-thirds of behavioral health services were provided by psychiatrists, licensed clinical social workers, and psychologists in 2018. The Centers for Medicare & Medicaid Services (CMS), the Department of Health and Human Services' (HHS) agency that administers Medicare, uses various approaches to disseminate information to Medicare beneficiaries about coverage for behavioral health services. As part of these efforts, CMS mails out Medicare & You—the most widely disseminated source of information on Medicare benefits—to all Medicare beneficiaries every year. GAO reviewed the fall 2019 and January 2020 editions of Medicare & You. While the January 2020 edition describes a new coverage benefit for beneficiaries with opioid use disorders, neither edition includes an explicit and broader description of the covered services available for substance use disorders. Both HHS and CMS have stated that addressing substance use disorders is a top priority. Given that coverage for substance use disorders is not explicitly outlined in Medicare's most widely disseminated communication, Medicare beneficiaries may be unaware of this coverage and may not seek needed treatment as a result.
CBP Has Taken Steps to Improve Its Recruiting and Hiring Process, but the Process Remains Lengthy

CBP Has Enhanced Its Recruitment Efforts and Applications for Law Enforcement Officer Positions Have Increased

We reported in June 2018 that CBP increased its emphasis on recruitment by establishing a central recruitment office and increasing its participation in recruitment events. Specifically, CBP's recruitment budget allocated by the centralized recruiting office almost doubled, from approximately $6.4 million in fiscal year 2015 to more than $12.7 million in fiscal year 2017. CBP also more than tripled the total number of recruitment events it participated in, from 905 events in fiscal year 2015 to roughly 3,000 in both fiscal years 2016 and 2017. In addition, we reported that CBP had increased its use of recruitment incentives for OFO specifically from fiscal years 2015 through 2017 to help staff hard-to-fill locations. A recruitment incentive may be paid to a newly appointed employee if an agency determines that a position is likely to be difficult to fill in the absence of such an incentive. From fiscal years 2015 through 2017, OFO increased the number of recruitment incentives it paid to CBP officers from nine incentives in two locations at a total cost of about $77,600 to 446 incentives across 18 locations at a cost of approximately $4.3 million. AMO and Border Patrol did not use recruitment incentives from fiscal years 2015 through 2017. As a result of its efforts, CBP also experienced an increase in the number of applications it received for law enforcement officer positions across all three operational components from fiscal years 2013 through 2017. For example, with the exception of fiscal year 2014, applications for Border Patrol agent positions increased every year, from roughly 27,000 applications in fiscal year 2013 to more than 91,000 applications in fiscal year 2017.
Further, during the same period, applications for CBP officer positions increased from approximately 22,500 to more than 85,000, and applications for AMO's law enforcement officer positions increased from about 2,000 to more than 5,800.

CBP's Hiring Process Has Improved, but the Process Remains Lengthy

As we reported in June 2018, CBP's law enforcement applicants undergo a lengthy and rigorous hiring process that includes nearly a dozen steps, including a background investigation, medical examination, physical fitness test, and polygraph examination. Several of these steps can be done concurrently—for example, CBP can begin the background investigation while the candidate completes the physical fitness test and medical examination steps. Figure 1 depicts the hiring process for Border Patrol agent and CBP officer positions. From fiscal years 2015 through 2017, CBP generally improved its performance in two key metrics used to assess the efficiency and effectiveness of its hiring process for law enforcement officer positions. Specifically, CBP reduced its time-to-hire (the average number of days that elapsed between the closing date of a job announcement and an applicant's entry-on-duty date) and increased the percentage of applicants that are hired. With regard to the time-to-hire metric, as shown in table 1, CBP's time-to-hire decreased from fiscal years 2015 through 2017. With regard to the percentage of applicants that are hired, CBP's overall applicant pass rate metric calculates the estimated percentage of applicants who successfully complete the hiring process and enter on duty. CBP data indicate that overall applicant pass rates more than doubled for CBP officer and Border Patrol agent positions from fiscal years 2016 through 2017.
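As a rough illustration of how these two metrics are computed, the sketch below derives a time-to-hire average and an overall applicant pass rate from hypothetical records; the dates, applicant totals, and variable names are ours, not CBP's:

```python
from datetime import date
from statistics import mean

# Hypothetical hires: (job announcement closing date, entry-on-duty date).
hires = [
    (date(2017, 1, 15), date(2017, 10, 16)),   # 274 days elapsed
    (date(2017, 3, 1), date(2017, 12, 5)),     # 279 days elapsed
]

# Time-to-hire: average days between announcement closing and entry on duty.
time_to_hire = mean((eod - closed).days for closed, eod in hires)

# Overall applicant pass rate: estimated share of applicants who complete
# every hiring step and enter on duty (illustrative totals).
applicants = 1_000
entered_on_duty = 40
pass_rate = entered_on_duty / applicants

print(time_to_hire, pass_rate)   # 276.5 0.04
```

The same two numbers move in opposite directions as the process improves: time-to-hire falls while the pass rate rises, which is the pattern the table 1 data show for fiscal years 2015 through 2017.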
CBP officials told us that higher overall applicant pass rates, paired with recent increases in the number of applications received by the agency, are starting to result in an increase in the number of law enforcement officers hired as applicants complete CBP's hiring process and officially enter on duty. As we reported in June 2018, CBP data indicated that more law enforcement officers entered on duty in the first half of fiscal year 2018 than in the first half of fiscal year 2017. Specifically, the total number of CBP officers and Border Patrol agents that entered on duty in the first half of fiscal year 2018 increased by roughly 50 percent and 83 percent, respectively, when compared to the same period of the prior fiscal year. Further, the total number of AMO law enforcement officers that entered on duty in the first half of fiscal year 2018 more than doubled from the same period of fiscal year 2017. As we reported in June 2018, CBP has made efforts to improve its hiring process by revising certain aspects of the process, among other things. According to agency officials, these efforts to streamline and improve CBP's overall hiring process have collectively resulted in the decreased time-to-hire and increased overall applicant pass rates discussed above. For example, in March 2017, CBP was granted the authority to waive the polygraph examination for veterans who meet certain criteria, including those who hold a current, active Top Secret/Sensitive Compartmented Information clearance. Also, in April 2017, CBP received approval from the Office of Personnel Management to use direct-hire authority for law enforcement positions, which allows CBP to expedite the typical hiring process by eliminating competitive rating and ranking procedures and veterans' preference. As of March 31, 2018, 77 CBP officers and 107 Border Patrol agents had entered on duty through this authority.
CBP has also made revisions to specific steps in its hiring process, including the application, entrance examination, and polygraph examination, among others. For example, in fiscal year 2016, CBP reordered its hiring process to place the entrance examination as the first step directly after an applicant submitted an application. Prior to this change, CBP conducted qualification reviews on applicants to ensure they met position requirements before inviting them to take the entrance exam. According to CBP officials, this updated process provided applicants with the opportunity to obtain a realistic preview of the job they were applying for earlier in the hiring process. These officials explained that this helps to ensure that only those applicants who are committed to completing the hiring process and entering on duty at CBP continue through the hiring pipeline, which may help to address high applicant discontinue rates (e.g., roughly half of all eligible applicants in fiscal year 2015 did not take the exam). According to CBP officials, this revision also created efficiencies as the agency no longer has to spend time and resources on completing qualification reviews for applicants who either did not show up to take the exam or failed the exam itself. CBP has also made several changes to its polygraph examination process step, which has consistently had the lowest pass rate of any step in its hiring process. For example, among other things, CBP has increased the number of polygraph examiners available to administer the test, according to agency officials, and was piloting a new type of polygraph exam. According to CBP officials, the new examination focuses on identifying serious crimes and is sufficiently rigorous to ensure that only qualified applicants are able to pass. 
Preliminary data from CBP’s pilot show that this new exam has demonstrated higher pass rates when compared with CBP’s traditional polygraph exam while also taking less time, on average, per test to complete. At the time of our review, it was too early to tell if these efforts will result in improvements to the polygraph examination step. Available CBP data indicate mixed results. Specifically, while the average duration to complete this step decreased for all law enforcement officer positions from fiscal years 2015 through 2017, pass rates also declined slightly over this same period. For example, for Border Patrol agents, the pass rate declined from 28 to 26 percent, while for CBP officers, it declined from 32 to 25 percent. While CBP had reduced its time-to-hire and made efforts to improve its hiring process for law enforcement officers, CBP officials noted that the hiring process remained lengthy, which directly affected the agency’s ability to recruit and hire for law enforcement positions. CBP officials also stated that their ability to further improve CBP’s time-to-hire and increase law enforcement hires was affected by hiring process steps that can be challenging and time-consuming for applicants to complete, as well as CBP’s reliance on applicants to promptly complete certain aspects of the process. In fiscal year 2017, it took an average of 274 days for Border Patrol agent applicants and 318 days for CBP officer applicants to complete all hiring steps and enter on duty. According to a leading practice in hiring we identified for such positions, agencies should ensure that the hiring process is not protracted or onerous for applicants. According to CBP officials, the agency’s multi-step hiring process for its law enforcement officer positions was intentionally rigorous and involves extensive applicant screening to ensure that only qualified candidates meet the technical, physical, and suitability requirements for employment at CBP. 
Even so, CBP officials across several components told us that the agency's time-to-hire was too long and directly affected the components' ability to recruit and hire for law enforcement positions. For example, OFO officials told us that the longer the hiring process takes to complete, the more likely it is that an applicant will drop out. Further, qualified applicants may also decide to apply for employment at a competing law enforcement agency that may have a less rigorous process than CBP's, according to CBP officials. One factor that affects CBP's ability to efficiently process and onboard law enforcement officers is that specific hiring process steps are time-consuming and challenging for candidates to complete. For example, CBP officials cited the polygraph examination as a significant bottleneck within CBP's hiring process. In addition to having the lowest pass rate of any step in CBP's process, the polygraph examination also took CBP officer and Border Patrol agent applicants, on average, the longest amount of time to complete in fiscal year 2017—74 days and 94 days, respectively. Further, CBP officials told us that these already lengthy time frames may increase further because of the growing number of applicants for CBP's law enforcement positions. In addition, on average, it took CBP law enforcement officer applicants across all three components 55 days or more to complete the medical examination and more than 60 days to complete the background investigation.

CBP's Accenture Contract Is Intended to Further Enhance CBP's Recruitment and Hiring Efforts

In November 2017, CBP hired a contractor—Accenture Federal Services, LLC—to help the agency recruit and hire the 5,000 Border Patrol agents called for in Executive Order 13767, as well as an additional 2,000 CBP officers and 500 AMO personnel. Specifically, at the time of our June 2018 report, the contract had a total potential period of 5 years at a not-to-exceed value of $297 million.
The contract included a base year and four 1-year option periods, which CBP may exercise at its discretion for a total potential period of 5 years. Under this performance-based contract, Accenture is responsible for enhancing CBP’s recruitment efforts and managing the hiring process for those applicants it recruits. We reported that the Accenture contract is intended to enhance CBP’s recruitment efforts by improving its marketing strategy and utilizing new ways to capture and analyze data to better inform recruitment efforts, according to CBP officials. To meet target staffing levels, CBP expected that the contractor would augment CBP’s current hiring infrastructure while pursuing new and innovative hiring initiatives. Specifically, the contractor is responsible for implementing the same hiring process steps and ensuring that all applicants recruited by Accenture meet CBP’s standards. CBP officials also told us that Accenture has the flexibility to pursue novel hiring tactics and pilot initiatives that CBP may not have considered or been able to undertake. For example, Accenture plans to pilot innovative ways to reduce the time-to-hire, including by streamlining steps in the hiring process, which could help to improve CBP’s overall process and generate increased hires for law enforcement positions. At the time of our June 2018 report, some key issues were still being negotiated between CBP and the contractor. For example, while CBP officials told us that the main metric used to assess Accenture’s effectiveness will be the total number of hires the contractor produces, they were still working to finalize other key metrics for evaluating the contractor’s effectiveness as well as an oversight plan to ensure the contractor operates according to agency requirements. As a result, we reported that it was too early to determine whether these initiatives would help increase the number and quality of applicants for CBP’s law enforcement officer positions. 
We also reported that it was too early to evaluate whether the contractor would be able to efficiently and effectively provide the surge hiring capacity CBP needs to achieve its staffing goals.

CBP Has Enhanced Its Retention Efforts, but Does Not Systematically Collect and Analyze Data on Departing Law Enforcement Personnel

Retaining Law Enforcement Officers in Hard-to-Fill Locations Has Been Challenging for CBP

In June 2018, we reported that CBP's annual rates of attrition were relatively low, but CBP faced challenges retaining law enforcement officers in hard-to-fill locations. From fiscal years 2013 through 2017, OFO's annual attrition rates for the CBP officer position were consistent at about 3 percent, while rates for the Border Patrol agent and AMO Marine Interdiction Agent positions were below 5 percent in 4 out of the 5 fiscal years we reviewed. When we compared CBP's annual attrition rates for these positions to those of other selected law enforcement agencies, we found that CBP's attrition rates were similar to U.S. Immigration and Customs Enforcement's (ICE) annual attrition rates for its law enforcement positions and generally lower than those of the Secret Service and the Federal Bureau of Prisons. Annual attrition rates for AMO's aviation positions were higher, ranging from 5.0 percent to 9.2 percent for the Air Interdiction Agent position and 7.8 percent to 11.1 percent for the Aviation Enforcement Agent position. Even so, from fiscal years 2015 through 2017, attrition rates for these positions generally remained lower than those of the Secret Service and the Bureau of Prisons. In addition, from fiscal years 2013 through 2017, CBP's ability to hire more law enforcement officers than it lost varied across positions. Specifically, CBP consistently hired more CBP officers and Aviation Enforcement Agents than it lost.
Further, while CBP generally maintained its staffing levels for Marine Interdiction Agents, the agency consistently lost more Border Patrol agents and Air Interdiction Agents than it hired. Even so, onboard staffing levels for all five of CBP’s law enforcement officer positions have consistently remained below authorized staffing levels. CBP has acknowledged that improving its retention of qualified law enforcement personnel is critical in addressing staffing shortfalls, but CBP officials identified difficulties in retaining key law enforcement staff as a result of geographically-remote and hard-to-fill duty locations. CBP officials across all three operational components cited location—and specifically employees’ inability to relocate to posts in more desirable locations—as a primary challenge facing the agency in retaining qualified personnel. Border Patrol officials explained that duty stations in certain remote locations present retention challenges due to quality-of-life factors. For example, the officials told us that agents may not want to live with their families in an area without a hospital, with low-performing schools, or with relatively long commutes from their homes to their duty station. Border Patrol’s difficulty in retaining law enforcement staff in such locations is exacerbated by competition with other federal, state, and local law enforcement organizations for qualified personnel. According to Border Patrol officials, other agencies are often able to offer more desirable duty locations—such as major cities—and, in some cases, higher compensation. CBP data indicate that Border Patrol agents consistently leave the component for employment with other law enforcement agencies, including OFO as well as other DHS components such as ICE. 
For example, while retirements accounted for more than half of annual CBP officer losses from fiscal years 2013 through 2017, they accounted for less than a quarter of annual Border Patrol agent losses, indicating that the majority of these agents are not retiring but are generally leaving to pursue other employment. Further, according to CBP data, the number of Border Patrol agents departing for employment at other federal agencies increased steadily, from 75 agents in fiscal year 2013 to 348 agents in fiscal year 2017—or nearly 40 percent of all Border Patrol agent losses in that fiscal year. Border Patrol officials told us, for example, that working a standard day shift at ICE in a controlled indoor environment located in a major metropolitan area for similar or even lower salaries presents an attractive career alternative for Border Patrol agents who often work night shifts in extreme weather in geographically remote locations. The President of the National Border Patrol Council also cited this challenge, stating that unless Border Patrol agents have a strong incentive to remain in remote, undesirable locations—such as higher compensation when compared with other law enforcement agencies—they are likely to leave the agency for similar positions located in more desirable locations. While OFO officials told us the component did not face an across-the-board challenge in retaining CBP officers, they have had difficulty retaining officers in certain hard-to-fill locations that may be geographically remote or unattractive for families, such as Nogales, Arizona, and San Ysidro, California. As a result, CBP officer staffing levels in these locations have consistently remained below authorized targets. AMO has also had difficulty retaining its law enforcement personnel—and particularly its Air Interdiction Agent staff—in hard-to-fill locations, such as Aguadilla, Puerto Rico, and Laredo, Texas.
However, given the unique qualifications and competencies required for the Air Interdiction Agent position, AMO does not compete with other law enforcement organizations. Instead, AMO officials told us they compete with the commercial airline industry for qualified pilots. Specifically, they stated that this competition is exacerbated by a nationwide shortage of pilots. In addition, AMO officials explained that there is a perception among applicants that commercial airlines are able to offer pilots more desirable locations and higher compensation. However, they told us that AMO generally provided pilots with higher starting salaries than many regional airlines and than most career options available to helicopter pilots.

CBP Has Taken Steps to Address Retention Challenges

All three CBP operational components have taken steps to retain qualified law enforcement personnel by offering opportunities for employees to relocate to more desirable locations and pursuing the use of financial incentives, special salary rates, and other payments and allowances.

Relocation opportunities. Border Patrol, OFO, and AMO have formal programs that provide law enforcement officers with opportunities to relocate. For example, in fiscal year 2017, Border Patrol implemented its Operational Mobility Program and received initial funding to relocate about 500 Border Patrol agents to new locations based on the component's staffing needs. According to Border Patrol officials, retaining current employees is a top focus for leadership at the component and this program provides Border Patrol agents with opportunities for a paid relocation to a more desirable location at a lower cost to CBP than an official permanent change of station transfer. As of April 2018, Border Patrol officials told us that 322 Border Patrol agents had accepted reassignment opportunities through the program and the component hoped to continue receiving funding to provide these opportunities.
Financial Incentives and Other Payments and Allowances. CBP’s three operational components have also taken steps to supplement employees’ salaries through the use of human capital flexibilities—such as retention and relocation incentives and special salary rates—as well as other payments and allowances. CBP’s goal in pursuing these human capital flexibilities is to retain current employees—especially in remote or hard-to-fill locations—who are likely to internally relocate within CBP to more desirable duty locations or depart the agency for similar positions at other law enforcement organizations or commercial airlines. However, we found that from fiscal years 2013 through 2017, CBP’s use of such financial incentives and other payments was limited, as the agency paid a total of four retention incentives and 13 relocation incentives, and implemented one special salary rate for all positions during this 5-year period. From fiscal years 2013 through 2017, Border Patrol did not offer retention incentives to agents and paid two relocation incentives to transfer Border Patrol agents to Artesia, New Mexico, and Washington, D.C., at a cost of roughly $78,000. However, in fiscal year 2018, Border Patrol increased its use of relocation incentives to facilitate the transfer of agents to duty stations along the southwest border that are less desirable due to the remoteness of the location and lack of basic amenities and infrastructure. Specifically, as of April 2018, 67 Border Patrol agents had received such incentives to relocate to duty stations in Ajo, Arizona; Calexico, California; and Big Bend, Texas; among others. While Border Patrol did not offer retention incentives during our review period, it submitted a formal request to CBP leadership in February 2018 for a 10 percent across-the-board retention incentive for all Border Patrol agents at the GS-13 level and below, which represents the majority of the component’s frontline workforce. 
According to Border Patrol documentation, these incentives, if implemented, could help reduce Border Patrol’s attrition rate—which has consistently outpaced its hiring rate—by helping retain agents who may have otherwise left Border Patrol for similar positions in OFO, ICE, or other law enforcement agencies. According to CBP officials, as of April 2018, CBP leadership was evaluating Border Patrol’s group retention incentive request, including the costs associated with implementing this 10 percent across-the-board incentive. In addition, as the incentive would benefit Border Patrol agents in all of the component’s duty locations, the extent to which this effort would be effective in targeting agent attrition in the remote locations that represent CBP’s largest staffing challenges remains to be seen. Border Patrol approved the 10 percent retention incentive and is awaiting funding for implementation, according to officials. From fiscal years 2013 through 2017, OFO paid a total of four retention incentives at a cost of $149,000 to retain CBP officers in Tucson, Arizona; Detroit, Michigan; Carbury, North Dakota; and Laredo, Texas. Further, OFO paid seven relocation incentives at a cost of approximately $160,000 to relocate personnel to the hard-to-fill ports of Alcan and Nome, Alaska; Coburn Grove, Maine; and Detroit, Michigan. One OFO official told us OFO did not regularly use these incentives because its relatively low annual attrition rates make it difficult to propose a persuasive business case to CBP leadership that such incentives are necessary. Further, another OFO official explained that OFO’s strategy is focused on using recruitment incentives to staff hard-to-fill locations with new employees. 
From fiscal years 2013 through 2017, AMO did not offer retention incentives to law enforcement personnel and paid a total of four relocation incentives to transfer three Air Interdiction Agents and one Marine Interdiction Agent to Puerto Rico at a cost of approximately $84,000. However, AMO has taken steps to pursue additional human capital flexibilities to address its difficulty in retaining Air Interdiction Agents, including a group retention incentive and a special salary rate.

CBP Does Not Have a Systematic Process to Capture and Analyze Data on Departing Law Enforcement Officers

In June 2018, we reported that CBP does not have a systematic process for capturing and analyzing information on law enforcement officers who are leaving, such as through an exit interview or survey. As a result, the agency does not have important information it could use to help inform future retention efforts. Standards for Internal Control in the Federal Government states that management should obtain relevant data from reliable sources and process these data into quality information to make informed decisions in achieving key objectives. Taking steps to ensure that the agency's operational components are systematically collecting and analyzing complete and accurate information on all departing law enforcement officers—including the factors that influenced their decision to separate—would better position CBP to understand its retention challenges and take appropriate action to address them. We recommended that CBP should ensure that its operational components systematically collect and analyze data on departing law enforcement officers and use this information to inform retention efforts. CBP agreed with the recommendation. CBP officials reported in February 2019 that they developed and implemented a CBP-wide exit survey in August 2018 and have taken steps to promote the survey and encourage exiting CBP employees to fill it out.
The officials also noted that they plan to analyze the survey results on a quarterly basis starting in April 2019. These actions, if fully implemented, should address the intent of our recommendation. Chairwoman Torres Small, Ranking Member Crenshaw, and Members of the Subcommittee, this completes my prepared statement. I would be happy to respond to any questions you or the members of the committee may have.

GAO Contacts and Staff Acknowledgments

If you or your staff have any questions about this statement, please contact Rebecca Gambler at (202) 512-8777 or gamblerr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Adam Hoffman (Assistant Director), Bryan Bourgault, Sasan J. "Jon" Najmi, and Michelle Serfass.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study

CBP is responsible for securing U.S. borders and employs nearly 45,000 law enforcement officers across its three operational components at and between U.S. ports of entry, in the air and maritime environment, and at certain overseas locations. In recent years, CBP has not attained target staffing levels for its law enforcement positions, citing high attrition rates in some locations, a protracted hiring process, and competition from other law enforcement agencies. This statement addresses CBP's efforts to (1) recruit and more efficiently hire law enforcement applicants, and (2) retain law enforcement officers. This statement is based on a GAO report issued in June 2018 on CBP's recruiting, hiring, and retention efforts along with updates as of February 2019 on actions CBP has taken to address GAO's prior recommendation. For the previous report, GAO analyzed CBP data on recruitment efforts, hiring process steps, and retention rates; examined strategies related to these activities; and interviewed CBP officials and union groups. GAO also reviewed information on CBP actions to implement GAO's prior recommendation.

What GAO Found

In June 2018, GAO reported that U.S. Customs and Border Protection (CBP) increased its emphasis on recruitment by establishing a central recruitment office in 2016 and increasing its participation in recruitment events, among other things. As a result, the number of applications it received for law enforcement positions across its operational components—the Office of Field Operations, U.S. Border Patrol, and Air and Marine Operations—more than tripled from fiscal years (FY) 2013 through 2017. Also, in November 2017, CBP hired a contractor to more effectively target potential applicants and better utilize data to enhance CBP's recruitment and hiring efforts.
However, at the time of GAO's June 2018 report, it was too early to gauge whether the contractor would be effective in helping CBP to achieve its goal to recruit and hire more law enforcement officers. CBP improved its hiring process as demonstrated by two key metrics—reducing its time-to-hire and increasing the percentage of applicants that are hired. As shown in the table, CBP's time-to-hire decreased from FY 2015 through 2017. CBP officials stated that these improvements, paired with increases in applications, have resulted in more hires. However, the hiring process remains lengthy. For example, in FY 2017, CBP officer applications took more than 300 days, on average, to process. Certain factors contributed to the lengthy time-to-hire, including process steps that can be challenging and time-consuming for applicants to complete—such as the polygraph exam—as well as CBP's reliance on applicants to promptly complete certain aspects of the process—such as submitting their background investigation form. CBP enhanced its efforts to address retention challenges. However, staffing levels for law enforcement positions consistently remained below target levels. For example, CBP ended FY 2017 more than 1,100 CBP officers below its target staffing level. CBP officials cited employees' inability to relocate to more desirable locations as the primary retention challenge. CBP offered some relocation opportunities to law enforcement personnel and has pursued the use of financial incentives and other payments to supplement salaries, especially for those staffed to remote or hard-to-fill locations. However, retaining law enforcement officers in hard-to-fill locations continues to be challenging for CBP. GAO reported that CBP could be better positioned to understand its retention challenges and take appropriate action to address them by implementing a formal process for capturing information on all departing employees. 
In response, CBP officials reported taking steps to implement a CBP-wide exit survey and plan to analyze the results of the survey quarterly, beginning April 2019.

What GAO Recommends

GAO recommended in its June 2018 report that CBP systematically collect and analyze data on departing law enforcement officers and use this information to inform retention efforts. DHS concurred, and CBP has actions planned or underway to address this recommendation.
Background

Overview of the EDA Program

The EDA program is one of several programs designed to build partner capacity through the provision of excess defense equipment and services to foreign governments or international organizations such as the North Atlantic Treaty Organization (NATO). These excess items are provided as part of U.S. security assistance efforts and help to support U.S. foreign policy and national security objectives. The Foreign Assistance Act permits the transfer of excess defense articles provided that such transfers will not adversely affect the industrial base. In particular, under the Act, transfers must not reduce the opportunity for U.S. contractors to sell new or used defense equipment to countries requesting the transfer. Excess defense items can include aircraft, ammunition, clothing, radios, trucks, and spare parts. According to DOD officials, the vast majority of EDA items are low- to medium-level technologies that, if not transferred, would either be stored at cost to DOD or destroyed. Excess defense items can be transferred as grants—as permitted by the Foreign Assistance Act—or sold to eligible foreign governments at a reduced cost in "as is, where is" condition pursuant to the Arms Export Control Act. This means that the requesting foreign government is generally required to pay all repair or refurbishment costs, as well as all costs associated with transporting the EDA item—which can be located in the United States or outside the continental United States. As previously mentioned, for purposes of this report, transfers refer to grants of EDA items unless otherwise indicated. DSCA has overall responsibility for administering the EDA program. The Director of DSCA has been delegated authority to make the determination on whether a proposed transfer could adversely affect the industrial base.
The military departments determine when defense items are no longer needed and can designate them as excess and, upon approval, can offer them as EDAs. Multiple federal entities play a role in the EDA program, as illustrated in figure 1. Following the interagency coordination, if DSCA determines the proposed transfer will not adversely affect industry and thus can proceed, DSCA notifies Congress about proposed transfers that are valued at over $7 million or that contain significant military equipment. As part of the congressional notification, DSCA provides information on (1) the purpose for which the item is being provided to the country, (2) whether the item has been previously provided to the country, (3) the current value and original acquisition value of the item, and (4) its findings regarding how industry will be affected by the proposed transfer. After a 30-day congressional notification period, DSCA authorizes the proposed transfer in consultation with State, provided that Congress does not object and all agencies concur with the transfer. DSCA follows the same process to review and approve all proposed EDA transfers—including for excess Humvees. One unique difference for Humvee transfers is a 2018 legislative requirement that Humvees be modernized with an armored or armor-capable crew compartment and a new modernized powertrain prior to a transfer, unless a waiver is granted.

DOD Humvee Procurement and Sustainment

Humvees, which are four-wheel drive military light trucks, have been part of DOD's light tactical wheeled vehicle fleet since the 1980s. While the Army is the program office for Humvees, the vehicles have been used by other military departments in support of their own combat operations. Humvees were initially fielded to serve as light, highly mobile, and unarmored vehicles and are commonly used for combat operations; however, the Army National Guard also procures these vehicles for use in homeland defense and natural disaster relief operations.
In efforts to adapt the Humvee to modern requirements for combat operations, the Army has increased the performance and protection of the vehicle over time. Over the past 30 years, AM General has produced three models—the M900, M1000, and M1100 series. The company no longer produces the M900 and M1000 series for combat operations, and certain parts and components that are unique to these vehicles are obsolete or otherwise not readily available. The M1100 series, which is still in production and supports combat operations and many non-combat related operational and support missions, offers newer capabilities such as increased weight capacity. With the additional weight capacity, the M1100 series is the only model that can support the added armor requirements under the new legislative requirement without a substantial overhaul. Figure 2 highlights some of the capabilities of the different Humvee models. DOD's light tactical wheeled vehicle strategy has changed since 2010, following lessons learned from military operations in Iraq and Afghanistan. DOD plans to shift from procuring new Humvees to sustaining existing vehicles in its fleet. In its 2014 Tactical Wheeled Vehicle Strategy, the Army stated plans to buy fewer new Humvees because the vehicle no longer fully meets its evolving mobility or protection requirements. While DOD decreased its procurement of Humvees for military operations, it has plans to upgrade and refurbish existing vehicles. Nearly 300,000 Humvees, or vehicles built on the Humvee chassis, are operated globally by the U.S. military and foreign governments. These vehicles are expected to require ongoing maintenance and upgrades for the next 20 to 30 years. DOD routinely conducts industrial base risk assessments to gain insight on the viability of current suppliers to meet its current and future requirements.
The assessment takes into account a range of considerations including (1) factors that could cause a current supplier to go out of business or exit the market and (2) the extent to which an existing supplier relies on DOD, foreign military sales, or commercial sales. While these assessments are not routinely conducted as part of the EDA transfer process, they may be undertaken to provide input on EDA transfers, as needed. The Army has efforts underway to acquire a new vehicle—the Joint Light Tactical Vehicle (JLTV)—to meet its future requirements. Although a different manufacturer was awarded the JLTV contract, in its industrial base risk assessment for this requirement, the Army stated it intends to maintain two manufacturers—including AM General—to meet its ongoing needs for light tactical wheeled vehicles. In a 2018 congressional briefing, the Army's Acquisition, Logistics, and Technology Command estimated maintaining a relatively even mix of both vehicles—54,810 Humvees from existing inventory and 49,099 new JLTVs—to sustain operations for the foreseeable future. However, the Army is conducting a more comprehensive review of its light tactical vehicle requirements and plans to release its findings in an updated acquisition strategy expected in 2022.

DSCA Approved Almost Half of Humvee Requests to Aid Foreign Governments' Security Needs, but Approvals Have Halted Since 2017

DOD approved nearly half of the total Humvees requested by foreign governments for fiscal years 2012 through 2018. The requests were in support of foreign governments' security efforts, such as counterterrorism. However, the number actually delivered was lower than the number approved because DOD decreased the number or foreign governments canceled their requests for various reasons. DSCA has halted approvals of EDA Humvee requests since the start of fiscal year 2017 and has raised concerns about the new statutory requirement to modernize Humvees prior to transfer.
Nearly Half of EDA Humvee Transfer Requests Were Approved but Number Delivered Was Reduced for Various Reasons

DOD approved nearly half of the total Humvees requested by foreign governments for fiscal years 2012 through 2018—7,612 vehicles of the 16,005 excess Humvees requested—but has not approved Humvee requests made since the start of fiscal year 2017. Figure 3 shows the number of Humvees requested and approved for transfer each fiscal year.

The Majority of Humvee Requests Came from the Middle East and Africa Regions

In our analysis of data provided by the Army and DSCA, we found that, from fiscal years 2012 through 2018, 23 countries submitted requests for Humvees, including some requests in fiscal year 2018. The delivery of EDA items under the Foreign Assistance Act to certain countries is given priority to the maximum extent feasible. These countries include certain NATO countries, major non-NATO allies in the Middle East and Africa regions, and the Philippines. We found that the Middle East and Africa regions accounted for 75 percent of the vehicles requested over this period. Figure 4 shows the regional distribution of requests. Requests for Humvees from countries in the Middle East and Africa regions were primarily to support various security-related missions. For example, one country requested excess Humvees for border security, counter-smuggling, and counterterrorism operations. Such security-related efforts by foreign countries align with the U.S. 2018 National Defense Strategy, which states DOD's objective to prevent terrorism globally and aid U.S. foreign partners in their counterterrorism efforts. Additionally, the strategy aims to strengthen alliances and attract new partners by increasing interoperability to work together and effectively achieve military objectives.
DSCA is required to state the comparative foreign policy benefits that the United States would gain from a grant transfer rather than a sale when it notifies Congress about a proposed transfer. In the documents we reviewed, DSCA cited foreign policy benefits such as increasing the capability of countries to take on a greater share of military operations, supporting joint operations with NATO, or counterterrorism and counter-narcotics operations. For example, for one request, DSCA determined that a requested transfer was in the U.S. national interest, as equipping the foreign country's armed forces with Humvees would allow them to have an increased role in military operations in the Africa region. In turn, this would reduce the country's reliance on U.S. forces for NATO operations. In addition to requesting vehicles for security-related operations, some countries planned to use vehicles for spare parts or had plans to refurbish the vehicles on their own. We found that about two-thirds of the Humvees delivered through the EDA program from fiscal years 2012 through 2018 were older models—either M900 or M1000 series—rather than the newer M1100 series. Most countries receiving deliveries of older models were seeking to replace existing vehicles in their fleet or to use EDA Humvees for spare parts.

DSCA Halted Approvals and Cited Challenges Regarding New Statutory Requirements

As previously mentioned in this report, DSCA has not approved any EDA Humvee requests since the start of fiscal year 2017. One reason, according to our analysis of DSCA data, is the manufacturer's objections to proposed transfers. Another is the legislative provision in the Fiscal Year 2018 NDAA that requires Humvees to be modernized with an armored or armor-capable crew compartment and a new, modernized powertrain prior to transfer.
The corresponding conference report stated the conferees’ expectation that any modernization and refurbishment work must generally be done at no cost to DOD. According to DOD, the cost to modernize would be incurred by the requesting foreign government. Since the provision’s enactment, DOD has not exercised the authority to waive this legislative requirement for any Humvee request. Foreign governments have not been willing to pay for the modernization, so approvals have halted. Since the enactment of the modernization requirement in December 2017, DSCA has received requests for 4,103 Humvees. According to DSCA officials, when a foreign government submits a letter of request for EDA Humvees, DSCA notifies the country of the modernization requirement and its responsibility to pay for the cost to refurbish the vehicles in accordance with the law. In DOD documents we reviewed, foreign governments cited having limited budgets and being financially unable to purchase defense equipment such as Humvees. As such, they rely on the EDA program to acquire defense items. DSCA officials told us that the modernization work is to be done at no cost to the U.S. government; however, they added that paying the cost to modernize Humvees can be cost-prohibitive for foreign governments. Foreign governments can request, through DSCA, that the modernization requirement be waived. Since December 2017, according to DSCA officials, DSCA has received waiver requests from three foreign governments but has not exercised the waiver authority. According to DSCA officials, these requests likely will remain unapproved for the foreseeable future; however, the provision requiring the refurbishment of excess Humvees prior to transfer is set to expire in December 2020. According to DSCA officials, DSCA plans to resume its normal EDA approval process thereafter. 
According to DSCA officials, DSCA is currently encouraging foreign governments to look at other options to meet their fleet requirements, including purchasing new Humvees. However, DSCA officials acknowledge that, if a foreign government cannot afford to buy new vehicles, DOD does not have any low-cost vehicles to offer as an alternative solution. Moreover, DOD officials and Army documents we reviewed noted that even if foreign governments were able to independently fund the modernization costs, there are not sufficient quantities of the newer model Humvees—M1100 series—in inventory that can support the additional weight of the added armored capabilities for the modernized crew compartment. According to DSCA documentation, the EDA program has a little over a hundred vehicles that could be refurbished to the modernization requirements. Additionally, most of the Humvees in DOD's inventory are older models that would first require a new expanded vehicle chassis to withstand the weight of adding armor. The officials likened the modernization process for the older model Humvees to essentially building a whole new vehicle.

Determinations of Adverse Industrial Base Effects Are Driven by Increasing Objections from the Manufacturer, but Mitigation Actions Have Been Taken

DSCA's determinations of whether proposed Humvee transfers would adversely affect the industrial base are largely based on objections from the manufacturer about the proposed transfers. Since 2015, the Humvee manufacturer has objected more frequently to the transfer of vehicles to foreign governments. In all but one instance when the manufacturer objected to a transfer, we found that DSCA and BIS took steps to address concerns of the Humvee manufacturer and reach a resolution, such as providing the manufacturer Humvee refurbishment work.
Manufacturer's Objection Is the Primary Factor in DSCA's Determination

DSCA's decision on whether a proposed transfer of Humvees would adversely affect the industrial base is largely based on the manufacturer's perspective on the proposed transfer. DSCA has considerable latitude for such decisions as the Foreign Assistance Act, as delegated, does not specify how determinations should be made on whether proposed transfers could adversely affect U.S. industries. Historically, DSCA has sought input from BIS to aid its determination about potential industrial base effects of proposed transfers. According to DSCA officials, all proposed EDA Humvee transfers have undergone an assessment of adverse industrial base effect by BIS. We found that BIS actively engages the Humvee manufacturer on proposed transfer requests and supported all but one objection from fiscal years 2012 through 2018. BIS's standard practice is to collect information from the prime contractor and other suppliers to inform its recommendation to DSCA about possible industrial base effects. As part of its efforts regarding proposed Humvee transfers, BIS notified AM General and provided information on all the transfer requests including the requesting country; number of vehicles requested; the vehicle model; and the country's plans, if known, to repair or upgrade EDA vehicles, including who the country intends to select for such work. BIS officials told us that they request a response within 7 calendar days on whether the manufacturer supports or objects to the proposed transfer. In instances where the Humvee manufacturer objected to a transfer, BIS required that the manufacturer provide an explanation of its objection. In documents we reviewed, the manufacturer objected for various reasons, including that a transfer would: (1) directly interfere with ongoing marketing or planned sales to the requesting country, or (2) adversely affect its business and that of its suppliers.
BIS’s standard procedure is to request proof of ongoing sales efforts if a company states that a proposed transfer will interfere with potential sales. In the cases where the Humvee manufacturer cited ongoing or planned business development with a requesting country, BIS required that the manufacturer provide information of its ongoing efforts to sell its vehicles to the requesting country, including: documentation of recent or planned meetings with foreign government officials and a timeline of the meetings; export licenses; and business plans. If a manufacturer submits an objection, BIS will also check if they have registered business activity with Commerce’s Advocacy Center, which provides assistance to defense companies pursuing contracts with overseas governments and government agencies. If BIS concludes the Humvee manufacturer has a basis for its objection due to ongoing business with the requesting country, it will recommend that DSCA not authorize the transfer. According to DSCA officials, this is largely because it considers the possibility that the transfer could dissuade requesting foreign governments from purchasing new or used vehicles. Thus, providing vehicles through the EDA program at no cost or a discounted price to a foreign government could siphon potential business from the manufacturer or could compete with the manufacturer’s sales efforts. Under the Foreign Assistance Act, a transfer request cannot be fulfilled if doing so will interfere with the manufacturer’s ability to sell equipment to the requesting country. During fiscal years 2012 through 2018, we found only one instance where DSCA, based on BIS’s recommendation, did not support AM General’s objection. In that case—a request for Humvees from Albania—DSCA moved forward and approved a Humvee transfer because the manufacturer could not demonstrate ongoing business with the requesting country. 
Humvee Manufacturer Has Increasingly Objected to Transfers, Leading to Delays in Providing Vehicles to Requesting Countries AM General has objected more frequently to the transfer of vehicles to foreign governments since March 2015. In 2015, the JLTV production contract was awarded to another contractor and the Humvee manufacturer sold its commercial automotive plant, both of which occurred in the wake of decreasing or nonrecurring DOD Humvee procurements in comparison to past years. In total, the Humvee manufacturer challenged 11 transfer requests for over 4,000 vehicles between fiscal years 2015 and 2018. The manufacturer told us that the increasing number of proposed transfers is concerning because the transfers amount to nearly 3 years' worth of new vehicles it could produce to sustain its production lines. AM General representatives told us they will continue to object to the transfer of older Humvee vehicles (M900 and M1000 models). For these models, the representatives cited concerns that parts are no longer in production, and thus the manufacturer cannot ensure qualified parts are available for maintenance and repairs. They are also concerned that older vehicles have a propensity to break down, which could damage the Humvee brand internationally—particularly if counterfeit parts are used. In our review of documents describing requesting countries' use of vehicles, we found that older model vehicles are, at times, accepted by foreign governments to use as spare parts to maintain an existing fleet and to develop their workforce's capability to repair vehicles. However, we found that since 2015, the majority of vehicles to which the manufacturer objected were the newer M1100 models—stemming largely from a single 2016 request for Afghanistan.
In support of its objections, AM General has stated that its own international sales are an important source of revenue, particularly because DOD has reduced its procurement of Humvees. AM General representatives explained that proposed transfers through the EDA program can threaten the company's potential future sales to foreign governments, which may be less likely to purchase new Humvees if DSCA approves transfers of used vehicles. According to the manufacturer, each transfer is a potential one-for-one reduction of a possible sale of a new vehicle to the requesting country, which can affect its bottom line as well as the suppliers that provide parts and materials to produce the Humvees. In our review of Army procurement data, we found that many countries that requested excess Humvees did not purchase new ones through the FMS program from fiscal years 2012 through 2018. DSCA officials told us that most of the countries requesting Humvees through the EDA program find it cost-prohibitive to purchase new Humvee vehicles directly from the manufacturer. A new Humvee can cost between $115,000 and $190,000 depending on the model and capabilities included. As a result, these countries rely on EDA Humvees provided through grants to sustain their military fleets. Figure 5 shows the number of Humvees procured by DOD relative to the number of vehicles foreign governments bought through the FMS program and those they were granted via the EDA program. We found that from fiscal years 2012 to 2018, AM General's objections to proposed EDA Humvee transfers increased the time it takes for DSCA and BIS to review requests and make their determinations. If the manufacturer did not object to a transfer, which was largely the case prior to March 2015, BIS provided its recommendation to DSCA, on average, within 21 days. However, our analysis showed that when the manufacturer objected, addressing the objection added approximately 152 days, on average, to the review.
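The timing comparison above reduces to a difference of average review durations between objected and non-objected requests. A minimal sketch, using illustrative numbers rather than GAO's actual case records:

```python
from statistics import mean

def added_review_days(cases):
    # cases: list of (objected, review_days) pairs; this data structure is
    # hypothetical, chosen only to illustrate the comparison.
    objected = [days for obj, days in cases if obj]
    not_objected = [days for obj, days in cases if not obj]
    # Average added time = mean duration with an objection minus
    # mean duration without one.
    return mean(objected) - mean(not_objected)

# Illustrative data only: two objected and two non-objected reviews.
sample = [(True, 180), (True, 166), (False, 20), (False, 22)]
print(added_review_days(sample))
```

With this illustrative sample, the objected reviews average 173 days against 21 days for the others, a difference of 152 days.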
DSCA officials acknowledged that the approval process can be prolonged when the manufacturer objects to a proposed transfer, potentially contributing to longer waiting periods for requesting countries to receive the Humvees. In addition, the longer that vehicles remain in storage, the more likely it is that they will require more repairs to make them operational, resulting in increased costs to the requesting foreign governments to refurbish them, according to a DSCA official. Manufacturer representatives also told us they want to be involved earlier in the process to provide input on the potential effects of proposed transfers. We found that, on average, DSCA notifies BIS about 4 months after a country submits its Humvee request and BIS reaches out to the manufacturer a day or two later. A DSCA official explained that it can be a challenge to involve the manufacturer earlier because the request is not fully stable and could be revised for a number of reasons, including countries canceling the request or changing requirements to obtain different capabilities, and DOD internal policy considerations need to be vetted before reaching out to the manufacturer. Agencies Took Steps to Address Manufacturer’s Concerns In recent years, DSCA and BIS have taken steps to address AM General’s increasing objections to proposed transfers. In 2018, BIS modified its approach to assess adverse effects of Humvee transfers to consider an additional factor. Now, BIS considers the cumulative effect and totality of previous EDA Humvee requests, in addition to assessing each request on a case-by-case basis. According to BIS officials, this was in response to the pattern of consistent objections that they were receiving from the Humvee manufacturer. AM General acknowledged that communication with DSCA and BIS about Humvee EDA transfers has improved. 
For example, DSCA notified AM General about its decision to sustain the company's objection and, thus, not move forward on a transfer request made in July 2019 for 2,000 vehicles. AM General told us that in the past, DSCA did not notify the company about whether it had sustained or overruled its objection to a proposed transfer. AM General's objections to EDA Humvee transfers have at times led to additional business channels for the Humvee manufacturer. For example, we found that the manufacturer received business opportunities from EDA Humvee transfers to Afghanistan, Iraq, Jordan, and Thailand that included providing long-term sustainment and refurbishment of Humvees, among other things. As a result, the contractor withdrew over a third of its objections between fiscal years 2012 and 2018 after receiving this type of work or reaching agreements with foreign governments to provide fully operational Humvees. The remaining transfers were cancelled; put on hold pending resolution with the Humvee manufacturer; or in one case, moved forward with an objection in place. The agreements to provide additional support can be financially beneficial to the manufacturer and help sustain its production capabilities. For example, we found that in 2012, the Humvee manufacturer objected to a country's transfer request of 250 vehicles, but withdrew its objection after reaching an agreement with the foreign government to perform much of the refurbishment work for those vehicles. In another case, we found that for the 2016 proposed transfer of 2,461 vehicles to support the Afghanistan National Security Force, the Humvee manufacturer objected, citing concerns about the large number of vehicles requested, among other concerns (see sidebar).
The proposed transfer of EDA Humvees to Afghanistan was requested by DOD after a 2016 Senate report expressed concerns about a lack of insight into the cost-benefit analysis of procuring new equipment instead of refurbishing excess equipment. In response to the proposed transfer, the manufacturer sent a letter to BIS outlining its anticipated role in the Afghanistan transfer, including obtaining Army contracts to add armor kits to EDA vehicles, providing new powered chassis, and if required, new Humvees. The letter also noted the Humvee manufacturer's withdrawal of its objection to the transfer. DSCA subsequently notified the manufacturer that it did not agree with the terms AM General outlined in the letter to BIS and specified that the proposed transfer would create business opportunities for U.S. industry, including AM General, to refurbish EDA Humvees. DSCA also added that it would continue to ensure that industry is notified of all proposed Humvee EDA transfer requests so that industry can provide input or express concerns. According to DOD officials, the number of Humvees available for transfer to Afghanistan was reduced because DOD decided to split the number of available EDA Humvees in inventory at the time to meet requirements in both Afghanistan and Iraq. In total, 1,644 vehicles were identified for transfer to Afghanistan. As part of this effort, according to information we received from the Army, AM General, and the Office of the Undersecretary of Defense for Policy, the Humvee manufacturer was awarded a contract to provide armor kits for the 1,644 EDA Humvees being refurbished by the Army's Red River Depot. The manufacturer also provided other vehicle parts as part of the EDA transfer request for Afghanistan.
According to DOD officials, DOD does not currently have plans to transfer additional vehicles to Afghanistan to provide the remaining EDA vehicles requested as part of the 2016 transfer request, and will reevaluate future Afghanistan requirements as needed. Agency Comments We provided a draft of this report to the Departments of Commerce and Defense for review and comment. Both agencies provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees and the Secretaries of the Departments of Commerce and Defense. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or makm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. Appendix I: Objectives, Scope, and Methodology This report provides information about (1) DOD's approval of grant transfers of excess High Mobility Multipurpose Wheeled Vehicles (HMMWV)—commonly pronounced Humvees—requested by foreign governments from fiscal years 2012 through 2018 and (2) how the Humvee manufacturer's perspectives on the proposed transfers have been addressed by DOD as part of the determination of any adverse industrial base effects. To provide information about DOD's approval of transfers of excess Humvees, we analyzed data for fiscal years 2012 through 2018 (the most recent available fiscal year at the time of our review) from the U.S. Army Security Assistance Command, Defense Security Cooperation Agency (DSCA), and the Defense Logistics Agency (DLA). These data provided insight about the countries and geographic regions that have requested excess defense article (EDA) Humvees as well as the condition and types of vehicles delivered to foreign governments.
We also reviewed documentation provided by requesting countries to identify the intended purpose of the request. We interviewed agency officials responsible for the data to identify the quality controls in place to help ensure the data are accurate and reliable. To assess the reliability of each data source, we compared the data in each DOD component's data sets to ensure that the information was complete and consistent. We did this by identifying common identifiers used for the Humvee EDA transfers that occurred within the designated 7-year period. According to DSCA officials, the DSCA EDA database is a consolidation of data provided annually by the military departments and DLA, and is manually entered into the database by DSCA officials. Furthermore, we reviewed the data for issues such as missing data elements and duplicates, among other steps. Based on these steps, we determined the data were sufficiently reliable for the purposes of reporting information about EDA Humvee transfer requests. See table 1 for the DOD data sources used to track information on excess defense articles. To provide information about how the Humvee manufacturer's perspectives on the proposed transfers have been addressed by DOD as part of the determination of any adverse industrial base effects, we reviewed documents and data, and interviewed officials from DSCA and the Bureau of Industry and Security (BIS) within the Commerce Department, which advises DSCA on industry effects of proposed EDA transfers. For purposes of this report, unless otherwise indicated, transfers refers to grants of EDA under the Foreign Assistance Act. We reviewed BIS policies and procedures related to the EDA program to identify the factors BIS considers in making adverse effect determinations. We also reviewed data generated by BIS to identify the extent to which the Humvee manufacturer objected to proposed transfers for the 7-year period included in our review.
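The data reliability steps described above, comparing common transfer identifiers across components' data sets and screening for duplicates and missing elements, can be sketched as follows. This is a minimal illustration with hypothetical field names and record structures, not GAO's actual procedure:

```python
def reliability_checks(dsca_records, army_records, required_fields):
    # Flag duplicate transfer identifiers within one data set.
    dsca_ids = [r["transfer_id"] for r in dsca_records]
    duplicates = {i for i in dsca_ids if dsca_ids.count(i) > 1}
    # Flag identifiers present in one component's data but not the other's.
    unmatched = set(dsca_ids) - {r["transfer_id"] for r in army_records}
    # Flag records with empty or missing required data elements.
    incomplete = [r["transfer_id"] for r in dsca_records
                  if any(not r.get(field) for field in required_fields)]
    return duplicates, unmatched, incomplete
```

Each returned set or list would then be investigated manually, since an unmatched identifier could reflect either a data entry error or a legitimate difference in coverage between the two sources.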
We also reviewed data provided by the Army on the number of Humvees procured for the Army's use and on vehicles sold to foreign governments through the Foreign Military Sales program from fiscal years 2012 through 2018. To gain insight about DSCA's and BIS's approach to assessing industrial base effects of proposed transfers, we selected two transfer requests as illustrative case studies: a 2016 transfer for Afghanistan, which was the single largest proposed transfer, and a 2016 transfer for Albania, the only proposed transfer for which BIS did not sustain the manufacturer's objection. We also spoke with representatives from AM General to obtain their perspectives on the EDA program and gain insight about the effect of EDA transfers on their business. We conducted this performance audit from February 2019 to February 2020 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments Marie A. Mak, (202) 512-4841 or makm@gao.gov In addition to the contact named above, Candice Wright (Assistant Director) and Sameena Ismailjee (Analyst-in-Charge) managed this review. James McCully, Lorraine Ettaro, Phillip Farah, Stephanie Gustafson, Miranda Riemer, and Roxanna Sun made significant contributions to this report.
Why GAO Did This Study DOD can declare defense equipment as excess to U.S. military needs and make it available for transfer as a grant or sale to foreign governments. The Foreign Assistance Act of 1961 authorizes these transfers as grants provided that they do not adversely affect the U.S. national technology and industrial base, among other things. In this regard, transfers pursuant to the Act must not limit U.S. companies' ability to sell new or used defense equipment to countries requesting the transfer. The 2018 NDAA generally requires that Humvees be modernized with a new powertrain and armor prior to transfer. The Act also generally requires GAO to report on proposed and completed Humvee transfers and the process to determine if transfers will adversely affect the industrial base. This report provides information on (1) excess Humvees requested and approved during fiscal years 2012 through 2018 and (2) how the Humvee manufacturer's perspectives on the proposed transfers have been addressed by DOD as part of the determination of any adverse industrial base effects. GAO analyzed the latest DOD data on EDA Humvee transfers from fiscal years 2012 through 2018; reviewed DOD policies, guidance, and documents to gain insight into the process for determining industrial base effects of proposed transfers; and interviewed agency officials and Humvee manufacturer representatives. What GAO Found Excess High Mobility Multipurpose Wheeled Vehicles (HMMWV)—commonly pronounced Humvees—are among thousands of items that the Department of Defense (DOD) can transfer to foreign governments at their request through the Excess Defense Articles (EDA) program. Twenty-three countries, primarily from the Middle East and Africa, requested 16,005 Humvees for the 7-year period GAO reviewed. DOD approves such requests if it determines: excess U.S. inventory is available at the time of the request, the request aligns with U.S.
foreign policy objectives, such as using the vehicles to help combat terrorism, and the U.S. industrial base will not be adversely affected by the transfer. For example, DOD approved a country's request for excess Humvees for border security, counter-smuggling, and counter-terrorism efforts. DOD approved nearly half of the total Humvees requested for fiscal years 2012 through 2018 (see figure). However, DOD has halted further approvals since the start of fiscal year 2017 due to concerns expressed by the Humvee manufacturer and language in the FY 2018 National Defense Authorization Act (2018 NDAA) and conference report that generally says Humvees must be modernized at no cost to DOD. GAO found that DOD considered the Humvee manufacturer's perspectives on proposed transfers and generally took steps to mitigate concerns about transfers that could siphon potential business from the manufacturer or compete with its sales efforts. Further, GAO found that generally, when the manufacturer objected to a transfer, the manufacturer withdrew its objection after receiving business opportunities to repair or upgrade vehicles for DOD or a requesting government's fleet. DOD officials also noted that most of the countries requesting Humvees through the EDA program find it cost-prohibitive to purchase new Humvees directly from the manufacturer. As a result, these countries rely on EDA Humvees to sustain their military fleets.
Background The LDA defines a lobbyist as an individual who is employed or retained by a client for compensation for services that include more than one lobbying contact (written or oral communication to covered officials, such as a high ranking agency official or a Member of Congress made on behalf of a client), and whose lobbying activities represent at least 20 percent of the time that he or she spends on behalf of the client during the quarter. Lobbying firms are persons or entities that have one or more employees who lobby on behalf of a client other than that person or entity. The LDA requires lobbyists to register with the Secretary of the Senate and the Clerk of the House, and to file quarterly reports disclosing their respective lobbying activities. Lobbyists are required to file their registrations and reports electronically with the Secretary of the Senate and the Clerk of the House through a single entry point. Registrations and reports must be publicly available in downloadable, searchable databases from the Secretary of the Senate and the Clerk of the House. No specific statutory requirements exist for lobbyists to generate or maintain documentation in support of the information disclosed in the reports they file. However, guidance issued by the Secretary of the Senate and the Clerk of the House recommends that lobbyists retain copies of their filings and documentation supporting reported income and expenses for at least 6 years after they file their reports. Figure 1 provides an overview of the registration and filing process. Lobbying firms are required to register with the Secretary of the Senate and the Clerk of the House for each client if the firms receive or expect to receive more than $3,000 in income from that client for lobbying activities. Lobbyists are also required to submit an LD-2 quarterly report for each registration filed. 
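As a rough illustration, the registration tests described above can be expressed as simple predicates. This is a sketch only; the statutory definitions contain nuances (such as the 45-day registration window and rules for aggregating income across activities) that are not captured here, and the function and parameter names are our own:

```python
def meets_lobbyist_definition(compensated, lobbying_contacts,
                              pct_time_on_lobbying):
    # LDA lobbyist test as summarized above: employed or retained for
    # compensation, more than one lobbying contact, and lobbying
    # activities are at least 20 percent of time spent for the client
    # during the quarter.
    return (compensated
            and lobbying_contacts > 1
            and pct_time_on_lobbying >= 20)

def firm_must_register(expected_income_from_client):
    # Firms register per client when they receive or expect to receive
    # more than $3,000 in income from that client for lobbying activities.
    return expected_income_from_client > 3000
```

For example, an individual with a single lobbying contact would not meet the definition regardless of time spent, and a firm expecting exactly $3,000 from a client would fall just under the registration threshold.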
The LD-2s contain information that includes: the name of the lobbyist reporting on quarterly lobbying activities; the name of the client for whom the lobbyist lobbied; a list of individuals who acted as lobbyists on behalf of the client during the reporting period; whether any lobbyists served in covered positions in the executive or legislative branch, such as high-ranking agency officials or congressional staff positions, in the previous 20 years; codes describing general lobbying issue areas, such as agriculture and education; a description of the specific lobbying issues; houses of Congress and federal agencies lobbied during the reporting period; and reported income (or expenses for organizations with in-house lobbyists) related to lobbying activities during the quarter (rounded to the nearest $10,000). The LDA requires lobbyists to report certain political contributions semiannually in the LD-203 report. These reports must be filed 30 days after the end of a semiannual period by each lobbying firm registered to lobby and by each individual listed as a lobbyist on a firm's lobbying report.
The lobbyists or lobbying firms must: list the name of each federal candidate or officeholder, leadership political action committee, or political party committee to which he or she contributed at least $200 in the aggregate during the semiannual period; report contributions made to presidential library foundations and presidential inaugural committees; report funds contributed to pay the cost of an event to honor or recognize an official who was previously in a covered position, funds paid to an entity named for or controlled by a covered official, and contributions to a person or entity in recognition of an official, or to pay the costs of a meeting or other event held by or in the name of a covered official; and certify that they have read and are familiar with the gift and travel rules of the Senate and House, and that they have not provided, requested, or directed a gift or travel to a Member, officer, or employee of Congress that would violate those rules. The LDA also requires that the Secretary of the Senate and the Clerk of the House guide and assist lobbyists with the registration and reporting requirements and develop common standards, rules, and procedures for LDA compliance. The Secretary of the Senate and the Clerk of the House review the guidance annually. It was last revised January 31, 2017, to, among other issues, revise the registration threshold to reflect changes in the Consumer Price Index and clarify the identification of clients and covered officials and issues related to rounding income and expenses. The guidance provides definitions of LDA terms, elaborates on registration and reporting requirements, includes specific examples of different disclosure scenarios, and provides explanations of why certain scenarios prompt or do not prompt disclosure under the LDA.
The offices of the Secretary of the Senate and the Clerk of the House told us they continue to consider information we report on lobbying disclosure compliance when they periodically update the guidance. In addition, they told us they email registered lobbyists quarterly on common compliance issues and reminders to file reports by the due dates. The Secretary of the Senate and the Clerk of the House, along with USAO, are responsible for ensuring LDA compliance. The Secretary of the Senate and the Clerk of the House notify lobbyists or lobbying firms in writing when they are not complying with LDA reporting requirements. Subsequently, they refer those lobbyists who fail to provide an appropriate response to USAO. USAO researches these referrals and sends additional noncompliance notices to the lobbyists or lobbying firms, requesting that they file reports or terminate their registration. If USAO does not receive a response after 60 days, it decides whether to pursue a civil or criminal case against each noncompliant lobbyist. A civil case could lead to penalties up to $200,000 for each violation, while a criminal case—usually pursued if a lobbyist's noncompliance is found to be knowing and corrupt—could lead to a maximum of 5 years in prison. Lobbyists Generally Demonstrate Compliance with Disclosure Requirements Lobbyists Filed Disclosure Reports as Required for Most New Lobbying Registrations Generally, under the LDA, within 45 days of being employed or retained to make a lobbying contact on behalf of a client, the lobbyist must register by first filing an LD-1 form with the Secretary of the Senate and the Clerk of the House. Thereafter, the lobbyist must file quarterly disclosure (LD-2) reports that detail the lobbying activities, including filing a first report for the quarter in which the lobbyist registered.
Of the 3,618 new registrations we identified for the third and fourth quarters of 2017 and the first and second quarters of 2018, we matched 3,329 of them (92.01 percent) to corresponding LD-2 reports filed within the same quarter as the registration. These results are consistent with the findings we have reported in prior reviews. We used the House lobbyists’ disclosure database as the source of the reports. We also used an electronic matching algorithm that allows for misspellings and other minor inconsistencies between the registrations and reports. Figure 2 shows lobbyists filed disclosure reports as required for most new lobbying registrations from 2010 through 2018. As part of their regular enforcement procedures, the Clerk of the House and the Secretary of the Senate are to follow up with newly filed registrations where quarterly reports were not filed. If the Clerk of the House and the Secretary of the Senate are unsuccessful in bringing the lobbyist into compliance, they may refer those cases to USAO as described earlier in figure 1. For Most LD-2 Reports, Lobbyists Provided Documentation for Key Elements, Including Documentation for Their Income and Expenses For selected elements of lobbyists’ LD-2 reports that can be generalized to the population of lobbying reports, our findings have generally been consistent from year to year. Most lobbyists reporting $5,000 or more in income or expenses provided written documentation to varying degrees for the reporting elements in their disclosure reports. Figure 3 shows that for most LD-2 reports, lobbyists provided documentation for income and expenses for sampled reports from 2010 through 2018, and our 2018 estimate does not represent a statistically significant change from 2017. Figure 4 shows that in 2018, 10 percent of lobbyists’ reported income or expenses differed by $10,000 or more. Additionally, for some LD-2 reports, lobbyists did not round their income or expenses as the guidance requires. 
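The rounding convention at issue can be shown in a short sketch. The guidance requires rounding reported income or expenses to the nearest $10,000; the tie-breaking rule for exact midpoints (half rounds up) is our assumption, since the guidance does not spell one out:

```python
from decimal import Decimal, ROUND_HALF_UP

def round_ld2_amount(amount):
    # Round a reported income or expense figure to the nearest $10,000.
    # Decimal with ROUND_HALF_UP avoids Python's default banker's rounding,
    # so exact midpoints such as $85,000 round up to $90,000.
    tens_of_thousands = (Decimal(amount) / Decimal(10000)).quantize(
        Decimal("1"), rounding=ROUND_HALF_UP)
    return int(tens_of_thousands) * 10000
```

Under this sketch, $84,999 rounds to $80,000 and $85,000 rounds to $90,000; reporting an exact figure such as $84,999 on the LD-2, as some filers believed they should, would not follow the guidance.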
In 2018, we estimate 20 percent of reports did not round reported income or expenses according to the guidance. We have found that rounding difficulties have been a recurring issue on LD-2 reports from 2010 through 2018. As we previously reported, several lobbyists who listed expenses told us that based on their reading of the LD-2 form, they believed they were required to report the exact amount. While this is not consistent with the LDA and the guidance, this may be a source of some of the confusion regarding rounding errors. In 2016, the guidance was updated to include an additional example about rounding expenses to the nearest $10,000. The LDA requires lobbyists to disclose lobbying contacts made with federal agencies on behalf of the client for the reporting period. This year, of the 99 LD-2 reports in our sample, 46 reports disclosed lobbying activities at federal agencies. Of those, lobbyists provided documentation for all disclosed lobbying activities at the federal agencies for 29 LD-2 reports. Figure 5 shows that lobbyists for most LD-2 reports provided documentation for selected elements of their LD-2 reports that include general issue area codes for lobbying activities, lobbying the House and the Senate, and individual lobbyists listed from 2010 through 2018. In 2017 and 2018, compliance with documentation for lobbying the House and the Senate improved over the previous 7 years. For Most Lobbying Disclosure Reports (LD-2), Lobbyists Filed Political Contribution Reports (LD-203) for All Listed Lobbyists Figure 6 shows that lobbyists for most lobbying firms filed contribution reports as required in our sample from 2010 through 2018. All individual lobbyists and lobbying firms reporting lobbying activity are required to file political contribution (LD-203) reports semiannually, even if they have no contributions to report, because they must certify compliance with the gift and travel rules.
For Some LD-2 Reports, Lobbyists May Have Failed to Disclose Previously Held Covered Positions The LDA requires a lobbyist to disclose previously held covered positions in the executive or legislative branch, such as high-ranking agency officials and congressional staff, when first registering as a lobbyist for a new client. This can be done either on a new LD-1 or on the quarterly LD-2 filing when added as a new lobbyist. This year, we estimate that 19 percent of all LD-2 reports may not have properly disclosed previously held covered positions as required. As in our other reports, some lobbyists were still unclear about the need to disclose certain covered positions, such as paid congressional internships or certain executive agency positions. Figure 7 shows the extent to which lobbyists may not have properly disclosed one or more covered positions as required from 2010 through 2018. Some Lobbyists Amended Their Disclosure Reports after We Contacted Them Lobbyists amended 23 of the 99 LD-2 disclosure reports in our original sample to change previously reported information after we contacted them. Of the 23 reports, 10 were amended after we notified the lobbyists of our review, but before we met with them. An additional 13 of the 23 reports were amended after we met with the lobbyists to review their documentation. We consistently find a notable number of amended LD-2 reports in our sample each year following notification of our review. This suggests that sometimes our contact spurs lobbyists to more closely scrutinize their reports than they would have without our review. Table 1 lists reasons lobbying firms in our sample amended their LD-2 reports.
Most LD-203 Contribution Reports Disclosed Political Contributions Listed in the Federal Election Commission Database As part of our review, we compared contributions listed on lobbyists' and lobbying firms' LD-203 reports against those political contributions reported in the Federal Election Commission (FEC) database to identify whether political contributions were omitted on LD-203 reports in our sample. The samples of LD-203 reports we reviewed contained 80 reports with contributions and 80 reports without contributions. We estimate that overall in 2018, lobbyists failed to disclose one or more reportable contributions on 33 percent of reports. Additionally, eight LD-203 reports were amended in response to our review. Table 2 shows our results from 2010 to 2018; estimates in the table have a maximum margin of error of 11 percentage points. For this year's review, the estimated change in the percent of LD-203 reports missing one or more FEC-reportable contributions was a statistically significant increase compared to each of the prior 9 years. Most Lobbying Firms Reported Some Level of Ease in Complying with Disclosure Requirements and Understood Lobbying Terms As part of our review, we conducted interviews with 97 different lobbying firms in the 2018 sample of LD-2 disclosure reports. Consistent with prior reviews, most lobbying firms reported that they found it "very easy" or "somewhat easy" to comply with reporting requirements. Of the 97 different lobbying firms interviewed, 24 reported that the disclosure requirements were "very easy," 61 reported them "somewhat easy," and 11 reported them "somewhat difficult" or "very difficult." One lobbying firm did not respond to this question (see figure 8). Most lobbying firms we surveyed rated the definitions of terms used in LD-2 reporting as "very easy" or "somewhat easy" to understand with regard to meeting their reporting requirements. This is consistent with prior reviews.
Figure 9 shows what lobbyists reported as their ease of understanding the terms associated with LD-2 reporting requirements from 2012 through 2018. The U.S. Attorney's Office for the District of Columbia Continues to Enforce the LDA The U.S. Attorney's Office Has Resources and Authorities to Enforce LDA Compliance Officials from the U.S. Attorney's Office for the District of Columbia (USAO) stated that they continue to have sufficient personnel resources and authority under the LDA to enforce reporting requirements. This includes imposing civil or criminal penalties for noncompliance. Noncompliance refers to a lobbyist's or lobbying firm's failure to comply with the LDA. However, USAO noted that, due to attrition, the number of assigned personnel has changed from 2017, as indicated in table 3. USAO officials stated that lobbyists resolve their noncompliance issues by filing LD-2 reports, LD-203 reports, or LD-2 amendments, or by terminating their registration, depending on the issue. Resolving referrals can take anywhere from a few days to years, depending on the circumstances. During this time, USAO creates summary reports from its database to track the overall number of referrals that are pending or become compliant as a result of the lobbyist receiving an email, phone call, or noncompliance letter. Referrals remain in the pending category until they are resolved. The pending category is divided into the following areas: "initial research for referral," "responded but not compliant," "no response/waiting for a response," "bad address," and "unable to locate." USAO officials noted that they attempt to review and update all pending cases every six months. USAO focuses its enforcement efforts primarily on the "responded but not compliant" and the "no response/waiting for a response" groups.
Officials told us that if, after several attempts, USAO cannot contact the noncompliant firm or its lobbyist, the office confers with both the Secretary of the Senate and the Clerk of the House to determine whether further action is needed. In cases where a lobbying firm is repeatedly referred for not filing disclosure reports but does not appear to be actively lobbying, USAO suspends enforcement actions. USAO officials reported they will continue to monitor these firms and will resume enforcement actions if required. Status of LD-2 Enforcement Efforts USAO received 3,798 referrals from both the Secretary of the Senate and the Clerk of the House for failure to comply with LD-2 reporting requirements cumulatively for filing years 2009 through 2018. Figure 10 shows the number and status of the referrals received, and the number of enforcement actions taken by USAO to bring lobbying firms into compliance. Enforcement actions include USAO attempts to bring lobbyists into compliance through letters, emails, and calls. About 40 percent (1,533 of 3,798) of the total referrals received are now compliant because lobbying firms either filed their reports or terminated their registrations. In addition, some of the referrals were found to be compliant when USAO received the referral, so no action was taken. This may occur when lobbying firms respond to the contact letters from the Secretary of the Senate and the Clerk of the House after USAO received the referrals. About 59 percent (2,250 of 3,798) of referrals are pending further action because USAO could not locate the lobbying firm, did not receive a response from the firm after an enforcement action, or plans to conduct additional research to determine if it can locate the lobbying firm. The remaining 15 referrals did not require action or were suspended because the lobbyist or client was no longer in business or the lobbyist was deceased.
Status of LD-203 Referrals LD-203 referrals consist of two types: (1) LD-203(R) referrals represent lobbying firms that have failed to file LD-203 reports for the firm; and (2) LD-203 referrals represent the lobbyists at a lobbying firm who have failed to file their individual LD-203 reports as required. USAO received 2,629 LD-203(R) referrals for lobbying firms (cumulatively from 2009 through 2018) and 5,897 LD-203 referrals for individual lobbyists (cumulatively from 2009 through 2017) from the Secretary of the Senate and the Clerk of the House for noncompliance with reporting requirements. LD-203 referrals are more complicated than LD-2 referrals because both the lobbying firm and the lobbyists within the firm are each required to file an LD-203. Lobbyists employed by a lobbying firm typically use the firm's contact information and not the lobbyists' personal contact information. This makes it difficult to locate a lobbyist who is not in compliance and may have left the firm. In 2018, USAO officials confirmed that, while many firms have assisted USAO by providing contact information for lobbyists, they are not required to do so. According to officials, USAO has difficulty pursuing LD-203 referrals for lobbyists who have departed a firm without leaving forwarding contact information with the firm. While USAO utilizes web searches and online databases, including social media, to find these missing lobbyists, it is not always successful. Figure 11 shows the status of LD-203(R) lobbying firm referrals received and the number of enforcement actions taken by USAO to bring lobbying firms into compliance. About 42 percent (1,093 of 2,629) of the lobbying firms referred by the Secretary of the Senate and the Clerk of the House for noncompliance from calendar years 2009 through 2018 are now considered compliant because firms either filed their reports or terminated their registrations.
About 58 percent (1,523 of 2,629) of the referrals are pending further action. The remaining 13 referrals did not require action or were suspended because the lobbyist or client was no longer in business or the lobbyist was deceased. USAO received 5,897 LD-203 individual lobbyist referrals from the Secretary of the Senate and the Clerk of the House for lobbyists who failed to comply with LD-203 reporting requirements for calendar years 2009 through 2017. Figure 12 shows the status of the referrals received and the number of enforcement actions taken by USAO to bring lobbyists into compliance. In addition, figure 12 shows that about 32 percent (1,880 of 5,897) of the lobbyists had come into compliance by filing their reports or no longer being registered as lobbyists. About 68 percent (4,003 of 5,897) of the referrals are pending further action because USAO could not locate the lobbyist, did not receive a response from the lobbyist, or plans to conduct additional research to determine if it can locate the lobbyist. The remaining 14 referrals did not require action or were suspended because the lobbyist or client was no longer in business or the lobbyist was deceased. USAO received LD-203 referrals from the Secretary of the Senate and the Clerk of the House for 7,617 individual lobbyists who failed to comply with LD-203 reporting requirements for any filing year from 2009 through 2017. Figure 13 shows the status of compliance for individual lobbyists listed on referrals to USAO. About 36 percent (2,706 of 7,617) of the lobbyists had come into compliance by filing their reports or by not being registered as a lobbyist. About 65 percent (4,911 of 7,617) of the referrals are pending action because USAO could not locate the lobbyists, did not receive a response from the lobbyists, or plans to conduct additional research to determine if it can locate the lobbyists.
USAO officials said that many of the pending LD-203 referrals represent lobbyists who no longer lobby for the lobbying firms affiliated with the referrals, even though these lobbying firms may be listed on the lobbyist's LD-203 report. Status of Enforcement Settlement Actions According to USAO officials, lobbyists and lobbying firms who repeatedly fail to file reports are labeled chronic offenders and referred to one of the assigned attorneys for follow-up. USAO also receives complaints regarding lobbyists who are allegedly lobbying but never filed an LD-203. USAO officials added that USAO monitors and investigates chronic offenders to determine the appropriate enforcement actions, which may include settlement or other civil actions. Additionally, USAO officials reported that they are working to resolve an active case involving a chronic offender firm and lobbyist that was pending as of 2018. USAO officials noted that the agency is continuing settlement discussions with the company, which failed to respond to required LDA violation notices and whose lobbyist did not respond to notices of individual semiannual reporting violations. The company is now current on filing its reports, and USAO is working with the Secretary of the Senate and the Clerk of the House on settling past violations. USAO continues to review its records to identify additional chronic offenders for further action due to noncompliance. Agency Comments We provided a draft of this report to the Department of Justice for review and comment. The Department of Justice did not have comments. We are sending copies of this report to the Attorney General, the Secretary of the Senate, the Clerk of the House of Representatives, and interested congressional committees and members. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2717 or jonesy@gao.gov.
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. Appendix I: List of Lobbyists and Clients for Sampled Lobbying Disclosure Reports The random sample of lobbying disclosure reports we selected was based on a unique combination of House ID, lobbyist, and client names (see table 4). Appendix II: List of Sampled Lobbying Contribution Reports with and without Contributions Listed [Table of sampled lobbyists and lobbying firms omitted.] Appendix III: Objectives, Scope, and Methodology Our objectives were to (1) determine the extent to which lobbyists are able to demonstrate compliance with the Lobbying Disclosure Act of 1995, as amended (LDA), by providing documentation to support information contained on registrations and reports filed under the LDA; (2) identify challenges or potential improvements to compliance, if any; and (3) describe the resources and authorities available to the U.S. Attorney's Office for the District of Columbia (USAO), its role in enforcing LDA compliance, and any efforts it has made to improve LDA enforcement. We used information in the lobbying disclosure database maintained by the Clerk of the House of Representatives (Clerk of the House). To assess whether these disclosure data were sufficiently reliable for the purposes of this report, we reviewed relevant documentation and consulted with knowledgeable officials. Although registrations and reports are filed through a single web portal, each chamber subsequently receives copies of the data and follows different data-cleaning, processing, and editing procedures before storing the data in either individual files (in the House) or databases (in the Senate). Currently, there is no means of reconciling discrepancies between the two databases caused by the differences in data processing.
For example, Senate staff told us during previous reviews they set aside a greater proportion of registration and report submissions than the House for manual review before entering the information into the database. As a result, the Senate database would be slightly less current than the House database on any given day pending review and clearance. House staff told us during previous reviews that they rely heavily on automated processing. In addition, while they manually review reports that do not perfectly match information on file for a given lobbyist or client, staff members approve and upload such reports as originally filed by each lobbyist, even if the reports contain errors or discrepancies (such as a variant on how a name is spelled). Nevertheless, we do not have reason to believe that the content of the Senate and House systems would vary substantially. Based on interviews with knowledgeable officials and a review of documentation, we determined that House disclosure data were sufficiently reliable for identifying a sample of quarterly disclosure reports (LD-2) and for assessing whether newly filed lobbyists also filed required reports. We used the House database for sampling LD-2 reports from the third and fourth quarters of 2017 and the first and second quarters of 2018, as well as for sampling year-end 2017 and midyear 2018 political contributions reports (LD-203). We also used the database for matching quarterly registrations with filed reports. We did not evaluate the Offices of the Secretary of the Senate or the Clerk of the House, both of which have key roles in the lobbying disclosure process. However, we did consult with officials from each office. They provided us with general background information at our request. 
To assess the extent to which lobbyists could provide evidence of their compliance with reporting requirements, we examined a stratified random sample of 99 LD-2 reports from the third and fourth quarters of 2017 and the first and second quarters of 2018. We excluded reports with no lobbying activity or with income or expenses of less than $5,000 from our sampling frame. We drew our sample from 49,918 activity reports filed for the third and fourth quarters of 2017 and the first and second quarters of 2018 available in the public House database, as of our final download date for each quarter. Our sample of LD-2 reports was not designed to detect differences over time. However, we conducted tests of significance for changes from 2010 to 2018 for the generalizable elements of our review. We found that results were generally consistent from year to year and there were few statistically significant changes (as noted in our report) after using a Bonferroni adjustment to account for multiple comparisons. For this year’s review, we estimated that 97 percent of LD-2 reports provided written documentation for the lobbying income and expenses. Our sample is based on a stratified random selection and is only one of a large number of samples that we may have drawn. Because each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95-percent confidence interval. This interval would contain the actual population value for 95 percent of the samples that we could have drawn. The percentage estimates for LD-2 reports have 95-percent confidence intervals of within plus or minus 12 percentage points or fewer of the estimate itself. 
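The confidence-interval language above can be illustrated with a short sketch. This uses the standard normal approximation for a proportion under simple random sampling; GAO's actual estimates come from a stratified design with a design-based variance estimator, so the figures here are only a rough illustration, and the function name is our own.

```python
import math

def moe_95(p_hat: float, n: int) -> float:
    """95-percent margin of error for a sample proportion under
    simple random sampling (normal approximation). Illustrative
    only -- a stratified design like the one described above
    requires a design-based variance estimator instead."""
    return 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)

# The margin is widest when the estimated proportion is 0.5.
# With n = 99 sampled LD-2 reports, that worst case is roughly
# +/- 10 percentage points, on the order of the "plus or minus
# 12 percentage points or fewer" bound stated above.
print(round(moe_95(0.50, 99) * 100, 1))  # 9.8
print(round(moe_95(0.97, 99) * 100, 1))  # 3.4
```

Note that the interval narrows sharply for estimates near 0 or 1, which is why the 97-percent documentation estimate carries a much smaller margin than the worst case.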
We contacted all the lobbyists and lobbying firms in our sample and, using a structured web-based survey, asked them to confirm key elements of the LD-2 and whether they could provide written documentation for key elements in their reports, including the amount of income reported for lobbying activities; the amount of expenses reported on lobbying activities; the names of those lobbyists listed in the report; the houses of Congress and the federal agencies that they lobbied; and the issue codes listed to describe their lobbying activity. After reviewing the survey results for completeness, we interviewed lobbyists and lobbying firms to review the documentation they reported as having on their online survey for selected elements of their respective LD-2 report. Prior to each interview, we conducted a search to determine whether lobbyists properly disclosed their covered positions as required by the LDA. We reviewed the lobbyists' previous work histories by searching lobbying firms' websites, LinkedIn, Leadership Directories, Legistorm, and Google. Prior to 2008, lobbyists were only required to disclose covered official positions held within 2 years of registering as a lobbyist for the client. The Honest Leadership and Open Government Act of 2007 amended that time frame to require disclosure of positions held 20 years before the date the lobbyists first lobbied on behalf of the client. Lobbyists are required to disclose previously held covered official positions either on the client registration (LD-1) or on an LD-2 report. Consequently, those who held covered official positions may have disclosed the information on the LD-1 or an LD-2 report filed prior to the report we examined as part of our random sample.
Therefore, where we found evidence that a lobbyist previously held a covered official position, and that information was not disclosed on the LD-2 report under review, we conducted an additional review of the publicly available Secretary of the Senate or Clerk of the House database to determine whether the lobbyist properly disclosed the covered official position on a prior report or LD-1. Finally, if a lobbyist appeared to hold a covered position that was not disclosed, we asked for an explanation at the interview with the lobbying firm to ensure that our research was accurate. In previous reports, we reported the lower bound of a 90-percent confidence interval to provide a minimum estimate of omitted covered positions and omitted contributions with a 95-percent confidence level. We did so to account for the possibility that our searches may have failed to identify all possible omitted covered positions and contributions. As we have developed our methodology over time, we are more confident in the comprehensiveness of our searches for these items. Accordingly, this report presents the estimated percentages for omitted contributions and omitted covered positions rather than the minimum estimates. As a result, percentage estimates for these items will differ slightly from the minimum percentage estimates presented in prior reports. In addition to examining the content of the LD-2 reports, we confirmed whether the most recent LD-203 reports had been filed for each firm and lobbyist listed on the LD-2 reports in our random sample. Although this review represents a random selection of lobbyists and firms, it is not a direct probability sample of firms filing LD-2 reports or lobbyists listed on LD-2 reports. As such, we did not estimate the likelihood that LD-203 reports were appropriately filed for the population of firms or lobbyists listed on LD-2 reports. 
To determine if the LDA's requirement for lobbyists to file a report in the quarter of registration was met for the third and fourth quarters of 2017 and the first and second quarters of 2018, we used data filed with the Clerk of the House to match newly filed registrations with corresponding disclosure reports. Using an electronic matching algorithm that includes strict and loose text matching procedures, we identified matching disclosure reports for 3,329, or 92.01 percent, of the 3,618 newly filed registrations. We began by standardizing client and lobbyist names in both the report and registration files (including removing punctuation and standardizing words and abbreviations, such as "company" and "CO"). We then matched reports and registrations using the House identification number (which is linked to a unique lobbyist-client pair), as well as the names of the lobbyist and client. For reports we could not match by identification number and standardized name, we also attempted to match reports and registrations by client and lobbyist name, allowing for variations in the names to accommodate minor misspellings or typos. For these cases, we used professional judgment to determine whether cases with typos were sufficiently similar to consider as matches. We could not readily identify matches in the report database for the remaining registrations using electronic means. To assess the accuracy of the LD-203 reports, we analyzed stratified random samples of LD-203 reports from the 29,798 total LD-203 reports. The first sample contains 80 reports of the 9,502 reports with political contributions and the second contains 80 reports of the 20,296 reports listing no contributions. Each sample contains 40 reports from the year-end 2017 filing period and 40 reports from the midyear 2018 filing period.
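The strict-then-loose matching procedure described above can be sketched as follows. This is an illustrative reconstruction, not GAO's actual code: the field names, abbreviation table, and similarity threshold are our own assumptions, and `difflib.SequenceMatcher` stands in for whatever string-comparison routine was actually used.

```python
import re
from difflib import SequenceMatcher

# Hypothetical word-normalization table, mirroring the report's
# "company"/"CO" example; the real list was presumably longer.
ABBREVIATIONS = {"company": "co", "incorporated": "inc", "corporation": "corp"}

def standardize(name: str) -> str:
    """Lowercase, strip punctuation, and normalize common words,
    mirroring the cleaning step described above."""
    words = re.sub(r"[^\w\s]", "", name.lower()).split()
    return " ".join(ABBREVIATIONS.get(w, w) for w in words)

def is_match(reg: dict, rpt: dict, threshold: float = 0.9) -> bool:
    """Strict pass: same House ID plus identical standardized names.
    Loose pass: high string similarity to tolerate minor typos;
    borderline cases would still go to professional judgment."""
    if (reg["house_id"] == rpt["house_id"]
            and standardize(reg["client"]) == standardize(rpt["client"])
            and standardize(reg["lobbyist"]) == standardize(rpt["lobbyist"])):
        return True
    sim = SequenceMatcher(None, standardize(reg["client"]),
                          standardize(rpt["client"])).ratio()
    return sim >= threshold

reg = {"house_id": "12345", "client": "Acme Company, Inc.", "lobbyist": "Smith & Co."}
rpt = {"house_id": "12345", "client": "ACME CO INC", "lobbyist": "Smith & CO"}
print(is_match(reg, rpt))  # True
```

The 0.9 threshold is arbitrary here; the report notes that near-matches were resolved by professional judgment rather than a fixed cutoff.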
The samples from 2018 allow us to generalize estimates in this report to either the population of LD-203 reports with contributions or the population of reports without contributions to within plus or minus 11 percentage points or fewer at the 95-percent confidence level. Although our sample of LD-203 reports was not designed to detect differences over time, for this year's review, the estimated change in the percentage of LD-203 reports missing one or more reportable contributions was a statistically significant increase compared to each of the prior 9 years. While the results provide some confidence that apparent fluctuations in our results across years are likely attributable to sampling error, the inability to detect significant differences may also be related to the nature of our sample, which was relatively small and designed only for cross-sectional analysis. We analyzed the contents of the LD-203 reports and compared them to contribution data found in the publicly available Federal Election Commission's (FEC) political contribution database. We consulted with staff at FEC responsible for administering the database. We determined that the data are sufficiently reliable for the purposes of our reporting objectives. We compared the FEC-reportable contributions on the LD-203 reports with information in the FEC database. The verification process required text and pattern matching procedures, so we used professional judgment when assessing whether an individual listed is the same individual filing an LD-203. For contributions reported in the FEC database and not on the LD-203 report, we asked the lobbyists or organizations to explain why the contribution was not listed on the LD-203 report or to provide documentation of those contributions. As with covered positions on LD-2 disclosure reports, we cannot be certain that our review identified all cases of FEC-reportable contributions that were inappropriately omitted from a lobbyist's LD-203 report.
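The comparison of LD-203 entries against FEC records can be approximated as a set difference over normalized contribution records. This is a hypothetical sketch with invented field names; as noted above, the actual verification also required pattern matching and professional judgment to resolve name collisions, which a simple key comparison cannot do.

```python
def normalize(entry: dict) -> tuple:
    """Key a contribution by contributor, recipient, and amount so
    records from the two sources can be compared directly.
    (Hypothetical field names; real FEC records carry many more.)"""
    return (entry["contributor"].strip().lower(),
            entry["recipient"].strip().lower(),
            round(float(entry["amount"]), 2))

def missing_from_ld203(fec_rows: list, ld203_rows: list) -> list:
    """Return FEC-reportable contributions that do not appear on the
    lobbyist's LD-203 -- candidates for follow-up, not automatic
    findings, since identity matches must be confirmed by judgment."""
    disclosed = {normalize(r) for r in ld203_rows}
    return [r for r in fec_rows if normalize(r) not in disclosed]

fec = [{"contributor": "J. Doe", "recipient": "Campaign A", "amount": "500"},
       {"contributor": "J. Doe", "recipient": "Campaign B", "amount": "1000"}]
ld203 = [{"contributor": "j. doe", "recipient": "campaign a", "amount": "500.00"}]
print(len(missing_from_ld203(fec, ld203)))  # 1
```

Each flagged record would then prompt the follow-up question described above: the lobbyist is asked to explain the omission or provide documentation.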
We did not estimate the percentage of other non-FEC political contributions that were omitted because they tend to constitute a small minority of all listed contributions and cannot be verified against an external source. To identify challenges to compliance, we used a structured web-based survey to obtain the views of the 97 different lobbying firms included in our sample on any challenges to compliance. The number of different lobbying firms, 97, is less than our original sample of 99 reports because some lobbying firms had more than one LD-2 report included in our sample. We calculated responses based on the number of different lobbying firms that we contacted rather than the number of interviews. Prior to our calculations, we removed the duplicate lobbying firms based on the most recent date of their responses. For those cases with the same response date, the decision rule was to keep the cases with the smallest assigned case identification number. To obtain their views, we asked the firms to rate their ease of complying with the LD-2 disclosure requirements using a scale of "very easy," "somewhat easy," "somewhat difficult," or "very difficult." In addition, using the same scale, we asked them to rate the ease of understanding the terms associated with LD-2 reporting requirements. To describe the resources and authorities available to the U.S. Attorney's Office for the District of Columbia (USAO) and its efforts to improve its LDA enforcement, we interviewed USAO officials. We obtained information on the capabilities of the system officials established to track and report compliance trends and referrals and on other practices established to focus resources on LDA enforcement. USAO provided us with reports from the tracking system on the number and status of referrals and chronically noncompliant lobbyists and lobbying firms.
The mandate does not require us to identify lobbyists who failed to register and report in accordance with the LDA requirements, or determine for those lobbyists who did register and report whether all lobbying activity or contributions were disclosed. Therefore, this was outside the scope of our audit. We conducted this performance audit from May 2018 to March 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix IV: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Clifton G. Douglas Jr. (Assistant Director), Shirley Jones (Managing Associate General Counsel), Ulyana Panchishin (Analyst-in-Charge), James Ashley, Krista Loose, Kathleen Jones, Amanda Miller, Sharon Miller, Robert Robinson, Stewart W. Small, Peter Verchinski, and Khristi Wilkins made key contributions to this report. Assisting with lobbyist file reviews were Adam Brooks, Jazzmin R. Cooper, Colleen Corcoran, Rianna B. Jansen, Benjamin Legow, Regina Morrison, Andrew Olson, Amanda R. Prichard, Alan Rozzi, Bryan Sakakeeny, Kate Wulff, and Edith P. Yuh. Related GAO Products Lobbying Disclosure: Observations on Lobbyists' Compliance with New Disclosure Requirements. GAO-08-1099. Washington, D.C.: September 30, 2008. 2008 Lobbying Disclosure: Observations on Lobbyists' Compliance with Disclosure Requirements. GAO-09-487. Washington, D.C.: April 1, 2009. 2009 Lobbying Disclosure: Observations on Lobbyists' Compliance with Disclosure Requirements. GAO-10-499. Washington, D.C.: April 1, 2010. 2010 Lobbying Disclosure: Observations on Lobbyists' Compliance with Disclosure Requirements. GAO-11-452.
Washington, D.C.: April 1, 2011. 2011 Lobbying Disclosure: Observations on Lobbyists' Compliance with Disclosure Requirements. GAO-12-492. Washington, D.C.: March 30, 2012. 2012 Lobbying Disclosure: Observations on Lobbyists' Compliance with Disclosure Requirements. GAO-13-437. Washington, D.C.: April 1, 2013. 2013 Lobbying Disclosure: Observations on Lobbyists' Compliance with Disclosure Requirements. GAO-14-485. Washington, D.C.: May 28, 2014. 2014 Lobbying Disclosure: Observations on Lobbyists' Compliance with Disclosure Requirements. GAO-15-310. Washington, D.C.: March 26, 2015. 2015 Lobbying Disclosure: Observations on Lobbyists' Compliance with Disclosure Requirements. GAO-16-320. Washington, D.C.: March 24, 2016. 2016 Lobbying Disclosure: Observations on Lobbyists' Compliance with Disclosure Requirements. GAO-17-385. Washington, D.C.: March 31, 2017. 2017 Lobbying Disclosure: Observations on Lobbyists' Compliance with Disclosure Requirements. GAO-18-388. Washington, D.C.: March 30, 2018.
Why GAO Did This Study The LDA, as amended, requires lobbyists to file quarterly disclosure reports and semiannual reports on certain political contributions. The law also includes a provision for GAO to annually audit lobbyists' compliance with the LDA. GAO's objectives were to (1) determine the extent to which lobbyists can demonstrate compliance with disclosure requirements; (2) identify any challenges or potential improvements to compliance that lobbyists report; and (3) describe the resources and authorities available to USAO in its role in enforcing LDA compliance. This is GAO's 12th annual report under the provision. GAO reviewed a stratified random sample of 99 quarterly disclosure LD-2 reports filed for the third and fourth quarters of calendar year 2017, and the first and second quarters of calendar year 2018. GAO also reviewed two random samples totaling 160 LD-203 reports from year-end 2017 and midyear 2018. This methodology allowed GAO to generalize to the population of 49,918 disclosure reports with $5,000 or more in lobbying activity, and 29,798 reports of federal political campaign contributions. GAO also interviewed USAO officials. GAO is not making any recommendations in this report. GAO provided a draft of this report to the Department of Justice for review and comment. The agency stated that it did not have comments. What GAO Found For the 2018 reporting period, most lobbyists provided documentation for key elements of their disclosure reports to demonstrate compliance with the Lobbying Disclosure Act of 1995, as amended (LDA). 
For lobbying disclosure (LD-2) reports and political contribution (LD-203) reports filed during the third and fourth quarters of 2017 and the first and second quarters of 2018, GAO estimates that 92 percent of lobbyists who filed new registrations also filed LD-2 reports as required for the quarter in which they first registered (the figure below describes the filing process and enforcement); 97 percent of all lobbyists who filed could provide documentation for lobbying income and expenses. However, an estimated 20 percent of these LD-2 reports were not properly rounded to the nearest $10,000; 19 percent of all LD-2 reports did not properly disclose one or more previously held covered positions as required; and 33 percent of LD-203 reports were missing reportable contributions, which was a statistically significant increase compared to prior years. Except as noted above, these findings are generally consistent with prior reports GAO issued from 2010 through 2017. GAO continues to find that most lobbyists in the sample reported some level of ease in complying with disclosure requirements and in understanding the definitions of terms used in the reporting. However, some disclosure reports demonstrate compliance difficulties, such as failure to disclose covered positions or misreporting of income or expenses. The U.S. Attorney's Office for the District of Columbia (USAO) stated it has sufficient resources to enforce compliance. USAO continued its efforts to resolve noncompliance through filing reports or terminating registrations, as well as imposing civil and criminal penalties.
gao_GAO-19-517
gao_GAO-19-517_0
Background Established by the National Housing Act, FHA’s single-family mortgage insurance program helps home buyers obtain financing by providing insurance on single-family mortgage loans. The mortgage insurance allows FHA-approved private lenders to provide qualified borrowers with mortgages on properties with one to four housing units and generally compensates lenders for nearly all the losses incurred on such loans. To support the program, FHA imposes up-front and annual mortgage insurance premiums on FHA borrowers. The agency has played a particularly large role among first-time, minority, and low-income home buyers. For example, in fiscal year 2017, about 82 percent of FHA- insured home purchase loans went to first-time home buyers and more than 33 percent went to minority home buyers. Foreclosure Mitigation and Property Disposition Methods FHA requires servicers to undertake certain home retention and foreclosure mitigation actions to help delinquent homeowners catch up on late mortgage payments. Before initiating foreclosure actions, FHA requires servicers to contact the borrower, collect information on the borrower’s finances, and attempt informal methods of resolving the delinquency. If informal steps are not appropriate for a borrower’s circumstances, the servicer evaluates the borrower for a series of home retention actions, which include a formal forbearance and repayment plan and a loan modification. Under certain circumstances, the servicer may consider a foreclosure mitigation option, such as a preforeclosure (short) sale or a deed-in-lieu of foreclosure. If the home retention and foreclosure mitigation actions are unsuccessful, the servicer or mortgage note holder is generally entitled to pursue foreclosure to obtain title to the property. The foreclosure process is governed by state laws, but foreclosed properties are typically auctioned at a foreclosure sale. Most foreclosed properties are disposed of in one of two ways. 
Claims without Conveyance of Title (CWCOT). Through FHA’s CWCOT program, the servicer attempts to secure a third-party purchase of an eligible property for an adjusted fair market value that is less than the amount of the servicer’s projected claim. Conveyance. If the foreclosure process is completed and no third party purchases the home at the foreclosure sale, the home usually becomes the property of the servicer. Servicers convey these properties to FHA, which sells them out of its REO inventory. During the default and foreclosure process, servicers must meet two FHA time requirements. The first requires servicers to initiate a foreclosure (first legal action) or utilize a loss mitigation option within 6 months of borrower default. The second requirement, for the “reasonable diligence” period, requires servicers to obtain good and marketable title and possession of a property within a specified time frame that varies by state. The servicer secures the property and obtains possession once the property is vacant. Servicers are subject to financial penalties for missing these deadlines. In both cases, servicers must curtail the debenture interest that they otherwise would be entitled to collect from the date of the missed time frame. Servicers are responsible for maintaining vacant foreclosed properties in accordance with FHA requirements, which specify allowable reimbursable amounts to preserve and protect the property. A servicer needing additional funds to complete the required maintenance must submit an “overallowable” request to FHA. Property Conveyance Process When a servicer forecloses on a property with an FHA-insured mortgage and the property is not sold to a third party through CWCOT, the property is held in the servicer’s name until the servicer conveys the title to FHA. As seen in figure 1, these properties span a range of home types and ages. 
FHA requires servicers to preserve and protect the property and ensure it meets FHA’s conveyance condition standards before conveying title. FHA’s preservation and protection requirements include a number of specific steps for securing, maintaining, and repairing properties and documenting property conditions. FHA reimburses the servicer for up to $5,000 per property for required work, and the servicer may request overallowable funds if needed. FHA’s conveyance condition standards are broader requirements, including that a property be undamaged by natural disaster and in “broom swept” condition, have all damage covered by hazard insurance repaired, and be undamaged by the servicer’s failure to properly secure or maintain the property. HUD regulations state that the servicer must obtain good and marketable title and convey the property to HUD within 30 days of the date on which the servicer filed the foreclosure deed for record or certain other key dates, whichever is later. If a servicer does not believe it will be able to convey the property by this time, it may request an extension from FHA. The servicer files an insurance claim with FHA when it conveys the title. If a servicer does not convey the property within the required time frame and has not received an approved extension, the servicer must curtail the debenture interest and property preservation and protection expenses it claims as of the date of the missed deadline. Shortly after conveyance, FHA pays the Part A claim to the servicer, which includes the unpaid principal balance and debenture interest on the insured mortgage. At this time, FHA becomes responsible for maintaining the property until it is sold. FHA pays the part of the claim that covers eligible property preservation and protection expenses incurred by the servicer once the servicer submits title evidence and documentation of expenses (in Part B of the claim form). 
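The 30-day conveyance deadline and interest-curtailment rule described above can be sketched as a small date calculation. This is a simplified illustration, not HUD's actual claim logic: the function and field names are hypothetical, and the real regulation involves several alternative key dates and extension handling.

```python
from datetime import date, timedelta

def conveyance_deadline(key_dates):
    """30 days after the latest applicable key date
    (e.g., the date the foreclosure deed was filed for record)."""
    return max(key_dates) + timedelta(days=30)

def curtailment_applies(conveyed_on, deadline, approved_extension_days=0):
    """Debenture interest and preservation/protection expenses are curtailed
    if conveyance occurs after the deadline plus any approved extension."""
    return conveyed_on > deadline + timedelta(days=approved_extension_days)

deed_filed = date(2017, 3, 1)
deadline = conveyance_deadline([deed_filed])
print(deadline)                                              # 2017-03-31
print(curtailment_applies(date(2017, 4, 10), deadline))      # True
print(curtailment_applies(date(2017, 4, 10), deadline, 15))  # False
```

An approved extension shifts the effective deadline, which is why the same late conveyance date triggers curtailment in the second call but not the third.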
FHA inspects the property and reviews title evidence before selling the property out of its REO inventory. In some cases, FHA reconveys the title to the servicer if it finds the servicer did not comply with requirements related to property condition or title. When a property is reconveyed, FHA reassigns the title to the servicer and requests repayment of the claim amount. The servicer then must correct any title or property condition issues before it may convey the property and submit a claim to FHA again. Throughout this process, FHA and servicers use the P260 Asset Disposition and Management System (asset disposition system) to communicate and upload documentation about the properties. FHA articulates its property preservation and protection requirements and conveyance condition standards in a mortgagee letter and policy handbook that we refer to collectively as FHA’s conveyance condition policies and procedures. From 2010 through 2017, servicers conveyed about 610,000 properties to FHA. The number of properties conveyed annually peaked in 2012 at about 111,000 (see fig. 2). In 2017, servicers conveyed fewer than 32,000 properties to FHA. The decline in recent years is consistent with improvements in the housing market since the 2007–2011 housing crisis. Roles and Responsibilities in the Conveyance Process In this report, we define FHA’s property conveyance process as beginning when the servicer both obtains good and marketable title and takes possession of a property and ending when FHA assigns a marketing contractor to sell the property out of its REO inventory (see fig. 3). Several FHA contractors and offices play key roles in the conveyance process. Compliance contractor. A nationwide compliance contractor called the mortgagee compliance manager is responsible for protecting FHA’s interests in properties conveyed to FHA and communicates directly with servicers about the properties. 
The compliance contractor reviews property inspections to ensure properties meet conveyance condition standards, reviews requests from servicers for extensions of conveyance times or for overallowable expenses, reviews servicer claims for compliance with requirements, and responds to servicer inquiries about pre- and postconveyance responsibilities. The compliance contractor is located in Oklahoma City, Oklahoma, and is overseen by FHA’s National Servicing Center. Maintenance contractor. Maintenance contractors, called field service managers, are responsible for inspecting properties recently conveyed to FHA and preserving properties in FHA’s REO inventory. FHA has multiple maintenance contractors; they are responsible for properties in different regions. Upon conveyance, the maintenance contractor conducts a comprehensive property inspection to determine if the property meets conveyance condition standards and completes the HUD Property Inspection Report. The maintenance contractor also conducts other inspections at a property before conveyance, as warranted, including a preconveyance inspection at the request of the servicer and an overallowable inspection if requested by the compliance contractor. While these contractors conduct general maintenance on the property, they typically do not make major repairs, because FHA generally sells conveyed properties in as-is condition. Marketing contractor. Marketing contractors, called asset managers, are responsible for marketing and selling the homes in FHA’s REO inventory. FHA homeownership centers. FHA carries out its mortgage insurance and REO disposition programs through four regional offices called homeownership centers (HOC). The centers are located in Atlanta, Georgia; Denver, Colorado; Philadelphia, Pennsylvania; and Santa Ana, California. 
Officials in each HOC are responsible for overseeing the maintenance and marketing contractors for their region and reviewing HUD Property Inspection Reports to determine if conveyed properties should be reconveyed to the servicer due to condition issues. This determination is then forwarded to the compliance contractor for an additional review. HUD’s Office of Finance and Budget. Staff from this office are responsible for reviewing servicer mortgage insurance claims for compliance with FHA requirements. The office selects a sample of claims from the past 3 years to review whether the property preservation and protection expenses were within allowable limits and whether the servicer curtailed debenture interest and property preservation and protection expenses accurately, among other things. Other Participants in the Mortgage Market A number of other federal and federally sponsored entities participate in the mortgage market. Along with FHA, the Department of Veterans Affairs and the Department of Agriculture operate programs that guarantee single-family mortgages made by private lenders. Additionally, two government-sponsored enterprises—Fannie Mae and Freddie Mac (enterprises)—purchase and securitize single-family mortgages. However, the property disposition programs for these entities are not directly analogous to FHA’s. In contrast to FHA, the Department of Veterans Affairs and the enterprises take custody of and are responsible for properties closer to the time of the foreclosure sale. The enterprises require servicers to convey properties to them within 24 hours of foreclosure sale or deed-in-lieu of foreclosure, while the Department of Veterans Affairs requires servicers to provide notice of their intent to convey properties within 15 days of foreclosure sale. Also in contrast to FHA, the Department of Agriculture does not take possession of foreclosed properties with guaranteed loans, but rather oversees their disposition by lenders. 
In FHA’s case, properties are often held in the lender’s or servicer’s name for an extended period after the foreclosure sale. Following the foreclosure sale, FHA requires servicers to oversee properties during redemption periods, to evict residents if properties not in redemption periods are occupied, and to continue property preservation and protection activities. In addition, before conveyance, servicers must identify and pay any homeowners association (HOA) fees and utility bills that are due. As described in figure 3, servicers also must make any required repairs, meet other conveyance requirements, and pass an FHA property inspection, or face the prospect of having the property reconveyed. FHA officials said this approach reduces FHA’s holding time and costs and that the agency does not have the infrastructure to manage and fund property repairs itself. FHA’s Property Conveyance Process Often Takes a Long Time, and Servicers’ and Contractors’ Performance against Time Requirements Varied Conveyance Times Increased after 2011, Partly Due to Greater Use of Other Disposition Methods and Extended Default and Foreclosure Periods Data on Time Frames for Conveyance and Reconveyance From July 2010 through December 2017, the property conveyance process took a median of 70 days, but this figure varied widely by year. Our analysis of FHA data found that, from 2011 through 2015, the median number of days to complete the conveyance process increased four-fold (from 41 to 161 days) and varied more widely around the median each successive year (see fig. 4). Conveyance time frames declined substantially in 2016 and 2017 (to a median of 137 days and 112 days, respectively) while continuing to vary considerably around the median.
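The year-by-year median and spread statistics described above can be reproduced from per-property duration records using only the Python standard library. The records below are invented for illustration; they are not FHA data.

```python
from statistics import median, quantiles
from collections import defaultdict

# (year conveyed, days to complete the conveyance process) -- illustrative values
records = [(2011, 35), (2011, 41), (2011, 90),
           (2012, 60), (2012, 75), (2012, 210),
           (2015, 120), (2015, 161), (2015, 400)]

by_year = defaultdict(list)
for year, days in records:
    by_year[year].append(days)

for year in sorted(by_year):
    vals = sorted(by_year[year])
    q1, med, q3 = quantiles(vals, n=4)  # quartiles; q3 - q1 measures spread
    print(f"{year}: median={med:.0f} days, IQR={q3 - q1:.0f} days")
```

Tracking the interquartile range alongside the median is one way to capture the report's observation that conveyance times "varied more widely around the median each successive year."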
In comparison, FHA officials said the conveyance process generally should take about 37 days to complete—30 days for servicers to make necessary repairs and convey title to FHA and 7 days for FHA to inspect the property, communicate any condition issues identified during the inspection, and assign a marketing contractor to promote and sell it. We also found that the time it took properties to complete the conveyance process varied by HOC region. For the entire July 2010–December 2017 period, the Philadelphia HOC had the highest median time frame (91 days) and the Atlanta HOC the lowest (56 days). The Santa Ana and Denver HOCs had medians of 78 and 67 days, respectively. A number of factors may have contributed to differences among the HOCs, such as the number of properties conveyed in each region (which can affect servicer and HOC capacity) and the age of the housing stock (which can affect the time needed to make repairs). The time to complete the conveyance process includes, when applicable, the time needed for FHA to reconvey a property—that is, transfer ownership to the servicer due to condition or title issues—and for the servicer to convey it to FHA a second time. FHA officials said they try to avoid reconveyances because they prolong the conveyance process and result in FHA incurring additional preservation and protection costs. Figure 5 shows examples of condition issues at properties we visited in the Baltimore, Maryland, and Atlanta, Georgia, metropolitan areas that were in the reconveyance process. Our analysis of FHA data found that reconveyances were not common enough to significantly affect median conveyance time frames, but substantially lengthened the conveyance process when they did occur. As shown in table 1, servicers conveyed 406,863 properties to FHA from 2012 through 2017—the period within our scope for which FHA had reliable reconveyance data. In comparison, FHA reconveyed 8,874 properties to servicers during that time frame. 
The annual number of reconveyances rose from 1,019 in 2012 to 1,935 in 2015, before declining to 1,099 in 2017. We also found that the median time to complete FHA’s conveyance process in 2012–2017 was more than 614 days longer for reconveyed properties than the median for properties not reconveyed. However, the difference between the medians declined over time, dropping from 777 days in 2012 to 267 days in 2017 (see fig. 6). Servicers and FHA must take several steps to complete the conveyance process for reconveyed properties, which may account for some of the length of the time frames. Once the compliance contractor has notified the servicer that a property has condition issues that must be resolved to avoid reconveyance, the servicer may appeal. FHA officials said appeals can add up to 120 days to the conveyance process. If the servicer is unable to resolve the issues and the appeals are denied, FHA reconveys the property and the servicer must reimburse FHA for the original claim amount. The servicer then must complete any required repairs, resolve any title issues, prepare a new evidence package for FHA showing that condition and title issues were addressed, and submit a request to FHA’s compliance contractor to convey the property again. FHA’s compliance contractor then has 10 business days to review the evidence package and notify the servicer of its decision. Once conveyance is approved, the servicer may resubmit a new mortgage insurance claim form and evidence that the property deed has been filed in FHA’s name. Factors Likely Contributing to Increased Length of Conveyance Process Two factors that likely contributed to the increase in the time to complete FHA’s conveyance process are increased use of other disposition methods and property damage stemming from extended default and foreclosure periods. Increased use of third-party sales. 
FHA data indicate that from 2010 through 2017 servicers increasingly disposed of properties through third-party sales using the CWCOT program. As previously noted, in 2014 FHA began requiring servicers to offer all eligible properties for sale through CWCOT before using the conveyance process. According to our analysis of FHA property disposition data, in fiscal years 2010–2017, the share of properties disposed of through CWCOT rose from about 1.4 percent to almost 44 percent, while the share of conveyance and REO sales dropped from about 84 percent to 42 percent (see fig. 7). The remaining properties were disposed of through note sales or preforeclosure sales. Increased use of CWCOT may have extended property conveyance time frames for two reasons. First, servicers must attempt to sell all eligible properties through CWCOT while simultaneously preparing them for conveyance, which may add additional time to the conveyance process according to FHA officials. Second, properties conveyed to FHA because they are not eligible for or sold through the CWCOT program are generally in poorer condition and require more repairs, according to servicer representatives. This may contribute to extended conveyance time frames. For example, a representative from one mortgage industry group told us that properties ineligible for CWCOT and conveyed to FHA generally require more than the $5,000 in preservation and protection costs that FHA allows. In these cases, servicers may request additional funds from the compliance contractor, but processing the requests may prolong the conveyance process, as discussed later in this report. Representatives from one servicer and two mortgage industry groups stated they prefer the CWCOT program because it reduces the need to convey properties. They said the conveyance process is costly and comes with the risk of reconveyance.
FHA data show that REO sales generally had higher loss severity rates (the financial loss on a defaulted loan as a percentage of the unpaid principal balance) than properties disposed of through alternative methods, including the CWCOT program. For example, for the last quarter of fiscal year 2017, FHA reported that the loss severity rate for properties sold through REO was 54.8 percent, while the combined loss severity rate for properties disposed of through alternative methods was 43.8 percent. However, some of this difference may be attributable to the poorer condition of conveyed properties, as discussed previously. National Mortgage Settlement In February 2012, the Department of Justice and 49 states settled with the five largest mortgage servicers— Ally Financial, Inc. (formerly GMAC), Bank of America Corporation, Citigroup Inc., J.P. Morgan Chase & Co., and Wells Fargo & Company — to address mortgage servicing, foreclosure, and bankruptcy abuses. The agreement settled state and federal investigations finding that these servicers routinely signed foreclosure-related documents without verifying their validity and without the presence of a notary public—a practice known as “robosigning.” Extended default and foreclosure periods. According to FHA officials, properties with long default and foreclosure periods may be in poor condition because they deteriorate if servicers delay property maintenance and repairs. FHA officials said this was common for properties conveyed to FHA after the 2012 National Mortgage Settlement because some servicers delayed foreclosure proceedings to limit their exposure to litigation in 2010 and 2011 (see sidebar). FHA officials said that after the Department of Justice issued the National Mortgage Settlement in February 2012, servicers who had been delaying default and foreclosure started conveying large numbers of properties. 
According to FHA and servicer representatives, damaged properties can take longer to convey because they require extensive repairs to meet FHA’s conveyance condition standards. The results of our analysis of FHA data are broadly consistent with these observations. The number of properties conveyed to FHA increased by 31 percent (from 84,363 to 110,567) between 2011 and 2012, the year of the settlement. Additionally, the default and foreclosure period for conveyed properties (the time between the borrower defaulting on the mortgage and the servicer obtaining title to and possession of the property) increased over most of the July 2010–December 2017 time frame. As shown in figure 8, the median default and foreclosure period was 416 days for properties conveyed in July–December 2010, peaked at 664 days (about 60 percent higher) for properties conveyed in 2015, and fell to 612 days for 2017 conveyances. The overall upward trend was even more pronounced for properties with default and foreclosure periods at the 75th percentile. The 75th percentile was 555 days for properties conveyed from July through December 2010, peaked at 1,152 days (about 108 percent higher) for properties conveyed in 2016, and declined to 1,068 days for 2017 conveyances. Certain regulatory and policy changes also may have increased the default and foreclosure periods since 2013. HUD issued a mortgagee letter in 2013 that increased the reasonable diligence time frames and allowed servicers additional time to complete foreclosures in certain states. For example, the reasonable diligence time frame for properties in New York increased from 13 to 19 months. Also, in 2014 mortgage servicing rules issued by the Consumer Financial Protection Bureau went into effect that restricted servicers’ ability to initiate a foreclosure and gave borrowers additional time to pursue loss mitigation options.
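The percentage increases cited above follow the standard percent-change calculation, (new − old) ÷ old. A quick check against the figures in this section:

```python
def pct_change(old, new):
    """Percent change from old to new."""
    return 100 * (new - old) / old

print(round(pct_change(84_363, 110_567)))  # properties conveyed, 2011 -> 2012: 31
print(round(pct_change(416, 664)))         # median default/foreclosure days: 60
print(round(pct_change(555, 1_152)))       # 75th-percentile days: 108
```

The rounded results match the 31, 60, and 108 percent figures reported above.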
Specifically, servicers may not initiate foreclosure proceedings if a borrower’s application is pending for a loan modification or other alternatives to foreclosure. In addition, we found that properties with longer default and foreclosure periods generally took longer to complete the conveyance process than properties with shorter default and foreclosure periods (see fig. 9). Specifically, from July 2010 through December 2017 properties with the longest default and foreclosure periods—those in the highest quartile—took 93 days at the median to complete the conveyance process and 238 days at the 75th percentile. In comparison, properties with the shortest default and foreclosure periods—those in the lowest quartile—took 57 days at the median to complete the conveyance process and 136 days at the 75th percentile for that same period. As previously stated, FHA officials told us that properties with long default and foreclosure periods may have deteriorated if servicers were not maintaining them. These properties may have required additional repairs to bring them into conveyance condition. As previously noted, overall conveyance time frames declined in 2016 and 2017 from their peak in 2015. FHA officials attributed this improvement largely to the decreasing number of conveyances affected by the National Mortgage Settlement. As discussed earlier, the settlement contributed to a wave of properties that took a long time to convey, potentially due to damage sustained during extended default and foreclosure periods. FHA officials also indicated that the improved housing market in recent years has resulted in fewer foreclosures and, therefore, fewer property conveyances to FHA. Consequently, servicers and contractors may be better able to manage the workload associated with property conveyances and complete the process more quickly.
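The quartile comparison described above (conveyance time for properties in the highest versus lowest quartile of default and foreclosure period length) can be sketched as follows. The property records are invented for illustration.

```python
from statistics import median, quantiles

# (default/foreclosure days, conveyance days) per property -- illustrative values
props = [(300, 40), (420, 55), (500, 60), (610, 70),
         (700, 85), (900, 95), (1100, 150), (1400, 240)]

df_days = sorted(d for d, _ in props)
q1, _, q3 = quantiles(df_days, n=4)  # quartile cut points on default/foreclosure length

lowest = [c for d, c in props if d <= q1]    # shortest default/foreclosure periods
highest = [c for d, c in props if d >= q3]   # longest default/foreclosure periods

print("lowest-quartile median conveyance days:", median(lowest))
print("highest-quartile median conveyance days:", median(highest))
```

Grouping on one variable's quartiles and summarizing another variable within each group is the same approach underlying the 93-day versus 57-day comparison in the text.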
Servicers Often Did Not Convey Properties within the Required Time Frame, but Usually Provided Title Evidence on Time From July 2010 through December 2017, servicers generally did not convey properties to FHA within the regulatory 30-day time frame (preconveyance period). During the preconveyance period, servicers must ensure the property has good and marketable title, conduct routine inspections and maintenance on the property, and ensure the property meets conveyance condition standards. If servicers do not believe they will be able to convey a property within 30 days, they may request an extension. The median number of days servicers took to complete the preconveyance period increased from 31 in July–December 2010 to 140 in 2015 (see fig. 10). This figure declined after 2015, dropping to 101 days in 2017. Variation around the median was considerable, especially in more recent years. For example, in 2017 the time to complete the preconveyance period was 43 days at the 25th percentile, compared with 268 days at the 75th percentile. The percentage of properties that servicers did not convey within 30 days plus any approved extension grew from about 31 percent in July–December 2010 to about 72 percent in 2017. For the entire period from July 2010 through December 2017, the corresponding percentage was 55 percent. Representatives of 13 of the 20 servicers we interviewed said that meeting the 30-day timeline was one of their top challenges with the conveyance process. Representatives of servicers and mortgage industry groups cited several reasons for servicers needing additional time to convey. For example, representatives of 11 servicers cited the heavily damaged condition of the properties they acquired as one of the primary reasons for not conveying properties within 30 days. Servicer representatives also noted other reasons, including four who cited waiting for responses on hazard insurance claims and five who cited difficulty in obtaining HOA bills to pay.
In addition, representatives of two mortgage industry groups and three servicers told us that meeting all the conveyance and title requirements simultaneously is a major challenge. For example, representatives of one mortgage industry group said a servicer may have completed required property repairs and paid HOA fees and utility bills, but if the property were subsequently vandalized, the servicer would have to delay conveyance to complete repairs. By that point, the servicer might no longer be current on HOA and utility payments. Servicers have the option to request an extension to the preconveyance time frame if they think they will be unable to convey a property in 30 days. Servicers requested a conveyance extension for about 40 percent of the properties conveyed from July 2010 through December 2017. FHA approved the extensions in about 40 percent of these cases. In addition, representatives from six of the 20 servicers we interviewed said FHA’s process for reviewing servicers’ overallowable requests (additional funds needed to complete work) negatively affected their ability to convey properties in 30 days. Once a servicer makes an overallowable request, FHA’s compliance contractor has 5 business days to review it and either reject the request or approve all or some of the requested amount. (We discuss the compliance contractor’s ability to meet this and other time requirements in the following section.) Servicers may appeal any rejections, in which case the compliance contractor has 3 business days to make a final determination. Six servicer representatives said that the time it takes the compliance contractor to make overallowable decisions may cause them to exceed the 30-day time frame, especially when they submit multiple requests for the same property. For context, our analysis of FHA data found that in 2017, the median number of servicer overallowable requests per property was 13, and the median number of appeals per property was six. 
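The 5-business-day review clock and 3-business-day appeal clock mentioned above can be computed with a small helper. This sketch counts only weekends as non-business days; federal holidays, which the actual contract terms presumably address, are ignored here.

```python
from datetime import date, timedelta

def add_business_days(start, n):
    """Return the date n business days (Mon-Fri) after start,
    skipping weekends; holidays are ignored in this sketch."""
    d = start
    while n > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Mon=0 .. Fri=4
            n -= 1
    return d

submitted = date(2017, 6, 1)           # a Thursday
review_due = add_business_days(submitted, 5)
appeal_due = add_business_days(review_due, 3)
print(review_due)  # 2017-06-08
print(appeal_due)  # 2017-06-13
```

As the text notes, a servicer submitting multiple overallowable requests for the same property waits out this clock each time, which helps explain how the reviews can consume much of the 30-day preconveyance window.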
In contrast to the preconveyance requirement, servicers usually met the time requirement for giving title evidence to FHA. Title evidence includes documentation that FHA is the legal owner of the property, including a copy of the mortgage documentation, a legal description of the property, and a copy of the recorded deed in FHA’s name. Servicers may provide title evidence to FHA at any point during the conveyance process up to 45 days after filing the deed. If servicers believe they will be unable to provide title evidence within 45 days, they may submit an extension request to the compliance contractor. According to FHA data, servicers were able to provide title evidence within 45 days plus any approved extension for 84 percent of properties conveyed from July 2010 through December 2017. FHA Contractors Involved in Property Conveyances Largely Met Time Requirements FHA’s compliance and maintenance contractors generally met the required time frames for key conveyance tasks for properties conveyed from 2011 through 2017. However, when the contractors did not meet their required time frames, the delays may have lengthened the time to complete the conveyance process for some properties. Compliance contractor. FHA established a time frame of 5 business days for the compliance contractor to conduct various reviews in the pre- and postconveyance periods. In the preconveyance period, the compliance contractor reviews requests for overallowables, conveyance extensions, and conveyance of a property with surchargeable damage. The contractor also reviews title evidence and extension requests for title evidence, which are generally submitted after conveyance. Table 2 shows the percentage of properties conveyed from 2011 through 2017 for which the compliance contractor met the 5 business day requirement, according to our analysis of FHA data. 
Although the contractor mostly met the required time frames, when it did not, the delay may have lengthened the time to complete the conveyance process. Our analysis of FHA data indicates that when the compliance contractor missed the deadlines, it missed them by a median of 4–10 days, depending on the requirement. The compliance contractor’s review of overallowable requests, conveyance extension requests, and surchargeable damage requests generally occurs during the preconveyance period when servicers have 30 days to convey the property to FHA. As noted earlier, some servicer representatives we interviewed said that waiting for the compliance contractor to approve or deny overallowable requests hindered their ability to convey the property in 30 days. The compliance contractor must complete at least 95 percent of the reviews within the 5-day time frame to meet FHA’s standard for minimum acceptable performance. FHA uses monthly scorecards when reviewing the contractor’s performance against this standard. FHA officials told us they had not issued any deficiency notices to the current compliance contractor, but that discussions with the contractor can occur when it does not meet the 95 percent standard in particular months. FHA officials also noted that some of the contractor’s reviews may take longer than 5 days if resolving them requires obtaining additional documentation or substantial back-and-forth communications with the servicer. Maintenance contractors. After conveyance, FHA’s maintenance contractors have 2 calendar days from the date they are assigned a property to conduct the comprehensive property inspection and upload the results into a HUD Property Inspection Report in FHA’s asset disposition system. They then have 5 calendar days to complete a Property Condition Report, which details the functionality of the property’s systems, the existence of any transferable warranties, and any legal actions, such as code violations or pending demolition orders. 
FHA starts measuring compliance with these time requirements 24 hours after the properties are assigned to the compliance contractor (to account for holidays and late afternoon assignments). According to our analysis of FHA data, the maintenance contractors completed property inspections and uploaded the results within 3 days (the 2-day requirement plus 24 hours) for about 90 percent of properties conveyed from 2011 through 2017. The contractors met the 5-day requirement to complete the Property Condition Report about 77 percent of the time. When the maintenance contractors missed these time frames, they missed them by a median of 1 and 2 days, respectively. The longer a property remains uninspected after the servicer has conveyed it, the greater the chance that it will be damaged or vandalized before inspection. If a property is damaged during this period, disputes may arise between FHA and the servicer about which entity is responsible for the damage. FHA is responsible for maintaining the property once the servicer complies with all HUD regulatory requirements leading to conveyance, including filing the deed (in FHA’s name) for record and filing the conveyance claim. However, FHA may hold the servicer responsible for the damage if the claim was suspended due to the need for review or correction resulting from certain types of noncompliance with HUD requirements or if the servicer could not prove the damage occurred after FHA became responsible for maintaining the property. Disagreement over this issue can add time to the conveyance process. FHA measures each maintenance contractor’s performance monthly using a formula that considers both the contractor’s timeliness in completing property inspections and uploading the results (2-day requirement plus 24 hours) and in completing the Property Condition Report (7-day requirement plus 24 hours) for each property. If the contractor misses either deadline, it is not considered timely for that property. 
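The both-deadlines rule just described can be sketched as follows. This is only an illustrative sketch: the field names, helper functions, and exact day limits (2 days plus 24 hours for the inspection upload, and 5 days plus 24 hours for the Property Condition Report, measured from inspection completion) are our own assumptions based on the description above, not FHA's actual scorecard formula.

```python
from datetime import date

# Illustrative sketch only: a property counts as timely for the monthly
# measure only if BOTH the inspection-upload deadline and the Property
# Condition Report deadline are met. Limits include the 24-hour grace
# period; these values are assumptions, not FHA's actual formula.

def property_timely(assigned, inspected, report_done,
                    inspect_limit=3, report_limit=6):
    inspect_ok = (inspected - assigned).days <= inspect_limit
    report_ok = (report_done - inspected).days <= report_limit
    return inspect_ok and report_ok  # missing either deadline fails

def monthly_timeliness(properties):
    # properties: list of (assigned, inspected, report_done) date tuples
    return sum(property_timely(*p) for p in properties) / len(properties)

props = [
    (date(2017, 3, 1), date(2017, 3, 3), date(2017, 3, 7)),  # both met
    (date(2017, 3, 1), date(2017, 3, 6), date(2017, 3, 8)),  # late inspection
]
rate = monthly_timeliness(props)  # 0.5, well below the 95 percent standard
```

Under this rule, a contractor that uploads inspections promptly but routinely misses the report deadline would still score as untimely for those properties.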
FHA considers timeliness for 95 percent of properties each month as satisfactory. According to FHA officials, FHA has taken actions when the performance of maintenance contractors was not satisfactory. A HOC official said that the actions may include issuing a defective performance letter, which requires the contractor to provide a remedy plan, and issuing a cure notice in coordination with HUD’s contracting office.

FHA Changed Aspects of the Conveyance Process, but Policies and a Pilot Program Still Have Limitations

FHA’s Updates to Property Preservation Allowances and Data Systems Partly Addressed Certain Servicer Challenges

FHA updated aspects of the conveyance process in recent years to help address some of the challenges experienced by servicers and the agency. For example, FHA increased property preservation and protection allowances in 2016 to help address servicer feedback and to better align allowances with other mortgage industry participants, according to FHA officials. In February 2016, FHA issued Mortgagee Letter 2016-02, which increased allowance amounts that servicers may claim for specific types of property preservation and protection work. It also increased the total maximum amount servicers may claim for a property without submitting an overallowable request from $2,500 to $5,000. However, the mortgagee letter eliminated all exclusions from the maximum amount, which previously included one-time major repairs, such as a roof replacement. FHA officials said the agency increased the allowance amounts to account for the standard increases in property preservation costs over time, and to align allowances with those of the enterprises and the Department of Veterans Affairs. Seventeen of the 20 servicers we interviewed said that FHA’s current property preservation and protection allowances are not sufficient to complete the work needed to convey properties.
While representatives of eight of the 20 servicers told us the changes FHA made to allowances in 2016 helped them complete work within allowance amounts, representatives of the remaining 12 servicers said the changes did not help or helped in some ways but presented more challenges in other ways. Representatives of an association of mortgage lenders and servicers said that they preferred the previous system, because some work was excluded from the maximum allowance. For example, representatives of one servicer said that due to the 2016 changes, they now must submit an overallowable request for standard maintenance items, such as grass cuts, once they have exceeded the maximum allowance amount. Our analysis of FHA data found that the percentage of properties with at least one overallowable request increased steadily from 2011 through 2017—from about 53 percent to about 90 percent—despite the 2016 changes (see fig. 11). For properties with at least one overallowable request, the median number of requests before the 2016 changes (from July 1, 2010, through February 4, 2016) was seven, compared with eight after the changes (from February 5, 2016, through the end of 2017). In 2017 alone, the median number of overallowable requests per conveyed property was 13. However, it may be too early to tell what effect the 2016 mortgagee letter will have on servicers’ ability to conduct work within the allowances. FHA officials said that although the change in the allowance amounts was partly intended to reduce overallowable requests, the poor condition of many properties with extended default and foreclosure periods may have increased such requests. The officials stated that some of these properties were still being conveyed to FHA in 2017.

FHA enhanced the information system servicers and contractors use to manage conveyed properties, but officials noted the need to update another system FHA uses to process and pay claims.
In March 2018, FHA incorporated its preconveyance inspection pilot, discussed in more detail later in this report, into the asset disposition system, the information system servicers use to convey properties to FHA. FHA officials and contractors also use the system to track properties from conveyance through REO sale. With the update, servicers may request a preconveyance inspection and see the results of the inspection in the system, according to FHA officials. Before FHA added the pilot to the asset disposition system, servicers and FHA used email to communicate about properties in the pilot. In October 2016, FHA added a feature to the asset disposition system that enables FHA officials, contractors, and servicers to electronically monitor the status of reconveyed properties. According to FHA officials, all communication between FHA and servicers on reconveyed properties previously was by email, including the servicer’s notification to FHA that it was ready to convey a property again, and the photographs required to document property condition. However, according to FHA officials, FHA’s claims system is not equipped to process more than one claim per property, so claims for properties FHA reconveys and which the servicer then conveys to FHA a second time must be processed manually. Officials from FHA’s Office of Financial Services said that manual processing delays claim payments to servicers—sometimes by more than a year. Seven of the 20 servicers we interviewed identified delayed claim payments for reacquired properties as a challenge. FHA officials said that they have made an internal business case for funding to modernize the system, but have not succeeded in securing the funding in prior years. 
FHA Updated Its Policies and Procedures on Conveyance Condition, but Limitations Remain

FHA updated its written direction to servicers on conveyance condition in 2016, but limitations in the contents and methods of communicating these policies and procedures have contributed to compliance challenges for some servicers. In its February 2016 mortgagee letter, FHA re-emphasized its existing directions to servicers about property conveyance, provided additional details on how to calculate claim amounts and document property preservation and protection work, and clarified descriptions of some preservation and protection requirements. Additionally, in December 2016, FHA issued an updated single-family housing policy handbook that consolidated all policies and procedures for servicers into one document, including those on maintaining and conveying foreclosed properties. However, servicers and other industry stakeholders with whom we spoke and our review of FHA’s policies and procedures on conveyance condition identified several limitations, as follows. Lack of clarity or specificity. Representatives from 15 of the 20 servicers we interviewed said they found FHA’s policies and procedures on conveyance condition to be unclear or subjective, and 13 cited specific parts of the conveyance condition standards they found to be unclear or missing. For example, one servicer was unsure about the extent of repairs required when a property had water seepage in the basement. We found that FHA’s policies and procedures include information on how to treat a basement that is flooded or a property with moisture damage, but do not address basement leaks, cracks, or seepage. Representatives of four servicers said that FHA’s policies and procedures do not sufficiently address how servicers should handle properties with potential structural or foundation damage.
Consistent with this viewpoint, we found that FHA’s handbook and mortgagee letter do not explain what a servicer should do if it believes a property has damage affecting its structural integrity. In addition, representatives of three servicers said FHA’s expectations of them are unclear when a roof is damaged but does not currently have a leak. According to FHA’s policies and procedures, servicers must ensure all roofs “are free of active leaks or other sources of water intrusion.” However, FHA does not specify what servicers should do if there is roof damage but no active leak. Two of the servicers said they were uncertain whether they should replace the damaged roof that is not leaking, or convey the property and risk reconveyance if it rains before FHA inspects the property and the roof leaks. Perceived inconsistency in interpretation. Representatives from 10 of the 20 servicers we interviewed said FHA is somewhat or not at all consistent in determining whether properties meet FHA’s conveyance requirements. Among the remaining 10, one stated that FHA is completely consistent and nine said that FHA is mostly consistent. In addition, two of the 20 servicers said the answers they receive from FHA to the same question differ depending on whom they ask. HOC officials also noted cases in which their interpretation of policies and procedures differed from the compliance contractor’s. For example, officials from three HOCs told us that the compliance contractor sometimes disagrees with their determination that a property is not in conveyance condition when the contractor reviews the HOC’s reconveyance decision. Limited communication methods. In addition to formal written policies and procedures on conveyance condition, FHA fields servicer questions, primarily through its compliance contractor, by phone. 
The compliance contractor also issues an annual newsletter on topics such as common reconveyance triggers and best practices for submitting successful overallowable and extension requests. However, some servicers we interviewed suggested other possible ways to communicate policies and procedures that they said they would find helpful, including the following:

Representatives of five servicers said they would like FHA to publish an authoritative set of frequently asked questions (FAQ) on conveyance condition. FHA has an FAQ web page that includes information on conveyance condition, but, as of April 2019, did not include FAQs about the specific property preservation and protection issues discussed above (water seepage, structural integrity, and roof damage with no active leaks). In addition, a link in the web page labeled “foreclosure/conveyance” led to a few FAQs on conveyance condition and property preservation requirements, but the answers consisted solely of language from FHA’s existing policies and procedures.

One servicer’s representatives suggested that FHA could issue policies and procedures in a format similar to Fannie Mae’s Property Preservation Matrix and Reference Guide. This guide has features that FHA’s policies and procedures do not have, as discussed below, including photographic examples, detailed requirements for photographic documentation, and “if-then” statements detailing what servicers should do if they encounter certain challenges at a property.

Representatives of two servicers suggested that FHA host regular industry calls. While the compliance contractor told us that it takes ad hoc calls and holds regular teleconference calls with a number of individual servicers, an FHA official told us the contractor is only authorized to respond to servicer questions by providing relevant parts of FHA’s written policies and procedures and is not supposed to respond with interpretations (clarifications or explanations) of existing policies and procedures. Representatives from one servicer said industrywide calls with FHA staff would give servicers a way to obtain fuller explanations of FHA’s expectations.

One servicer suggested that FHA provide training to servicers about the conveyance process. The servicer noted that while FHA provides training on other aspects of its program, including loss mitigation, it does not do so for the conveyance process or submitting claims.

Limited direction on photographic evidence. FHA’s policies and procedures provide instructions for servicers and contractors on how to document property conditions, but contain limited direction on photographic evidence. Servicers must thoroughly document the condition of the property when they first obtain possession so that FHA does not hold them responsible for damage caused by the borrower. Servicers also must take before and after pictures of any work they do on the property. FHA’s policies and procedures on photographic documentation say only that the servicer must use digital photography, ensure a date-stamp is printed within each photograph, and ensure that each photograph is labeled to describe the contents of the photograph. FHA has not communicated in writing any requirements for photograph dimensions, color, distance, framing, or content or suggestions for documenting conditions that may be difficult to see. Servicers and FHA officials stated that they face challenges in documenting property conditions in a way that most accurately informs the compliance contractor about the property.
The compliance contractor reviews documentation, including photographs, uploaded into the asset disposition system by servicers to make decisions on overallowable and extension requests. The compliance contractor also reviews documentation from maintenance contractors on inspection results and reconveyance recommendations by HOC officials. An FHA maintenance contractor told us that the compliance contractor sometimes responds that the condition described is not apparent from the photographs in the asset disposition system. According to members of an industry group representing servicers, in some cases this may result in FHA requiring servicers to repair damage caused by the borrower, because the servicers’ photographs did not prove the damage was present when they first gained possession of the property. To illustrate how photographs can effectively or ineffectively capture property condition problems, figure 12 provides two examples of flooring issues at properties conveyed to FHA. In one photograph, the buckling of the floor is apparent, but in the other, the waterlogged and warped condition of the floor is harder to discern. An experienced FHA maintenance contractor told us there are creative ways to document some conditions that are difficult to photograph. For example, to document a damp floor, one can photograph a piece of paper (which darkens when wet) before and after placing it on the floor. This method is not included in FHA’s handbook or mortgagee letter. Limitations in the content and delivery of FHA’s policies and procedures on conveyance condition suggest room for improvement and are inconsistent with the federal internal control standard for communicating externally. This standard calls for management to externally communicate the necessary quality information to achieve an entity’s objectives.
Federal agencies can help ensure compliance by communicating with and obtaining information from external parties and by periodically evaluating and selecting appropriate methods of communication, taking into account factors such as the audience, the purpose and type of information being given, and legal or regulatory requirements. However, FHA has not identified where the conveyance condition policies and procedures could be improved because it has not assessed information from servicers—for example, the frequency or content of their questions to the compliance contractor. FHA also has not thoroughly evaluated its methods for communicating its policies and procedures. As a result, FHA has limited assurance that servicers understand FHA’s expectations for conveyed properties and that contractor decisions are made consistently. Weaknesses in these areas can contribute to inefficiencies such as delays in executing conveyances and reconveyance of properties to servicers.

FHA Has Not Provided Direction on Alternatives to Reconveyance for Properties That Do Not Meet Conveyance Condition Standards

FHA has not provided written direction to HOC officials on choosing among alternatives to address conveyed properties that do not meet FHA’s condition standards. According to officials from FHA headquarters and the National Servicing Center, HOC officials can (1) reconvey the property’s title to the servicer, (2) issue a demand letter establishing a debt to FHA for the cost of the work needed, or (3) enter into a reconveyance bypass agreement with the servicer that requires the servicer to complete repairs within a certain number of days. The latter two options avoid reconveyance and therefore may expedite resale of the property. These three options are mentioned in different parts of FHA’s policies and procedures, but the agency has not outlined the circumstances that would warrant use of each method.
FHA has not provided direction to the HOCs, partly because HOC officials have the authority to choose a method based on the expected financial return on the property. However, HOC officials with whom we spoke differed in the factors that they considered when deciding how to address properties that do not meet FHA’s conveyance condition standards. Officials from three HOCs cited a criterion that any property with more than $5,000 in damage due to servicer neglect should be considered for reconveyance, while a bypass agreement or demand letter may be issued if the amount of servicer neglect is less than $5,000. However, FHA officials were not able to tell us where this criterion is written. An official from the fourth HOC said the decision to reconvey partly depends on the strength of the housing market. If the HOC believes it can sell the property in its current condition—even if the condition does not meet FHA’s conveyance standards—the HOC will be more likely to issue the servicer a demand letter than reconvey the property. In contrast, an official from one of the other HOCs told us the state of the housing market did not factor into decisions on reconveyance. Furthermore, according to FHA officials, HOCs may also reconvey a property with only small amounts of damage if the servicer frequently conveys properties not in conveyance condition, in order to impress on the servicer the importance of complying with FHA requirements. The HOC officials generally agreed that bypass agreements offer a way for small repairs to be fixed quickly. However, an official from one HOC said the HOC did not issue bypass agreements often because servicers’ property preservation and protection vendors may take longer than the time specified in the agreement to complete repairs and, since the title is in FHA’s name, FHA has no recourse with the servicer.
An official from another HOC also said that he did not like issuing bypass agreements because servicers do not always complete repairs quickly. FHA does not produce reports on the HOCs’ use of reconveyance, demand letters, and bypass agreements, so the frequency with which the HOCs employ these methods is unknown. Some servicer representatives with whom we spoke noted apparent inconsistency among the HOCs. For example, representatives of three servicers said that some HOCs do not issue bypass agreements at all. Similarly, representatives of one servicer told us they have infrequently, if ever, received a demand letter for small condition issues at properties; rather, FHA reconveys the properties for minor condition issues. FHA’s lack of written direction on alternatives to reconveyance is inconsistent with federal internal control standards, which call for designing control activities, including policies, to achieve objectives. Granting HOC officials discretion in dealing with properties that do not meet condition standards gives them flexibility to respond to specific circumstances. However, without written direction on factors to consider when determining whether they should reconvey a property, issue a demand letter, or enter into a bypass agreement with the servicer, FHA lacks reasonable assurance that HOCs make determinations consistently and in line with the agency’s regulatory goals for the REO program—to dispose of properties in a manner that expands home ownership, strengthens neighborhoods and communities, and ensures a maximum return to the mortgage insurance fund. Balancing these goals may require using different methods to address properties that do not meet conveyance standards. For example, in some cases issuing a demand letter or a bypass agreement for certain properties may result in FHA marketing and selling the property more quickly than it would by reconveying the property. 
A quicker sale, in turn, may help avoid the negative effects of a vacant property on the surrounding neighborhood. However, if FHA accepts a property in poor condition, it may receive less in proceeds when selling the property, which negatively affects FHA’s mortgage insurance fund.

FHA Does Not Have a Plan to Evaluate Its Preconveyance Inspection Pilot

FHA began a pilot program in 2017 to inspect properties that meet certain criteria before conveyance, but has not developed a plan to assess the results of the pilot program. FHA selected three large servicers to participate in this preconveyance inspection pilot. These servicers may request preconveyance inspections for properties with characteristics that increase their chances of being reconveyed, according to FHA officials. For example, eligible properties include those that experienced recurring vandalism, received overallowable repairs of greater than $5,000, or have potential structural defects, foundation issues, or damp or wet basements. Based on the inspection results, the properties are approved to convey, approved to convey subject to repair with no additional inspection, or denied conveyance through the pilot (see table 3). After conveyance, FHA inspectors conduct a thorough inspection to confirm that the property meets conveyance condition standards. Properties that do not meet the standards may be reconveyed. As of November 2018, FHA had not developed plans for evaluating the effectiveness of the pilot in achieving the goals of reducing the number of properties reconveyed due to property condition and minimizing the time it takes to convey properties. FHA officials told us that they will develop a plan to assess pilot outcomes when sufficient data are available. However, without an evaluation plan, FHA may not collect the right information during the pilot to rigorously assess results.
GAO’s guide for designing evaluations states that key components of an evaluation design include the evaluation questions or objectives; information sources and measures; data collection methods; an analysis plan, including evaluative criteria or comparisons; and an assessment of study limitations. Certain characteristics of FHA’s pilot underscore the importance of incorporating these components into evaluation design. For example, because the pilot is intended to expedite the conveyance process through preconveyance inspections, it will be important to isolate the impact of the inspections, potentially by making comparisons to a control group. A properly selected control group can rule out competing explanations for observed outcomes. Additionally, because the pilot may affect participating servicers in ways that extend beyond the speed of the conveyance process or the probability of reconveyance, it will be important for FHA to thoroughly consider the information sources and measures it uses, including participant feedback. For example, representatives of the three participating servicers told us they had concerns about FHA holding properties in the pilot to higher conveyance condition standards than nonpilot properties and the time it takes to complete the preconveyance inspection process. According to the representatives, this process, which includes 7 calendar days for the inspection and 5 business days for the HOCs to review the inspection report, has resulted in longer holding times and increased vandalism risks. Without a well-designed evaluation, FHA risks making decisions about preconveyance inspections based on incorrect or incomplete information on the pilot’s benefits and drawbacks.

Conclusions

While FHA increased the use of other property disposition methods in recent years, servicers still convey thousands of foreclosed properties to FHA annually.
If the process of transferring ownership from the servicer to FHA is not efficient, these properties may sit vacant for prolonged periods, deteriorate, and contribute to neighborhood decline. As a result, it is critical for FHA to have effective and efficient policies and procedures for the conveyance process. While FHA has made recent updates to its handbook, mortgagee letters, and information systems, additional improvements would better align its processes and procedures with federal internal control standards and GAO guidance on designing evaluations: First, by addressing limitations in the content (including its detail) and communication of its policies and procedures on conveyance condition, FHA could help reduce uncertainty and inconsistency in the conveyance process that may contribute to inefficiencies, such as reconveyance of properties to servicers. Second, by providing direction to HOC officials on factors to consider when deciding whether to use alternatives to reconveyance for properties that do not meet conveyance condition standards, FHA could increase the likelihood that alternatives will be used consistently and in line with FHA’s goals for the REO program. Third, by developing a plan for how it will evaluate the outcomes of the pilot to inspect certain properties prior to conveyance, FHA could help ensure the pilot generates the performance information needed to make effective management decisions about future policies. By addressing these issues, FHA could make the conveyance process more efficient and therefore help reduce negative impacts on neighborhoods.

Recommendations

We are making the following three recommendations to FHA:

The Commissioner of FHA should enhance the content and communication of FHA’s policies and procedures on conveyance condition, including by considering the program stakeholder views discussed in this report and other stakeholder input.
(Recommendation 1)

The Commissioner of FHA should provide written direction to HOC REO directors on factors to consider when determining whether to reconvey a property with condition issues, issue a demand letter, or enter into a bypass agreement with the servicer. (Recommendation 2)

The Commissioner of FHA should develop a formal plan for evaluating the outcomes of the preconveyance inspection pilot that includes key elements of evaluation design—such as evaluation objectives and measures—and utilizes participant feedback and control groups, as appropriate. (Recommendation 3)

Agency Comments and Our Evaluation

We provided a draft of this report to FHA, the Department of Veterans Affairs, and the Federal Housing Finance Agency (the conservator and regulator of Fannie Mae and Freddie Mac) for their review and comment. The Department of Veterans Affairs and the Federal Housing Finance Agency did not provide comments. FHA provided written comments reproduced in appendix II. FHA neither agreed nor disagreed with our first recommendation to enhance the content and communication of its policies and procedures on conveyance condition. FHA cited the 2016 updates to its policy handbook and mortgagee letter and said it recognized the importance of external communication, training, and in-person meetings to ensure servicers have the information they need to operate in compliance with FHA programs. Our report discusses these updates, but also identifies areas for additional improvements to address limitations in the clarity and comprehensiveness of FHA’s policies and procedures and methods for communicating them. FHA agreed with our second and third recommendations to provide written direction on considering alternatives to reconveyance and to develop a plan for evaluating the preconveyance inspection pilot.
We are sending copies of this report to the appropriate congressional committees, the Secretary of the Department of Housing and Urban Development, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or garciadiazd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III.

Appendix I: Objectives, Scope, and Methodology

Our objectives were to examine (1) timelines for Federal Housing Administration (FHA) foreclosed property conveyances in June 2010–December 2017 and the extent to which servicers and FHA met time requirements and (2) changes FHA has made to the conveyance process in recent years and any ongoing process challenges.

Time Lines for Property Conveyances

To address the first objective, we obtained data from FHA’s Single Family Insurance System–Claims Subsystem and from the P260 Asset Disposition and Management System (asset disposition system) on the 610,802 foreclosed properties mortgage servicers conveyed to FHA from January 1, 2010, through December 31, 2017. For purposes of our analysis, we generally excluded properties conveyed to FHA from January 2010 through June 2010 because they were managed using different data systems and contractors than FHA currently uses. After excluding these properties, we analyzed data for 544,421 properties conveyed to FHA from July 2010 through December 2017. (We use calendar years in this report unless otherwise noted.) We calculated the number of days it took each property to complete the conveyance process.
We defined the start of the conveyance process as the date the servicer obtained possession and acquired marketable title for a property and the end of the process as the date on which FHA assigned a marketing contractor to sell the property. For each annual cohort of conveyed properties, we calculated the 25th, 50th (median), and 75th percentile time frames and compared these statistics across years. To analyze the effect that reconveyances had on the length of the conveyance process, we compared length of time for conveyance in 2012–2017 for properties that were reconveyed to those that were not. According to FHA staff, data on reconveyances were unreliable prior to 2012, so we excluded those properties from this comparative analysis. We interviewed FHA officials about factors that may have affected conveyance time frames from 2010 through 2017, including increased use of other disposition methods and servicers delaying foreclosure actions and the resulting impact on property conditions, since the asset disposition system does not disclose the reasons for any delays. To analyze changes in the use of different property disposition methods and to examine the loss severity rates for these methods, we reviewed FHA data for fiscal years 2010–2017. To understand the relationship between properties with long default and foreclosure periods and conveyance time frames, we measured the time between the borrower defaulting on the mortgage and the servicer obtaining title to and possession of the property (effectively, the end of the foreclosure process) for properties conveyed to FHA from July 2010 through December 2017. We divided the range of default and foreclosure periods into four quartiles. For each quartile, we calculated the length of the conveyance process at the median and at the 25th and 75th percentiles. We then compared these statistics across quartiles. 
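The percentile comparisons described above can be sketched with the Python standard library. The figures below are invented for illustration; they are not FHA data, and the function name is our own.

```python
import statistics

# Sketch of the cohort analysis described above, with invented numbers.
# For each annual cohort of conveyed properties we compute the 25th,
# 50th (median), and 75th percentile conveyance times in days.

def cohort_percentiles(days_by_year):
    result = {}
    for year, days in days_by_year.items():
        q1, median, q3 = statistics.quantiles(days, n=4)
        result[year] = (q1, median, q3)
    return result

sample = {
    2016: [60, 70, 80, 90, 100, 110, 120, 130],
    2017: [80, 95, 110, 125, 140, 155, 170, 185],
}
percentiles = cohort_percentiles(sample)  # e.g., the 2016 median is 95.0
```

Comparing these triples across years is the cross-year comparison the report describes; the same grouping approach applies to the quartile analysis of default and foreclosure periods.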
To determine the extent to which mortgage servicers and FHA contractors met their respective time requirements for the conveyance process, we identified relevant time requirements in Department of Housing and Urban Development (HUD) regulations and policies. We also reviewed the performance work statements for FHA's mortgagee compliance manager (compliance contractor) and field service managers (maintenance contractors) to identify the contractors' time requirements for the conveyance process. For servicers and contractors, we selected key time requirements for which electronic data were available, including the following:

- Thirty calendar days from acquiring title and possession of a property, plus the length of any approved time extension, to convey property to FHA.
- Forty-five days from conveying a property to FHA, plus the length of any approved time extension, to provide FHA with title evidence.
- Five business days to review each overallowable request submitted by a servicer.
- Five business days to review the sufficiency of title documentation submitted by the servicer.
- Five business days to determine whether a servicer can convey a property with surchargeable damage.
- Five business days to approve or deny a servicer's conveyance or title extension request.
- Two calendar days, plus an additional 24 hours, to complete and upload the HUD Property Inspection Report from the date the property was assigned.
- Five calendar days to complete a Property Condition Report from the date the Property Inspection Report was completed.

For each property, we calculated the number of days it took servicers and contractors to complete these required steps in the conveyance process and compared that number to the maximum number of days FHA allows for each step. For each annual cohort of properties conveyed in 2010–2017, we calculated the 25th, 50th (median), and 75th percentile time frames for completing the steps.
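Several of the requirements above are expressed in business days rather than calendar days, so checking compliance requires excluding weekends. A minimal sketch of such a check, using NumPy's business-day calendar and hypothetical dates (GAO's actual data fields and holiday handling are not specified here):

```python
import numpy as np

# Hypothetical dates for one overallowable request: the day the servicer
# submitted it and the day the compliance contractor completed review.
submitted = np.datetime64("2017-03-06")   # a Monday
reviewed = np.datetime64("2017-03-13")    # the following Monday

# Business days elapsed between submission and review; np.busday_count
# counts weekdays in the half-open interval [submitted, reviewed).
elapsed = np.busday_count(submitted, reviewed)

# Compare against the 5-business-day maximum for overallowable reviews.
met_requirement = elapsed <= 5
print(int(elapsed), bool(met_requirement))
```

Calendar-day requirements (such as the 30-day conveyance window) reduce to simple date subtraction, with any approved extension added to the allowed maximum before the comparison.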
We also calculated the percentage of properties for which servicers or FHA contractors met their time requirements for each step. We reviewed FHA’s procedures for monitoring the performance of compliance and maintenance contractors for conveyed properties. We also reviewed examples of contractor quality control plans and FHA quality control reports and scorecards used to assess the contractors’ compliance with minimum time frames and other requirements. Additionally, we interviewed FHA officials about the contractors’ compliance with their respective time requirements and what steps FHA took, if any, to address any noncompliance. We assessed the reliability of data from the Single Family Insurance System–Claims Subsystem and the asset disposition system by reviewing FHA documentation about the data systems and data elements. We interviewed FHA staff and contractors knowledgeable about the data to discuss interpretations of data fields and trends we observed in our analysis. We also conducted electronic testing, including checks for outliers, missing data fields, and erroneous values. We excluded from each analysis properties with missing or erroneous information in the applicable data fields. We also excluded from each analysis properties for which the applicable data fields were five absolute deviations from the median (which we consider to be outliers). In addition, we excluded certain properties conveyed in calendar years 2010–2017 that had conveyance dates that were out of sequence. For example, we excluded properties for which the date a servicer obtained possession and good and marketable title occurred after the date the servicer conveyed the property to FHA. The number of properties we excluded in any analysis using these methods represents no more than 3.2 percent of properties conveyed from July 2010 through December 2017, which we consider to be insignificant when compared to the remaining properties included in the analysis. 
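One common way to implement the "five absolute deviations from the median" exclusion rule described above is with the median absolute deviation (MAD). The following sketch assumes that interpretation; the durations are invented for illustration and the exact statistic GAO used is not spelled out in the text.

```python
import numpy as np

# Hypothetical conveyance durations (in days) for a set of properties.
days = np.array([60, 65, 70, 72, 75, 80, 400])

# Median absolute deviation: the median of each value's absolute
# distance from the overall median.
median = np.median(days)
mad = np.median(np.abs(days - median))

# Exclude values more than five absolute deviations from the median,
# mirroring the outlier rule described in the methodology.
kept = days[np.abs(days - median) <= 5 * mad]
print(kept)
```

In this example the 400-day value falls outside five deviations and is dropped, while the remaining durations are retained for analysis.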
After taking these steps, we believe that the data were sufficiently reliable for purposes of characterizing the overall length of FHA property conveyances and compliance with key time requirements.

Changes to Conveyance Process and Ongoing Challenges

To determine what changes FHA made to the conveyance process in recent years, we reviewed relevant FHA regulations, policies, and procedures issued in 2010 or later, including FHA's February 2016 mortgagee letter (a written instruction to FHA-approved lenders) on conveyances. We compared the requirements and property preservation and protection allowance amounts in the mortgagee letter to those in the prior mortgagee letter. We also reviewed FHA documentation on changes to the asset disposition system, FHA's data system for conveyed properties, and on FHA's preconveyance inspection pilot program that began in 2017. We interviewed FHA officials on the reasons for the recent changes and on the extent to which they reviewed any analogous requirements and property preservation and protection allowances of other mortgage entities (including Fannie Mae, Freddie Mac, and the Department of Veterans Affairs) when making the updates. To supplement our review of FHA's recent changes to property preservation and protection allowances, we used the asset disposition system data to analyze changes in the frequency and number of servicer overallowable requests since the 2016 mortgagee letter went into effect. To examine what, if any, challenges exist with the conveyance process, we randomly selected a nongeneralizable sample of 20 large- and medium-sized bank and nonbank servicers of FHA-insured mortgages. We defined large-sized servicers as those with 100,000 or more active FHA-insured mortgages as of December 31, 2017, and medium-sized servicers as those with 10,000–99,999 active FHA-insured mortgages as of that date. These servicers accounted for more than one-third of active FHA-insured mortgages as of December 31, 2017.
We conducted semistructured interviews with the servicers about their experience with FHA property conveyances, including the sufficiency of FHA's policies and procedures, time lines, and allowance amounts and any challenges they experienced with the process. We also discussed the extent to which the 2016 mortgagee letter assisted or hindered their conveyance efforts. In addition, we spoke with two national industry groups representing mortgage servicers about recent changes and any challenges their members experienced with the conveyance process. We reviewed FHA's requirements for servicers and contractors on conveyed properties. In cases in which servicers stated that FHA's policies and procedures on particular conveyance requirements were insufficient or unclear, we examined the 2016 mortgagee letter, HUD's single-family housing policy handbook, and frequently asked questions on HUD's website to determine whether they addressed the topics and were sufficiently thorough to be applied to properties with different circumstances. We assessed whether the policies and procedures were consistent with federal internal control standards for external communication. In particular, we examined whether the policies and procedures communicated necessary quality information to achieve program objectives and whether FHA had evaluated appropriate methods to communicate them. Where applicable, we also compared FHA's policies and procedures to features of Fannie Mae's guide for servicers on how to preserve and protect vacant properties. We also assessed FHA's policies and procedures on reconveyances and alternatives to reconveyance against federal internal control standards for designing control activities.
To review the preconveyance inspection pilot that FHA began in 2017 and any challenges with the pilot, we interviewed the three participating servicers about FHA’s implementation of the pilot and the extent to which preconveyance inspections reduced the likelihood of reconveyance or addressed other challenges. We spoke with FHA National Servicing Center officials about their monitoring of pilot outcomes and their plans for assessing results. We assessed FHA’s planning and evaluation efforts against key components of evaluation design from GAO’s guide for designing evaluations. Furthermore, we interviewed a number of individuals and entities about challenges they experienced in implementing their property conveyance responsibilities, the sufficiency of FHA’s policies and procedures, and methods for assessing contractor performance. These included FHA headquarters and National Servicing Center officials with responsibilities for aspects of the conveyance process; FHA’s compliance contractor; and Real Estate-Owned Division officials, the largest maintenance contractor, and staff responsible for overseeing the maintenance contractors at each of FHA’s four homeownership centers. Finally, we visited eight recently conveyed or reconveyed properties in the Baltimore, Maryland, and Atlanta, Georgia, areas to observe property conditions, learn about the maintenance contractors’ property inspection processes, and understand challenges in documenting and addressing condition issues. We chose these locations to provide some geographic dispersion and coverage of different FHA homeownership centers. The properties were selected by Philadelphia and Atlanta homeownership center staff based on our request to visit a mix of recently conveyed and reconveyed properties in metropolitan areas and time periods that we chose. As a result, the conditions we observed are illustrative rather than representative of all conveyed properties. 
We conducted this performance audit from September 2017 to June 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Comments from the Department of Housing and Urban Development

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Steve Westley (Assistant Director); Melissa Kornblau (Analyst in Charge); Rachel Batkins; William Chatlos; Emily Flores; John McGrail; Samuel Portnow; Barbara Roesmann; Tovah Rom; and Jena Sinkfield made key contributions to this report.
Why GAO Did This Study

FHA insures hundreds of thousands of single-family home mortgages annually. When an FHA borrower defaults, the mortgage servicer in many cases forecloses, obtains title to the property, and conveys ownership to FHA. FHA inspects the property, acquires it if it complies with condition standards and title requirements, and lists the property for sale. FHA may reconvey noncompliant properties to servicers. During conveyance, homes may sit vacant for months and can deteriorate, contributing to neighborhood blight. Senate Report 114-243 included a provision for GAO to review FHA's effectiveness and efficiency in reaching determinations of conveyable condition. This report discusses (1) timelines for FHA property conveyances in 2010–2017 and whether servicers and FHA met time requirements, and (2) changes FHA has made to the conveyance process in recent years and any ongoing process challenges. GAO analyzed FHA data on properties conveyed in 2010–2017, reviewed FHA's policies and procedures, and interviewed 20 randomly selected mortgage servicers accounting for more than one-third of active FHA mortgages.

What GAO Found

From July 2010 through December 2017, the process for conveying foreclosed properties to the Federal Housing Administration (FHA) took a median of 70 days. The conveyance process—which GAO measured from a mortgage servicer's obtaining title to and possession of the property to FHA's marketing of the property—involves servicers making repairs, transferring ownership, and filing a mortgage insurance claim, and FHA inspecting the property. FHA attributes the length of time to complete the process partly to foreclosure processing delays that left properties vulnerable to damage and vandalism, which can increase the time servicers need to bring properties into conveyance condition.
Property damage also may increase the likelihood that FHA will reconvey a property (transfer it to the servicer) for not complying with condition standards, further extending the conveyance process. For about 55 percent of properties conveyed in July 2010–December 2017, servicers exceeded the required time to obtain title and possession of a foreclosed property and convey it to FHA. For 2017 alone, the corresponding figure was 72 percent. As a result, servicers were not eligible to be reimbursed for all repairs and interest expenses for those properties when filing insurance claims with FHA. In recent years, FHA changed aspects of its conveyance process to help address some of the execution challenges the agency and servicers have faced. For example, in 2016, FHA enhanced its data system for conveyed properties to reduce manual administrative processing. FHA also began a pilot program in 2017 to decrease the number of properties FHA reconveys by inspecting properties before conveyance. However, GAO found shortcomings in FHA policies, procedures, and assessment efforts that are inconsistent with federal evaluation criteria and internal control standards, as follows:

- FHA's policies and procedures lack detail that could help servicers and contractors determine if a property is in compliance, and the agency has not examined alternative methods of communicating this information. Fifteen of the 20 servicers GAO interviewed said existing policies, procedures, and communications often were not clear or specific enough to address property conditions or repair decisions they encountered. FHA also relies on brief written policies to explain standards and makes limited or no use of other methods, such as photographs or industry-wide calls.
- FHA has not provided written direction on when to use alternatives to reconveyance—such as agreements under which servicers make repairs or repay FHA for any repair costs after conveyance—for properties not meeting condition standards.
In the absence of such direction, FHA may not be addressing these properties in the most consistent or effective manner. FHA has not developed a plan to assess the outcome of its inspection pilot. Without rigorous assessment, FHA risks making decisions about the future of the pilot based on inaccurate or incomplete information. Addressing these shortcomings could help improve the efficiency and effectiveness of FHA's property conveyance process.

What GAO Recommends

GAO recommends that FHA (1) enhance the content and communication of policies and procedures on conveyance condition, (2) provide written direction on alternatives to reconveyance, and (3) develop a plan to assess a pilot program. FHA agreed with the second and third recommendations and did not agree or disagree with the first.
Background

According to the Senate Committee on Homeland Security and Governmental Affairs report concerning PMIAA, the purpose of PMIAA is to improve program and project management in certain larger federal agencies. The act includes requirements for OMB, OPM, and the 24 agencies listed in the CFO Act. PMIAA requires OMB's Deputy Director for Management or the designee to, among other things:

- adopt government-wide standards, policies, and guidelines for program and project management for executive agencies;
- engage with the private sector to identify best practices in program and project management that would improve federal program and project management;
- conduct portfolio reviews of agency programs not less than annually, to assess the quality and effectiveness of program management, in coordination with Program Management Improvement Officers (PMIO);
- establish a 5-year strategic plan for program and project management; and
- conduct portfolio reviews of programs on our High-Risk List.

The two types of portfolio reviews required by PMIAA—the portfolio reviews of agency programs and the portfolio reviews of programs identified as high risk on our High-Risk List—are separate requirements. For purposes of this report, we define programs, projects, and portfolios consistent with how those terms are defined in OMB's PMIAA strategic plan. OMB defines program as the functions or activities which agencies are authorized and funded by statute to administer and enforce. Programs typically involve broad objectives. OMB views projects as temporary efforts with defined scopes to create products or services to improve the efficient and effective implementation of programs. Because programs are comprised of projects, programs inherently address the projects subsumed within them. Consequently, our discussions of programs throughout this report also pertain to projects.
Finally, OMB defines portfolios as organized groupings of programs whose coordination in implementation enables agencies to achieve their objectives. The act also established the Program Management Policy Council (PMPC), an interagency forum for improving agency practices related to program management. OMB's Deputy Director for Management chairs the PMPC. The PMPC responsibilities include advising OMB on the development and applicability of government-wide standards for program management transparency. Furthermore, the act requires PMPC members "to discuss topics of importance to the workforce," such as workforce development needs and major challenges across agencies in managing programs. As chair of the PMPC, OMB's Deputy Director is required to preside at meetings, determine agendas, direct the work, and establish and direct its subgroups, as appropriate. The act requires the PMPC to meet not less than twice per fiscal year. Additionally, OPM's Director, in consultation with OMB's Director, is required to issue regulations that:

- identify key skills and competencies needed for a program and a project manager in an agency;
- establish a new job series, or update and improve an existing job series, for program and project management within agencies; and
- establish a new career path for program and project managers within an agency.

Overall, OPM's role in implementing PMIAA is to establish a new job series or update an existing job series by providing the occupational standards that agencies will need to develop a trained and competent workforce with the program and project management experience, knowledge, and expertise to solve management challenges and support agency decision-making. The act requires OPM to establish new—or revise existing—occupational standards in consultation with OMB. Occupational standards are included within OPM's classification guidance, which is provided to agencies to assist in classifying positions.
This guidance helps agencies to determine the proper occupational series, position title, and grade of each position. The act requires OMB’s Deputy Director of Management to oversee implementation of the standards, policies, and guidelines for executive agencies. OMB implemented some PMIAA requirements using existing processes put in place to implement GPRAMA. We previously reported that GPRAMA provides important tools that can help decision makers address challenges facing the federal government, such as the annual reviews of progress on agency strategic objectives conducted during strategic reviews and the implementation of federal government priority goals. Federal government priority goals, also known as cross-agency priority (CAP) goals, are written by OMB in partnership with agencies. GPRAMA requires OMB to coordinate with agencies to develop CAP goals, which are 4-year outcome-oriented goals covering a number of complex or high-risk management and mission issues. For example, OMB directed agencies to align their noninformation technology major acquisition programs with relevant strategic objectives so they could assess progress for the PMIAA required program portfolio reviews concurrent with required GPRAMA strategic reviews. GPRAMA also requires OMB to present a program inventory of all federal programs by making information available about each federal program on a website. Finally, GPRAMA required OMB to establish a number of CAP goals intended to cover areas where increased cross-agency collaboration is needed to improve progress towards shared, complex policy or management objectives across the federal government. OMB uses CAP goals to address issues outlined in the President’s Management Agenda. For example, OMB wrote a CAP goal to improve management of major acquisitions across the government which complements PMIAA and its required activities. 
PMIAA requires the OMB Deputy Director, as chair of the PMPC, to conduct portfolio reviews of programs from our High-Risk List. The PMPC is also required to review programs we identify as high risk and to make recommendations for actions to be taken by the Deputy Director for Management of OMB or a designee. See figure 1 below for an overview of roles and responsibilities of OMB, OPM, the PMPC, and agencies.

OMB, OPM, and CFO Act Agencies Have Taken Steps to Implement PMIAA, but Some Program and Project Management Capacity Limitations Exist

Agencies responsible for PMIAA implementation have taken steps to complete some requirements, but actions remain to fully implement the law (see Table 1).

OMB's PMIAA Strategic Plan Incorporated Leading Practices

OMB met the PMIAA requirement "to establish a five-year strategic plan for program and project management." The plan OMB developed details three key strategies to implement PMIAA: (1) coordinated governance, (2) regular OMB and agency engagement reviews, and (3) strengthening program management capacity to build a capable program management workforce. The three strategies focus on areas such as clarifying key roles and responsibilities, identifying principles-based standards, and identifying plans for enhancing workforce capabilities. The plan describes the roles and functions of the PMIOs, the PMPC, and the requirements of the agency implementation plans. It outlines a phased approach for implementing PMIAA actions with milestones occurring throughout the 5-year period. We found that OMB followed several strategic planning leading practices in the creation of the PMIAA strategic plan. First, the plan incorporates general goals and objectives for agencies' implementation of PMIAA with three corresponding strategies explaining OMB's overall approach. OMB followed a second leading practice by gathering input from stakeholders.
OMB staff told us they solicited input from congressional staff and members of external organizations like the Federal Program and Project Management Community of Practice (FedPM CoP). Agencies' staff also confirmed to us that they had input into the OMB plan. Third, OMB demonstrated interagency collaboration in its efforts to establish and lead the PMPC and its efforts to work with the FedPM CoP to address any issues identified by agencies. Finally, the plan included a timeline with quarterly milestones to track completion of PMIAA's activities and to gauge progress toward achieving the desired results of PMIAA.

OMB's Program and Project Management Standards Are Less Detailed Compared with Accepted Program and Project Management Standards

PMIAA required OMB to establish standards and policies for executive agencies consistent with widely accepted standards for program and project management planning and delivery. A consistent set of government-wide program management standards and policies is important because it helps ensure that agencies utilize key program management practices to improve the outcomes of government programs. OMB published in June 2018 a set of standards for program and project management as part of OMB's PMIAA strategic plan. OMB's strategic plan directed agencies to apply these 15 standards to internal management processes for planning, implementing, and reviewing the performance of programs and activities. OMB staff told us they decided to develop this set of standards rather than adopt an existing set of consensus-based standards, such as the widely accepted standards for program and project management from the Project Management Institute (PMI). PMI is a not-for-profit association that provides global standards for, among other things, project and program management.
The PMI standards are utilized worldwide and provide guidance on how to manage various aspects of projects, programs, and portfolios and are approved by the American National Standards Institute (ANSI). OMB staff told us that they decided not to specifically adopt the PMI standards because they wanted to allow agencies to use a range of standards that agencies had already developed and were using to manage their programs, such as standards developed in-house by NASA for their space flight programs. OMB further directed CFO Act agencies that the 15 standards and application of them should be incorporated or aligned with existing agency-specific program management policies and practices, and tailored to reflect program characteristics. OMB staff told us that they chose the approach to provide more principle-based standards, as opposed to specific standards, to be flexible enough for a range of government agencies to apply them. OMB's standards are similar in definition to PMI standards, but they are less detailed by comparison. Our analysis of OMB's standards shows that OMB uses similar definitions for all 10 of PMI's program management standards and nine out of 10 of PMI's project management standards, such as risk management and change management. However, OMB program and project management standards are less detailed when compared to PMI's standards in the following ways:

- OMB standards do not provide a minimum threshold against which agencies can gauge to what extent they have met each standard. PMI's Standard for Program Management provides the definition of a standard but also what components are required for an entity to confirm that the standard has been met. For example, meeting the program financial management standard in PMI requires a financial management plan to be developed, along with its related activities. This plan allows entities applying the standard to confirm whether they have met the standard for program financial management or not.
OMB’s standards do not distinguish between how the standards apply differently to programs and projects while PMI has separate detailed standards for program management and for project management. The project management standards from PMI provide details on how the standards apply to more granular tasks, such as establishing a quality management or communication plan for a specific project. OMB’s standards do not distinguish between how the standards relate to each other during a program or project while PMI’s Standard for Program Management details how project standards help build on each other during a program. For example, a program scope management plan is needed to determine the type of schedule management planning that is necessary to accomplish the delivery of the program’s outputs and benefits. OMB provides minimal guidance on how standards apply differently across the life cycle of a program or project while PMI’s Standard for Program Management provides information detailing when a specific standard should be utilized in different ways during the life cycle of a program. For example, in the beginning of a program, risk management should be planned and an initial risk assessment created. Later, during program implementation, risk management tasks focus on monitoring, analyzing risk, and responding to risk. If the standards had the additional detail, it would be possible to determine if agencies are meeting them and properly applying them to programs and projects. OMB Does Not Have a Detailed Governance Structure for Further Developing Program Management Standards Our work on the Digital Accountability and Transparency Act of 2014 (DATA Act) standards has emphasized the necessity for a governance structure with a clear set of policies and procedures for developing and maintaining standards over time that are consistent with leading practices. 
A governance structure is important because it helps ensure that the standards are developed, maintained, adjusted, and monitored over time. The DATA Act is similar to PMIAA because PMIAA gives OMB responsibility to develop standards for program management, and the DATA Act gives OMB and the Department of the Treasury responsibility for establishing data standards for the reporting of federal funds. These standards specify the data to be reported under the DATA Act and define and describe what is to be included in each element with the aim of ensuring that information will be consistent and comparable. Several governance models exist that could inform OMB's efforts to help ensure that the standards are developed, maintained, adjusted, and monitored over time. These models define governance as an institutionalized system of decision rights and accountabilities for planning, overseeing, and managing standards. Many of these models promote having a common set of key practices that include establishing clear policies and procedures for developing, managing, and enforcing standards. A common set of key practices endorsed by standards setting organizations including the National Institute of Standards and Technology, ANSI, and the American Institute of Certified Public Accountants recommend that governance structures should include the key practices shown in the text box below.

Key Practices for Governance Structures
1. Delineating roles and responsibilities for decision-making and accountability, including roles and responsibilities for stakeholder input on key decisions.
2. Obtaining input from stakeholders and involving them in key decisions, as appropriate.
3. Developing and approving standards.
4. Making decisions about changes to existing standards and resolving conflicts related to the application of standards.
5. Managing, controlling, monitoring, and enforcing consistent application of standards.
OMB staff told us they did not have any additional documentation about the governance structure used to develop the program management standards and how OMB will further develop and maintain them. We compared available information about OMB’s governance structure for developing and maintaining program management standards to the five key practices on governance structures and found OMB’s governance structure is incomplete in each of the five key practices. OMB has not delineated roles and responsibilities for decision-making and accountability, including responsibilities for stakeholder input on key decisions. OMB’s strategic plan notes that one role of the PMPC is to help further develop the program management standards. However, OMB has not provided information on how roles and responsibilities will be assigned to continue developing standards in the future. Without clearly delineated roles and responsibilities, there is a risk of confusion which could impede action and accountability for future improvements to program management standards. Further, having clearly delineated roles and responsibilities is particularly important during periods of transition when administrations change. OMB has an incomplete plan for how it will obtain input from stakeholders and involve them in decision-making. OMB received input from stakeholders on the standards it developed in 2018, though the strategic plan states that standards will be further developed with the PMPC in the fourth quarter of fiscal year 2020. However, the strategic plan does not give details on how the PMPC and others will further develop standards. Without robust and comprehensive outreach to individuals who will use or otherwise be affected by the standards, the opportunity to learn from stakeholder experience and perspectives, or anyone who will use or otherwise be affected by the standards, may be diminished. OMB has an incomplete process for developing and approving program management standards. 
OMB developed and approved the existing standards by obtaining stakeholder input and releasing the approved standards in its strategic plan. However, the strategic plan does not document how that process was structured or how it will function in the future. Thus, it is unclear how OMB plans to further develop the standards and what responsibilities and resources will be required from OMB, the PMPC, and agencies under the leadership of the agency PMIOs.

OMB has not defined a process for making decisions about changes to existing standards or described how conflicts related to the application of standards would be resolved. Therefore, it is unclear if or how the standards will be periodically reassessed and updated as circumstances change and leading practices in program and project management are identified. Also, lack of consensus on standards and conflict over how to use them can lead to weakened acceptance and inconsistent application.

OMB has not defined a process for managing, controlling, monitoring, and enforcing consistent application of standards. OMB has not developed or directed any type of review or oversight process to determine the adequacy of existing or newly developed standards agencies use to manage programs. Having such a process could help agencies achieve a balance between consistent application of standards and flexible application to account for differences in programs, agency missions, and other factors. However, OMB staff told us that they consider the PMIAA program portfolio review process a way to help monitor and enforce program standards, as it gives them a view into how each agency is applying standards for its particular portfolio of programs. Additionally, OMB has given agencies flexibility in using existing agency standards and flexibility to adopt or develop new ones.
Without a review mechanism, OMB lacks reasonable assurance that agencies' efforts to use existing standards or develop new ones will align with government-wide efforts to improve program and project management. Also, establishing an approach to monitoring agencies' efforts would help identify opportunities to improve program management standards. Without a governance structure for the program standards, the potential exists that standards will develop in an ad hoc manner, may be applied inconsistently or not at all, and may not be updated to reflect new developments in program management. Further, having a governance structure for managing efforts going forward better positions OMB to sustain progress on program standards as they change over time.

OMB Leveraged Existing Performance Reviews, but Reviews Are Limited to Major Acquisitions

PMIAA requires agencies and OMB to regularly review portfolios of programs to assess the quality and effectiveness of program management and identify opportunities for performance improvement. To conduct these portfolio reviews, OMB Circular A-11 notes that agencies and OMB are to use a set of broadly applicable program management principles, practices, and standards associated with successful program outcomes, in addition to more specific standards based on the type of program under review. As a way to help agencies acclimate to the requirements of PMIAA, OMB leveraged two components of the GPRA Modernization Act of 2010 (GPRAMA): the strategic review and a cross-agency priority (CAP) goal. OMB guidance stated that agencies' portfolio reviews of programs would be conducted and integrated to the extent practical with strategic reviews. Furthermore, OMB staff told us that the implementation of PMIAA and the CAP goal for improving management of major acquisitions (CAP Goal 11) shared complementary goals and strategies.
For example, the CAP Goal 11 action plan includes the routine monitoring of federal program management progress. Consequently, OMB staff said they decided that the first PMIAA program portfolio reviews would focus on major acquisitions.

Excerpt from OMB Cross-agency Priority Goal 11 from 2018 President's Management Agenda: Improve Management of Major Acquisitions

Federal agencies will ensure that contracts supporting transformative and priority projects meet or beat delivery schedules, provide exceptional customer service, and achieve savings or cost avoidance for the taxpayer.

The Challenge: Major acquisitions, which vary in size by agency but often exceed $50 million, account for approximately one-third of annual federal spending on contracts. These large contracts frequently support projects meant to transform areas of critical need. Yet major acquisitions often fail to achieve their goals because many federal managers lack the program management and acquisition skills required to successfully manage and integrate large and complex acquisitions into their projects. These shortcomings are compounded by complex acquisition rules that reward compliance over creativity and results.

The Strategies: Agencies will pursue three strategies: 1) strengthen program management capabilities in the acquisition workforce; 2) use modern and innovative acquisition flexibilities; and 3) track investments using portfolio, program, and project management principles.

OMB Reported Lessons Learned from Pilot, but Did Not Follow Most Leading Practices for Pilot Design

In 2018, OMB conducted a pilot project involving program portfolio reviews focused on noninformation technology (IT) major acquisition programs. According to OMB staff, the pilot project gave agencies the opportunity to complete "dry runs" for the PMIAA-required portfolio reviews and to provide lessons learned in anticipation of the fiscal year 2019 portfolio reviews.
OMB planned for the results from the pilot to provide information for internal dialogue and decision-making about subsequent portfolio reviews. Further, according to OMB's strategic plan, the purpose of the pilot was (1) to determine how well agency program portfolios of non-IT major acquisitions were performing throughout the life cycle of the investment using a set of standards and practices, and (2) to refine the process of coordinating program portfolio reviews as a component of OMB agency strategic reviews.

For the pilot, OMB staff directed agencies to assess the cost, schedule, and performance of agency-selected acquisition portfolios. One result from the pilot was that agencies demonstrated a range of maturity in their abilities to collect data for these required program portfolio measures from their various departments and program types. OMB staff told us pilot agencies found it easier to compile data on major construction projects than on service contracts. Consequently, an agency doing many of these projects might be more advanced than an agency whose major acquisitions focus on services. Department of Veterans Affairs (VA) staff shared lessons learned from their participation in pilot portfolio reviews, as seen in the text box below. OMB staff said that they determined that the portfolio review process worked sufficiently well for the pilot agencies and continued their planned strategy of focusing solely on non-IT major acquisition programs for fiscal year 2019 portfolio reviews.

Example of Department of Veterans Affairs (VA) Lessons Learned from Pilot Portfolio Review

The VA looked at the effectiveness of portfolio management during the Office of Management and Budget noninformation technology major acquisition pilot portfolio review by focusing on the agency's adherence to best practices in assessing project performance and progress. VA officials said this pilot informed their decision-making and was successful in the following ways:
1. The pilot helped VA determine logical ways to manage a portfolio by showing what data were helpful to make impactful decisions.
2. VA learned how best to display the data on cost, schedule, scope, and quality of outcomes on a dashboard to make it accessible and comparable across the agency.
3. VA learned that it needs to collect better quality data so that project management principles can be instituted and aligned across the agency.

A well-developed and documented pilot program can help ensure that agency assessments produce information needed to make effective program and policy decisions. Such a process enhances the quality, credibility, and usefulness of evaluations, in addition to helping to ensure the effective use of time and resources. We have identified five leading practices that, taken together, form a framework for effective pilot design, as seen in the text box below.

OMB fulfilled the first leading practice of establishing objectives in its design of the PMIAA pilot program portfolio review. OMB's PMIAA strategic plan and the CAP Goal 11 Action Plan stated the objectives of the pilot. In addition to the two objectives listed in the PMIAA strategic plan, the CAP Goal 11 Action Plan lists seven pilot objectives, as seen in the text box below.

PMIAA Pilot Program Portfolio Review Objectives
1. Perform portfolio management preparation activities
2. Identify first portfolio of major acquisitions
3. Align portfolio with agency strategic goals
4. Collect performance data for each item in the portfolio
5.

OMB officials said that they did not structure the pilot to follow the remaining four leading practices for effective pilot design. However, OMB said that it learned that the pilot agencies demonstrated several program management capabilities. They also learned that it would be important to tailor portfolio reviews to the agency and the program to account for significant differences in the types of acquisitions and the level of program management maturity.
Despite identifying lessons learned from its pilot program portfolio review, by not fully following leading practices OMB may have missed opportunities to make additional improvements for fiscal year 2019 portfolio reviews. Going forward, as OMB expands the portfolio reviews to other types of program areas beyond non-IT major acquisitions, it has the opportunity to develop and learn from additional pilots. Although OMB staff have not yet determined if they will do additional pilots for program management in the future, they could decide to pilot the portfolio reviews of grants that they plan to initiate in fiscal year 2020.

OMB Limited Its Portfolio Reviews to Non-IT Major Acquisition Programs

For fiscal year 2019, OMB directed all agencies to select portfolios of non-IT acquisition programs and align them with relevant strategic objectives as part of their internal agency strategic review processes. In spring 2019, OMB expected agencies to discuss one to two of these major-acquisition portfolio reviews during their strategic reviews with OMB. OMB expected agencies to track the cost, schedule, and performance of their selected major acquisition programs. However, OMB reports that not all agency program portfolio reviews were completed because OMB was behind in scheduling the reviews due to the partial government shutdown.

According to documents we reviewed and OMB staff, in October 2019 OMB completed agency program portfolio reviews with 10 agencies: the Departments of Commerce, Homeland Security, Housing and Urban Development, Labor, and Transportation; the General Services Administration; the Social Security Administration; NASA; the National Science Foundation; and the U.S. Agency for International Development. OMB staff also told us that they held preparatory meetings with agencies to set expectations for future portfolio reviews.
OMB reported that these one-on-one meetings were held with 12 agencies as of October 2019 to discuss their initial portfolio structures and other transformative initiatives. Portfolio reviews in 2020 are to expand in scope to include grants, and agencies also will continue acquisition portfolio reviews as part of their routine management processes. However, OMB has not yet identified other program areas, such as research and development or benefit programs, to be included in future portfolio reviews.

Standards for Internal Control in the Federal Government states that effective information and communication are vital for an entity to achieve its objectives. Specifically, management should externally communicate necessary quality information to achieve its objectives. Increasing communication to agencies about specific program areas, portfolio review procedures, and expectations beyond 2020 could help ensure continued progress in implementing PMIAA more broadly. Furthermore, communicating such procedures with specific time frames could help agencies better direct their efforts to improve the portfolio review processes.

OMB Has Not Fully Implemented an Inventory of All Federal Programs

GPRAMA requires OMB to make a list of all federal programs identified by agencies publicly available on a central government-wide website. The program inventory is a critical tool to help decision makers better identify and manage programs across the federal government. Among other things, completing the program inventory would provide agencies and Congress with a comprehensive list of programs, making clear how many programs agencies are managing and how those programs relate to strategic objectives and portfolios of programs at each agency. Having a program inventory could also help ensure a match between the number of agency programs and needed program manager resources. Agencies continue to struggle with challenges defining their programs.
Officials from three of the five selected agencies we spoke with told us that they have not yet identified all of their programs and projects. In our first report on the program inventory, in October 2014, we noted similar issues. For example, agencies were not using the same program definition approach across their subcomponents or offices, which limited comparability of their own programs. We made eight recommendations in that report to the Director of OMB to update relevant guidance to help develop a more coherent picture of all federal programs and to better ensure information is useful for decision makers. As of October 2019, OMB had not taken any actions in response to the eight recommendations. While OMB has provided a timetable for action in its June 2019 A-11 guidance, the timetable alone does not address those recommendations.

In September 2017, we made two recommendations to OMB to make progress on the federal program inventory. First, we recommended that OMB consider using a systematic approach for the program inventory, such as the one we developed from principles of information architecture. Information architecture, a discipline focused on organizing and structuring information, offers an approach for developing a program inventory to support a variety of uses, including increased transparency for federal programs. OMB staff told us that they considered our information architecture approach and noted that a structured information architecture format is used on USASpending.gov. However, OMB staff told us they had not yet determined how the information architecture format of USASpending.gov, which is focused on spending data, could be used to meet additional information reporting requirements and our past recommendations related to the inventory.
Second, we recommended that OMB revise and publicly issue OMB guidance, through an update to its Circular A-11, a memorandum, or other means, to provide time frames and associated milestones for implementing the federal program inventory. As mentioned above, OMB did provide a timetable, but it does not include milestones. According to the timetable, beginning with the 2021 budget cycle, agencies' program activities will be used for the inventory's program-level reporting requirements. This will allow OMB and agencies to present program-level spending data by leveraging what is reported on USASpending.gov as required by the DATA Act. However, OMB's guidance does not cover other inventory information reporting requirements or the actions we recommended in October 2014. We will continue to monitor progress.

We continue to believe it is important for OMB to implement our program inventory recommendations. Such an inventory could be a critical tool to help decision makers better identify and manage fragmentation, overlap, and duplication across the federal government. Additionally, fully taking action on these recommendations would assist agencies in identifying programs, better preparing for future PMIAA portfolio reviews, and matching resources to agencies' program management needs.

Further, OMB developed three different definitions for what constitutes a "program" or "program activity" that it provided to agencies in its PMIAA, GPRAMA, and DATA Act guidance, respectively. OMB developed each of these definitions independently and in response to three different statutory requirements. OMB staff told us that these three requirements differ in their legislative intent. The definitions and their associated guidance are in the table below.
OMB has not reconciled these overlapping, yet divergent, definitions of what constitutes a "program" or "program activity." According to Standards for Internal Control in the Federal Government, management should ensure that specific terms are fully and clearly set forth so they can be easily understood. Standards for Internal Control in the Federal Government also states that management should design processes that use entities' objectives and related risks to identify the information requirements needed to achieve objectives and address risks. OMB has defined what constitutes a "program" or "program activity" under each of PMIAA, GPRAMA, and the DATA Act, but its three different program definitions and approaches to determining what is a "program" could cause confusion for agencies. Agency officials from the Department of Energy told us they are already experiencing confusion over how to appropriately apply the applicable program definition to identify their programs for PMIAA. Agency officials from Treasury told us that different definitions for programs could contribute to confusion as they work to implement PMIAA within the department. The inconsistent approaches may increase the burden on agencies as they work to identify, maintain, and report on three sets of differently defined programs.

Conversely, clarifying the definitions could help agencies and OMB identify synergies across the three laws and increase transparency. For example, explaining how the term "program" or "program activity" is used across the three statutory definitions and developing a crosswalk to show similarities and differences could provide more clarity for agencies. Spending and performance data could then be aligned with agency strategic goals and monitored, reviewed, and reported in a streamlined manner.

OPM Meeting Workforce Requirements of PMIAA

OPM followed PMIAA requirements to create policy and guidance.
Specifically, according to documents we reviewed, OPM (1) worked with subject matter experts to develop program and project management skills and competencies, (2) updated the program management 0340 job series and created guidance for identifying project management positions, (3) plans to release a career path for program and project managers by the end of calendar year 2019, and (4) plans to create a unique job identifier code that can be used to pinpoint program and project managers in any job series. These efforts will form the foundation agencies need to strengthen resource and talent management.

Competency modeling. Since enactment of PMIAA, OPM has identified the skills and competencies that will be required for program and project managers. According to documents we reviewed, OPM met with subject matter experts and human capital staff in agencies to help identify the skills needed to develop the competency model. OPM also conducted a literature review of prior competency studies and industry practices to help identify and support program and project management competencies. In addition, OPM drew from Project Management Institute resources, such as the Project Management Body of Knowledge and the Standard for Program Management, in identifying its competencies.

The resulting competencies are in two categories: general and technical. General competencies focus on interpersonal or general on-the-job skills such as teamwork and problem solving. Technical competencies more narrowly focus on particular skills needed to run programs and projects, such as risk management and cost-benefit analysis. OPM documents stated that agencies will need to determine the applicability of these competencies to positions within their agency. Agencies must determine whether staff meet the competencies; if not, staff will have the opportunity to develop them or must move to a different job series, according to OPM staff.
OPM staff also said additional competency assessment steps are needed to finalize the model. Agencies will be given time to consider the competency model. In addition, OPM will use subject matter expert panels to further develop the model, according to OPM documents we reviewed.

Updated job series. To implement the job series requirements in PMIAA, OPM conducted an occupational study and determined that pre-existing classification policy was sufficient for classifying program management work, rather than creating a new job series for program management positions, according to OPM staff. Before OPM updated the program management 0340 job series for PMIAA, the classification standard was underdeveloped: it did not contain competencies describing the qualifications staff were required to meet as program managers. In May 2019, OPM released the updated job series classification guidance designed to assist agencies in determining which employees fit in the job series. OPM also released guidance for classifying project managers to help agencies specifically identify project managers in any occupational job series. According to the memorandum sent by the Acting Director of OPM to agencies with the OPM classification guidance, agencies are required to apply the policy and guidance to covered positions by May 1, 2020.

Career path. OPM staff told us that they have developed a career path for program and project managers that is currently in internal review. They said that the value of the updated career path is that it will highlight the training and skills needed to progress in a program management career.
According to the presentation given by OPM at the April 2019 PMPC meeting, the career path will contain, among other things: (1) a career progression outline for employees to move among and across jobs in program and project management, (2) help for employees and supervisors in planning and sequencing appropriate career training and development for each general and technical competency, and (3) a list of common degrees and certifications completed by program and project managers. Staff told us they plan to release the program and project management career path for agency comment by the end of calendar year 2019.

Job identifier for program managers and project managers. Because program and project managers are found in job series other than the 0340 program management series, OPM is developing a job identifier code that can be attached to any job series for the purpose of identifying program and project managers. OPM staff told us that classifying a program manager to the 0340 series means that the position does not have a specialization. If a position requires specialized expertise, it would be classified to a specialized occupational series but would also have a program management job identifier code. For example, because a grants manager is also a program manager, "grants manager (program management)" would be his or her official title. Project management positions will also use a job identifier to identify project managers in any occupational series. The job identifier will allow employees with a specialization to be designated program and project managers while still maintaining their original career path. OPM staff told us they plan to complete this project in 2020.

OPM and Agencies in Early Stages of Workforce Planning

Our analysis of OPM Enterprise Human Resources Integration data shows that the 0340 job series included about 15,000 employees across all 24 CFO Act agencies in fiscal year 2018.
However, OPM reported that not all employees in this job series are actually program and project managers; conversely, many program and project managers work outside of the 0340 job series. Selected agencies reported varying degrees of difficulty identifying program and project managers. For example, NASA staff reported that they were able to identify almost all of their program and project managers. In contrast, the Department of the Treasury reported that it faces challenges identifying the number of program and project managers outside of the program management job series, as this would require a resource-intensive manual effort, made more challenging by the agency's large, complex, and decentralized structure. Department of Energy (DOE) staff said they have not completed the count of their program managers. The Departments of Commerce and Veterans Affairs also report that they do not know the number of program and project managers in their departments. Department of Commerce staff told us that they cannot accurately identify the number of program and project managers until they can use the job identifier that they expect OPM to release in 2020. Further, Commerce officials told us they are continuing to work to identify program managers and engaged the Project Management Institute (PMI) to request a list of those within Commerce who have the Project Management Professional (PMP) certification. PMI provided Commerce details about the number of PMPs at Commerce but declined to share the names of the individuals with the PMP certification.

In OPM's 2018 Federal Workforce Priorities report, OPM recognizes that not all agencies have adequately analyzed workload demands, staffing levels, or current and future skills needs, all of which are steps in workforce planning. As part of the OPM human capital framework, agencies are required to develop a human capital operating plan, which is an agency's human capital implementation document.
These plans are to describe how agencies will execute the human capital strategies needed to implement the agency's strategic plan and Annual Performance Plan (APP). Agencies are also required to include program-specific strategies (e.g., hiring, closing skills gaps) in the APPs as appropriate. Effective workforce planning can help agencies focus on determining how many program and project managers they have, how many they may need, what skills gaps exist, and what training and other strategies can help address those gaps. OPM's workforce planning model comprises five steps:
1. Set strategic direction;
2. Analyze workforce, identify skills gaps, and conduct workforce analysis;
3. Develop action plan;
4. Implement action plans; and
5. Monitor, evaluate, and revise.

The discussion below describes how OPM and agencies are working to strengthen the program management workforce in the context of OPM's workforce planning model. Some activities may span more than one phase of workforce planning.

Set strategic direction. The PMIAA strategic plan establishes direction for agencies to build program management capacity and capability with its third strategy, "Strengthening Program Management Capacity to Build a Capable Program Management Workforce." Setting strategic direction also involves linking work activities to the objectives of a strategic plan. OPM's planned activities, such as updating the classification standards and creating a job identifier, are critical to executing this strategy so agencies can identify their workforce and build program management capacity through training, career paths, and mentorship opportunities.

Analyze workforce, identify skills gaps, and conduct workforce analysis. OPM and agencies are in the early stages of identifying who their program and project managers are and what human capital strategies might be needed to address agencies' needs.
Documents we reviewed showed that OPM also worked with the Chief Human Capital Officers Council, the Chief Administrative Officers Council, and others to develop competencies. These competencies provide a foundation for the subsequent assessment of program and project manager skills.

Develop action plan. In their PMIAA implementation plans, some agencies have identified available training and possible recruitment and hiring strategies. In OPM's model, agencies need to complete their workforce analysis before they can develop their action plans.

Implement plan. This step depends on agencies developing action plans. However, OPM and agencies have already started to develop staff in the absence of plans. For example, OPM is working with agencies to identify program management training matching desired competencies to be placed in an online training repository that will be accessible to all agencies. OPM staff told us that agencies would provide the trainings from their learning management systems and offer them for interagency access. OPM is developing this training and development repository, which will house agency-owned courses and also identify mentors in project and program management, according to OPM staff. OPM will house the repository on its training and development policy wiki at https://www.opm.gov/wiki/training/index.aspx. Each PMIO is also to establish a website with agency-specific program management tools and resources.

Additionally, OMB recognized that the Federal Program and Project Management Community of Practice (FedPM CoP), scaled up from a community of practice housed in DOE, could be an important partner in supporting PMIAA implementation. As of April 2019, more than 1,000 managers had joined the FedPM CoP, as indicated in its briefing to the PMPC. The FedPM CoP has identified several project management-related documents that are now available on the PMIAA portal.
To further develop program managers, OMB is working with agencies to improve mentoring and recognition efforts. To improve mentoring government-wide, OMB reports that PMIOs will work with agency chief human capital offices to develop and implement a mentoring strategy for agency program managers. OMB also plans to take existing mentorship programs established in more functionally aligned management fields (e.g., information technology, acquisition) and expand them to include a broader range of management career paths. To improve recognition efforts in acquisitions, the Chief Acquisition Officers Council plans to establish an annual award to recognize federal program manager excellence.

Monitor, evaluate, and revise. This step cannot begin until agencies develop and implement their workforce action plans. As agencies begin to monitor their implementation of these plans, they will need to determine whether any skills gaps exist in the program and project manager occupational series. OPM regulations require agencies to describe in their human capital operating plans agency-specific skills and competency gaps that must be closed through the use of agency-selected human capital strategies. Agencies must also have policies and programs that monitor and address skills gaps within government-wide and agency-specific mission-critical occupations. OPM has not yet determined whether program and project management occupations are experiencing mission-critical skills gaps across the government, and OPM staff noted that agencies are not specifically required to report program and project manager skills gaps in their annual human capital operating plans.

OMB and OPM Completed Some PMIAA Requirements Late

OMB and OPM both missed statutory deadlines to fulfill requirements in PMIAA. In June 2018, OMB issued the required PMIAA agency implementation guidance in the PMIAA strategic plan, 6 months after the statutory deadline of December 2017.
According to OMB staff, this delay was due to their own research project to (1) build sufficient knowledge in program and project management and (2) increase stakeholder support in Congress and with agencies for its approach. Specifically, OMB met with experts from PMI, academics, consulting firms, federal chief senior-level officer (CXO) councils, and other agency officials to increase its own understanding of program and project management principles. OMB staff told us that they used the collected information to draft initial guidance, which they then shared with congressional stakeholders and executive branch agency officials to obtain feedback and incorporate changes. OMB staff also told us that it was a transition year from one administration to another, and this transition was an additional factor in delaying completion of the guidance. None of the selected agencies' staff identified an impact from the delayed guidance. OPM officials told us they missed the statutory deadline to complete their required activities after the issuance of OMB guidance. The delayed release of the policy and guidance was due to the partial government shutdown from December 22, 2018, to January 25, 2019, along with a 3-month delay caused by OPM's own internal review and clearance process. As a result, OPM released the key skills and competencies needed for program and project management on April 5, 2019, and the classification guidance for the program manager job series 0340 and project manager interpretative guidance on May 2, 2019. OPM officials told us that agencies have 1 year from the date of issuance to comment on any language in the guidance. None of the selected agencies' staff identified an impact from OPM's delays, although one agency expressed concern that the pace of its efforts to identify program and project managers is dependent on OPM completing the job identifier. Figure 2 shows the delays in releasing OMB and OPM guidance.
PMPC Has Met Three Times and CFO Act Agencies Have Started to Implement PMIAA Requirements OMB officials established the PMPC in 2018 and fulfilled requirements that it meet at least twice per year. By September 2018, the 24 CFO Act agencies had all appointed a PMIO, and the PMPC met three times: in September 2018, April 2019, and September 2019. Selected agenda items for these PMPC meetings included: status updates on OPM completing program and project manager competencies, job series, and career path; breakout sessions to discuss PMIAA implementation approaches; and discussion of PMPC priorities and focus for 2020. At the April 2019 PMPC meeting, for example, staff from the Department of Veterans Affairs and the National Science Foundation shared some best practices, such as how to improve the tracking of performance for portfolios, programs, and projects. According to OMB documents we reviewed, OMB plans to: convene the PMPC in the first quarter of each calendar year to prepare for upcoming OMB and agency strategic review meetings; use the PMPC meeting in the third quarter of the calendar year to review findings and outcomes from the most recent strategic review; update program and project management standards based on its findings and feedback at the PMPC meeting in the fourth quarter of 2020; use the PMPC to develop revised strategies, initiatives, and priorities to be reflected in an updated 5-year strategic plan at the PMPC meeting in the fourth quarter of 2021; and use the PMPC to focus on improving our high-risk areas at some future point. At the September 2019 PMPC meeting, OMB informed agencies of PMIAA implementation resources placed on OMB's online portal for PMIAA and discussed OMB's observations on portfolio reviews completed in 2019. One observation was the need for better visualization of performance data. In addition, OPM updated the PMPC on the status of its required PMIAA workforce efforts.
The PMPC decided its primary focus for the year 2020 should be on the third strategy of the PMIAA strategic plan to build a capable workforce. Officials from the selected agencies that we interviewed provided us some suggestions on how OMB can improve the functionality of the PMPC. Table 3 illustrates the range of these suggestions: The PMPC met twice in 2019, as required by PMIAA, but has not established any working groups to help execute its significant responsibilities to share leading practices, develop standards, and help improve the workforce. Agencies have taken initial steps to incorporate requirements into program efforts. According to OMB guidance, agencies were to report in implementation plans how they are institutionalizing PMIAA efforts—especially PMIO responsibilities—into existing program and project management practices. OMB requested that agencies include 10 specific elements in their implementation plans, such as: identification of the agency PMIO, identification of major acquisition portfolios, and strategies and actions for enhancing training and improving recruitment and retention of program and project managers. These plans were due to OMB by November 30, 2018. We reviewed PMIAA draft implementation plans for 22 of the 24 CFO Act agencies and determined the extent to which agencies included the required elements in their plans. At its PMPC meeting in April 2019, OMB reported that a majority of agencies only partially included OMB requirements in their draft implementation plans. OMB staff told us they have not directed agencies to address missing requirements, nor have they required agencies to finalize their draft implementation plans. They told us that they view the implementation plans as an opportunity for each agency to engage with OMB and discuss how they will implement PMIAA. OMB staff told us that their view is that if implementation plans provide value to agencies, they may stay in draft form and do not need to be final.
Overall, draft implementation plans for these agencies provided some but not all information required to fully meet the directives from OMB. Our analysis of the plans shows that, on average, agencies fully met six out of 10 requirements for their implementation plans. For example, almost all agencies met the requirements for identifying the PMIO (21 out of 22). However, 11 out of 22 agencies did not provide complete information on major acquisition portfolios. Table 4 shows how agencies' implementation plans varied in meeting the requirements. Seven of 24 agencies reported in our questionnaire that they were creating either task forces or new or restructured offices to direct PMIAA implementation within their agencies. For example, DOE reported establishing a new office to support its PMIO. The Department of the Treasury and NASA reported creating an intra-agency cross-functional core team to discuss and design PMIAA implementation strategies. OPM reported establishing an enterprise program management office to drive the standardization of program and project management processes internally. Agencies selected PMIOs in existing leadership positions to leverage resources and agency processes to implement PMIAA. All agency PMIOs reported having additional leadership responsibilities beyond their PMIO roles. OMB documentation and information gathered from CFO Act agencies show: every PMIO has at least one additional CXO role within its agency; thirty-eight percent of PMIOs have an additional performance management role; eight of 24 PMIOs have an additional budgetary role; and four of the 24 PMIOs have an explicit additional program or acquisition role. OMB Has Taken Limited Steps to Address Areas on Our High-Risk List In the past, we have met with senior management officials from OMB and applicable agencies to discuss where additional management attention could be beneficial to addressing high-risk areas identified on our High-Risk List.
We also reported that these trilateral meetings, which began in 2007 and pre-dated PMIAA's 2016 enactment, have continued across administrations and have been critical for progress that has been made in addressing high-risk areas. According to PMIAA, OMB's Deputy Director of Management is to conduct annual portfolio reviews of the most at-risk agency programs, as designated by our High-Risk List. OMB officials view the trilateral meetings as their method for holding the portfolio review meetings for high-risk areas as required under PMIAA. Our High-Risk List is composed of programs as well as functions and operations. Consequently, in our assessment of OMB's implementation of PMIAA, we consider programs, functions, and operations on our High-Risk List as relevant for OMB's portfolio reviews of areas on the list. OMB used three strategies intended to meet PMIAA's high-risk requirements. OMB (1) expanded its strategic reviews in 2018 to include a review of some high-risk areas, (2) continued to use the long-standing trilateral meetings to review high-risk areas with agency leaders and with us, and (3) held ad hoc meetings with agencies outside of the strategic review and trilateral meetings. OMB Discussed High-Risk Areas with Some Agencies during Strategic Review Meetings In preparation for the 2018 strategic reviews, OMB issued Memorandum M-18-15 directing agencies to provide several items in advance of their strategic review meetings with OMB. Requested items included updates from agencies on areas identified on our High-Risk List in which agencies disagreed with our recommendations or faced implementation barriers preventing progress. These materials were to be discussed during strategic review meetings. Thirteen CFO Act agencies reported submitting high-risk updates to OMB prior to these meetings, and eight agencies reported discussing their high-risk areas with OMB during the meetings. OMB guidance from June 2019, communicated in OMB's Circular No.
A-11, did not include the statement from Memorandum M-18-15 that high-risk areas would be discussed during strategic review meetings. OMB staff felt that a broader approach could yield better results for addressing high-risk areas. Guidance in Circular No. A-11 maintained that agencies should submit updates about high-risk programs to OMB for the Deputy Director's high-risk portfolio review, but it did not specify what should comprise agency updates about high-risk programs. Also, OMB staff told us that they requested that agencies provide topics for discussion at strategic review meetings, and that agencies could provide agenda items related to our High-Risk List. OMB staff said they addressed only a few of the high-risk issues during strategic reviews, both during the review process and the strategic review meetings. Discussions about high-risk issues during strategic review meetings generally focused on government-wide high-risk areas, if relevant, such as "Ensuring the Cybersecurity of the Nation" and "Improving the Management of Information Technology (IT) Acquisitions and Operations." However, OMB and agencies also discussed high-risk areas in instances when agencies provided strategic review meeting agenda topics related to our High-Risk List. For example, Treasury staff told us they spoke with OMB this year about high-risk areas as part of the strategic review process. Treasury is directly responsible for the Enforcement of Tax Laws high-risk area and shares responsibility with other agencies for other high-risk areas, such as the government-wide areas on cybersecurity and strategic human capital. OMB Held Trilateral Meetings on Five of 35 High-Risk Areas OMB has held a limited number of trilateral meetings with agencies and us about high-risk areas as part of the high-risk portfolio reviews.
Between March 2018 and October 2019, OMB addressed the following five high-risk areas in trilateral meetings with applicable agencies and us: 2020 Decennial Census, Managing Federal Real Property, Government-wide Personnel Security Clearance Process, Ensuring the Cybersecurity of the Nation, and NASA Acquisition Management. OMB has not held meetings to address the remaining 30 high-risk areas on our High-Risk List. OMB staff told us they plan to hold additional meetings in the next year but that they are unlikely to be able to schedule all remaining meetings within our 2-year cycle for updating the High-Risk List. OMB staff said that it is sometimes challenging to coordinate and convene trilateral meetings, given the high-ranking officials who must attend and the difficulty of finding available times across their schedules. OMB staff also told us that they plan to meet with agencies for all high-risk areas eventually, but that they prioritize meetings aligned with our priority areas and the President's Management Agenda. We evaluate progress made on high-risk areas every 2 years to determine if new areas should be added to our High-Risk List and if areas on the list should be removed due to progress to address the risks. Top leadership commitment is one of the five criteria we use to assess whether progress is being made to address and ultimately remove areas from our High-Risk List. As we have reported in our March 2019 High-Risk Series report, leadership commitment is the critical element for initiating and sustaining progress, and leaders provide needed support and accountability for managing risks. Leadership commitment is vital if agencies are to adequately address high-risk areas, and trilateral meetings have been critical in focusing leadership attention in the past. Because OMB officials have met on only five of 35 high-risk areas, it remains to be seen if they will meet on all high-risk areas in the future.
Convening the trilateral meetings on all high-risk areas in the 2-year reporting cycle would better position OMB to enhance the leadership commitment needed to make greater progress on the remaining high-risk areas. OMB Occasionally Discussed High-Risk Areas with Some Agencies throughout 2018 and 2019 beyond Trilateral and Strategic Review Meetings Staff from OMB said that they sometimes have briefings related to agencies' high-risk areas separate from the annual strategic review meetings and high-risk trilateral meetings. These meetings happen on an ad hoc basis and are typically initiated by agency officials. Officials from some of our selected agencies corroborated that the discussion at the strategic review meetings and trilateral meetings is not the full extent of OMB's interaction with agencies about high-risk areas throughout the year. For example, VA officials said that high-risk areas are frequently agenda items in meetings with OMB. NASA officials said they spoke with OMB about NASA's high-risk areas after submitting material as part of the strategic review process. Program Management Policy Council Has Not Made Recommendations to Address High-Risk Areas The PMPC, chaired by the Deputy Director for Management of OMB, did not address our High-Risk List during its three meetings, nor did it make recommendations to OMB about addressing high-risk areas, as required. The PMPC meetings have lasted 60 to 90 minutes each, and the High-Risk List has not appeared as an item on any of the PMPC meeting agendas. OMB staff said PMPC meetings at this point in PMIAA implementation primarily act as forums in which agencies can share program management practices. Rather than focusing meeting time on high-risk areas, OMB staff asserted that the best use of the PMPC is primarily as a forum for agencies to share program and project management best practices.
Consequently, the PMPC has not satisfied all PMPC requirements as delineated in PMIAA, including the requirement that high-risk areas be addressed. OMB Identified Measures to Assess Results of Portfolio Reviews, but Has Been Limited by Agency Data Quality OMB Established a Prototype Dashboard to Help Track Portfolio Program Management Measures of Cost, Schedule, and Performance OMB created a dashboard to identify measures of cost, schedule, and performance that agencies should use to track their selected non-IT major acquisition programs for the first PMIAA program portfolio review. OMB partnered with the General Services Administration to complete a prototype of a dashboard to show cost, schedule, and performance data from each program or project within a portfolio of programs. The dashboard also provides a short description of each program or project and its strategic alignment to the agency's relevant strategic goal. Staff from OMB's Office of Federal Procurement Policy said the dashboard could provide them with some visibility and improved transparency for major acquisition programs. According to the PMIAA strategic plan, the dashboard would display the agency portfolio and summarize performance for each item in the portfolio, similar to the portfolio reviews of IT programs required by the Federal Information Technology Acquisition Reform Act. Initially, according to OMB, it plans to request summary information for each portfolio and restrict the dashboard to authorized government employees. Moving forward, OMB staff said that as the portfolio management process matures, a portion of the dashboard may be made available to the public, similar to the IT dashboard. OMB staff told us they are in conversation with agencies about how to overcome difficulties in collecting data for the dashboard. According to OMB, the results from the pilot portfolio review showed that agencies experienced challenges with collecting high-quality data.
OMB staff said there will likely be more metrics for large construction projects because management practices for them are more mature than for other types of programs, such as services. OMB is working with agencies to see how they can retrieve cost, schedule, and performance data that could provide early warning indicators of potential problems with programs. Agencies Plan a Range of Ways to Measure PMIAA Agencies reported in our questionnaire that they are considering various ways to measure implementation of PMIAA. A little more than half of the agencies responding to our PMIAA questionnaire provided ideas on how to measure implementation of PMIAA, such as tracking completion of their identified PMIAA milestones, developing their own survey as a baseline measure, or using their agency implementation plan outcomes to measure results. Six agencies' questionnaire responses noted that they are planning to use existing metrics to assess program performance, either through internal processes or their annual strategic review process. For example, Treasury plans to focus in the near term on tracking completion of milestones of PMIAA implementation, such as major program and project alignment to department strategic objectives, development of an information-sharing site for program and project management resources, and workforce capabilities, among other things. VA anticipates developing outcome measures associated with successful program execution and is leveraging measures from existing plans, such as its Acquisition Human Capital Plan. OMB staff told us that they have no plans to identify measures to assess outcomes of PMIAA because it is too early and agencies are in the early stages of implementation.
Rather than tracking anything specific, OMB staff told us that they look at whether agencies' PMIOs are engaged, if agencies are using training material and mentorship programs, the involvement of chief senior-level officers, and if there is funding in the budget for program management certificate programs. However, OMB has not identified specific measures to track any of these areas. In collaboration with OMB, VA developed a program management maturity model survey to identify capability gaps, obtain insights, and enable benchmarking of program management capabilities. It surveyed agencies' level of maturity on a range of program management capabilities, such as talent management, governance, and portfolio management. Maturity assessment surveys can be useful tools for measuring progress to develop capacity in areas such as program management, according to subject matter specialists. Periodically measuring maturity can help agencies institutionalize continuous assessment and improvement. PMI also supports using such tools to identify trends that can help pinpoint actions needed and opportunities to learn from more mature organizations. We have found that ongoing performance measurement can serve as an early warning system to management and as a vehicle for improving accountability to the public. We have previously reported that providing baseline and trend data can help to assess an agency's performance more fully because the data show progress over time and decision makers can use historical data to assess performance. As OMB and agencies move forward with PMIAA implementation, it will be critical to measure how agencies are maturing or building their capacity in the areas of program and project management.
Such measures could include showing how OMB's program management standards and principles are integrated into agencies' programs and policies, the improvement of data quality used to track agency program outcomes in the program portfolio reviews, and improvement in program manager skills. Although not required by PMIAA, it is a good practice for OMB and agencies to consider ways to measure the effects of the act. Without establishing such measures to assess PMIAA outcomes, it will be challenging to gauge how agencies are making progress, to identify trends, or to help agencies improve data quality. Conclusions The program and project management standards OMB developed are less detailed than accepted standards and are missing several elements that would have made them more useful. For example, the OMB standards do not provide a minimum threshold against which agencies can gauge the extent to which they have met each standard. Further, OMB's current governance structure is insufficient for further developing and maintaining program management standards. Although OMB received input from stakeholders to develop the standards and plans to update them in partnership with the PMPC in 2020, OMB does not have a governance structure that assigns roles and responsibilities to further develop, approve, maintain, or monitor standards. Having such a governance structure for managing efforts going forward could help sustain the program standards as they change over time. OMB did not follow most leading practices for designing pilots and may have missed opportunities to make improvements for fiscal year 2019 portfolio reviews. OMB has not determined if it plans to conduct additional pilot efforts. Going forward, as OMB expands the portfolio reviews to other types of program areas beyond non-IT major acquisitions, it has the opportunity to develop and learn from additional pilots.
Although OMB staff have not yet determined if they will do additional pilots for program management in the future, they could decide to pilot the portfolio reviews of grants that they plan to initiate in fiscal year 2020. OMB has not identified other program areas beyond non-IT major acquisitions and grants to be included in future portfolio reviews. Communicating to agencies about specific program areas, portfolio review procedures, time frames, and expectations beyond 2020 could help agencies better direct their efforts to improve the portfolio review processes and help ensure continued progress to implement PMIAA more broadly. As of October 2019, OMB had not taken any actions in response to the recommendations in our September 2017 report and has not yet fully established an inventory of federal programs. Such an inventory of programs could be a critical tool to help agency officials identify and manage programs across the federal government. Furthermore, if OMB were to fully implement our recommendations and complete the required inventory of federal programs, it would help agencies match resources to their program management needs and prepare for future PMIAA portfolio reviews. In addition, OMB provides three different definitions for a "program" in its guidance for PMIAA, GPRAMA, and the DATA Act. Having different definitions of what constitutes a program could lead to confusion among agencies. It could also cause increased burden on agencies as they work to identify, maintain, and report on three sets of differently defined programs. Meetings between OMB, relevant agencies, and us have been critical for past progress on high-risk areas. However, OMB has held these trilateral meetings to address only five of 35 high-risk areas since it began implementing PMIAA. These meetings could both demonstrate and improve the commitment of agency leadership to high-risk areas across the federal government.
As we have reported, leadership commitment is a key tenet in agencies' ability to address high-risk areas. Without convening trilateral meetings on each high-risk area, OMB might miss opportunities to make progress toward addressing high-risk areas by improving leadership commitment to addressing them. The PMPC did not address our High-Risk List during its meetings, nor has it made recommendations to OMB about high-risk areas. The High-Risk List has not appeared as an item on any of the PMPC meeting agendas. OMB staff asserted that the best use of the PMPC's limited meeting time is as a forum for agencies to share program management best practices. In choosing to focus on program management practices rather than high-risk areas, the PMPC has not satisfied all PMPC requirements as delineated in PMIAA. Having measures to assess outcomes of PMIAA, such as establishing a baseline of information on programs or collecting trend data, can help OMB ensure that it has established a framework to effectively guide and assess PMIAA's implementation. Assessment measures would also allow OMB to better target efforts to improve project management and the capabilities of managers. Recommendations for Executive Action We are making a total of eight recommendations to OMB. Specifically: The Deputy Director for Management of OMB, in conjunction with the PMPC, should develop program and project management standards to include (1) a minimum threshold for determining the extent to which agencies have met the standards, (2) how standards apply differently at the program and project levels, (3) how standards are interrelated to work in a synchronized way, and (4) how standards should be applied across the life cycle of a program or project.
(Recommendation 1) The Deputy Director for Management of OMB, in conjunction with the PMPC, should create a governance structure to further develop and maintain program and project management standards that fully aligns with key practices for governance structures. (Recommendation 2) The Deputy Director for Management of OMB should, when expanding PMIAA to additional program types, design pilot efforts to follow leading practices so that OMB can optimize its efforts to improve and broaden portfolio reviews across a full range of program types. (Recommendation 3) The Deputy Director for Management of OMB should communicate to agencies the program areas, time frames, and expectations pertinent to future annual program portfolio reviews. (Recommendation 4) The Deputy Director for Management of OMB should clarify for agencies how the different definitions of a "program" relate to each other in OMB guidance. (Recommendation 5) The Deputy Director for Management of OMB should convene trilateral meetings between OMB, relevant agencies, and us for addressing all high-risk areas during each 2-year high-risk cycle. (Recommendation 6) The Deputy Director for Management of OMB, in conjunction with the PMPC, should ensure PMPC meeting agendas include time for discussing high-risk areas during meetings and provide time for the PMPC to make recommendations to OMB about addressing high-risk areas. (Recommendation 7) The Deputy Director for Management of OMB, in conjunction with the PMPC, should establish measures to assess outcomes of PMIAA, such as establishing a baseline of information on programs or collecting trend data. (Recommendation 8) Agency Comments and Our Evaluation We provided a draft of this product for comment to OMB, OPM, and the five selected agencies.
OMB neither agreed nor disagreed with the recommendations and stated that it would take them into consideration when making future updates to its policies and guidance for agencies for improving program and service delivery. In addition, OMB, OPM, Commerce, NASA, Treasury, and Veterans Affairs provided technical comments, which we incorporated as appropriate. Energy responded that it had no comments. We are sending copies of this report to congressional committees, the Acting Director of OMB, the Director of OPM, the Secretaries of the Departments of Commerce, Energy, Treasury, and Veterans Affairs, the Administrator of NASA, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6806 or Jonesy@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. Appendix I: Objective, Scope, and Methodology This report examines: (1) the steps taken by the Office of Management and Budget (OMB), the Office of Personnel Management (OPM), and the Chief Financial Officers Act of 1990 (CFO Act) agencies to implement the Program Management Improvement Accountability Act (PMIAA); (2) the extent to which OMB is using or planning to use portfolio reviews required in PMIAA to address issues on our High-Risk List; and (3) the extent to which OMB provided methods for agencies to assess the results of PMIAA. To examine the steps taken by OMB, OPM, and CFO Act agencies to implement PMIAA, we reviewed agency documents, designed and disseminated a questionnaire to the 24 CFO Act agencies, and analyzed their responses. We also selected five CFO Act agencies as case studies.
We reviewed documentation from OMB, including the OMB PMIAA strategic plan and actions taken, as well as Cross-Agency Priority goal 11 quarterly reports and screenshots of PMIAA documents on the OMB MAX portal. We interviewed OMB staff to gain insight into their approach to implementing PMIAA. To examine the OMB standards for program and project management, we used criteria from the Project Management Institute (PMI), specifically the Standard for Program Management and the Project Management Body of Knowledge. In addition, we reviewed documentation from OPM regarding its PMIAA plans and documents for the update of the 0340 job series. We further analyzed Enterprise Human Resources Integration (EHRI) data from fiscal year 2018 from OPM to identify employees in the current program management 0340 occupational series. We also interviewed OPM officials regarding their role in implementing PMIAA. We interviewed outside subject matter specialists to provide their views on federal program and project management. Specifically, we met with staff from PMI and Professor Janet Weiss from the University of Michigan—who had conducted a study on how to improve federal program management—as she had been recommended by the Congressional Research Service, OMB, and the IBM Center for the Business of Government. To examine the steps agencies had taken, we requested PMIAA implementation plans from all 24 CFO Act agencies. CFO Act agencies were to submit PMIAA implementation plans to OMB by November 30, 2018. We collected implementation plans between November 29, 2018, and April 16, 2019. We received 22 out of 24 implementation plans. We did not review plans from the Department of Health and Human Services or the Environmental Protection Agency because they had not completed their plans at the time of our review. Two analysts independently reviewed separate implementation plans. These reviews were then verified by another analyst.
Implementation plans were evaluated on whether they fully met, partially met, or did not meet the 10 requirements provided in the OMB implementation guidance, such as how the major acquisition portfolios aligned to relevant strategic objectives, or whether the agency had existing training for program and project managers. We also disseminated a questionnaire to all CFO Act agencies to collect information on PMIAA implementation. This questionnaire was pre-tested by two CFO Act agencies and two members of the Federal Program and Project Management Community of Practice and revised for clarity. The questionnaire was sent to all 24 CFO Act agencies on February 4, 2019, and responses were collected between February 11 and April 22, 2019. All 24 agencies responded to the questionnaire. Agency officials were asked questions on:
1. the steps their agency has taken to implement PMIAA,
2. the challenges their agency faces in implementing PMIAA,
3. efforts to address high-risk issues, and
4. plans to measure PMIAA outcomes, if any.
We selected five agencies for case studies and analyzed further documentation and interviewed agency officials to provide illustrative examples of PMIAA implementation at the agency level. We assessed whether:
- agencies had responsibility for a program, function, or operation on our 2019 High-Risk List;
- OMB considered them further along in PMIAA implementation compared to other agencies;
- the agency reported it was selected for the OMB pilot of noninformation technology acquisition program portfolio reviews;
- agency officials reported actions taken to direct internal program management training or workforce development in their questionnaire responses or OMB-required implementation plans; and
- agency officials reported any actions to implement PMIAA beyond the requirements listed in the OMB PMIAA strategic plan.
To achieve a range of PMIAA experiences, we selected five agencies that met varying numbers of the criteria.
The Department of Commerce was chosen because all four selection criteria were met, the Department of Energy met three, the Department of Veterans Affairs met two, and the Department of the Treasury and the National Aeronautics and Space Administration each met one. We interviewed and reviewed documents from each of the agencies. We asked questions about steps agencies were taking and their interactions with OMB and OPM to help them implement PMIAA. We also asked these agencies to suggest any ways in which OMB and OPM could improve implementation. To assess the OMB PMIAA strategic plan, we reviewed leading practices on strategic planning from our body of work. We also considered testimonial evidence from OMB staff. Specifically, we reviewed prior reports on leading strategic planning practices and requirements for agencies to use in strategic planning. We selected relevant criteria from the Government Performance and Results Act of 1993 (GPRA) and the GPRA Modernization Act that not only pertained to agency strategic plans but also were relevant as general strategic planning principles. Specifically, we selected criteria from the following categories: (1) mission statement; (2) general goals and objectives; (3) strategies for accomplishing goals and objectives; (4) input from stakeholders; (5) interagency collaboration; and (6) milestones and metrics to gauge progress. To determine the extent to which each leading practice was included in the strategic plan, we assessed documentary evidence from the PMIAA strategic plan and testimonial evidence from OMB staff as defined below: A practice was categorized as fully met if the evidence fulfilled all aspects of the definition. A practice was categorized as partially met if the evidence fulfilled some, but not all, aspects of the definition, or if the evidence was judged to fulfill the general meaning of the definition while not technically meeting it fully.
A practice was categorized as not met if no evidence was found relevant to the criterion, or if the evidence did not fulfill any aspects of the definition. In addition, we reviewed documents from and interviewed selected agencies on what measures OMB was developing for evaluating PMIAA implementation. We also asked these agency officials what kinds of evaluative measures would be useful, from their perspective, for monitoring the successful implementation of PMIAA. In addition, we assessed the pilot of the required PMIAA program portfolio reviews against the five leading practices we identified from our work on designing pilots. We determined that the design fully met the criteria when we saw evidence that all aspects of a leading practice were met. When we were unable to assess whether all aspects of a leading practice were met without additional information, we determined that the design partially met the criteria. Finally, when we saw no evidence of a leading practice, we determined that the criteria were not met. To examine OMB’s standards for program and project management, we selected two sets of program and project management criteria from PMI. PMI standards are generally recognized as leading practices for program and project management. To select program management standards, we identified 10 PMI program management activities. To select project management standards, we identified 10 project management knowledge areas. Further, PMI’s leading practices were selected to explain how program and project management standards apply differently, and how both sets of standards relate to the life cycle of a program or project. We then compared the definitions of these 10 PMI program and 10 PMI project management standards to the definitions of OMB’s initial 15 program and project standards released for PMIAA implementation.
In addition, OMB’s initial standards were compared to PMI leading practices that distinguish the relationship between programs and projects and leading practices on applying standards across the life cycle of a program or project. We also applied leading practices we identified from our previous work on data governance standards to assess the governance process OMB used to develop, maintain, and monitor program management standards. Our past work identified common key practices for establishing effective data governance structures. That work examined a range of organizations, including domestic and international standards-setting organizations, industry groups or associations, and federal agencies, to ensure we had comprehensive perspectives on data governance key practices across several domains. Two analysts compared the five key practices on data governance structures to OMB plans and documented practices. We assessed the reliability of OPM’s EHRI data through electronic testing to identify missing data, out-of-range values, and logical inconsistencies for employees classified in the 0340 series. We believe the EHRI data we used are sufficiently reliable for the purpose of this report. To examine the extent to which OMB is using or planning to use portfolio reviews to address our High-Risk List, we reviewed documentation from OMB and the 24 CFO Act agencies. As part of our questionnaire, we asked the 24 CFO Act agencies to provide any of our High-Risk List summary and detailed analyses that the agencies were required to submit to OMB as part of the 2018 strategic review process. We analyzed this information to determine the extent to which agencies provided information to OMB during their 2018 strategic review process.
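The kind of "electronic testing" described above (checks for missing data, out-of-range values, and logical inconsistencies in personnel records) can be illustrated with a short sketch. This is not GAO's actual test code; the field names and validation rules are hypothetical, not the real EHRI schema.

```python
# Illustrative data-reliability checks on personnel-style records:
# missing data, out-of-range values, and logical inconsistencies.
# Field names and rules are hypothetical, not the actual EHRI schema.

from datetime import date

def check_record(rec):
    """Return a list of data-reliability problems found in one record."""
    problems = []
    # Missing data: required fields must be present and non-empty.
    for field in ("employee_id", "occupational_series", "hire_date"):
        if not rec.get(field):
            problems.append(f"missing {field}")
    # Out-of-range value: this extract should contain only the 0340
    # program management series.
    if rec.get("occupational_series") not in (None, "", "0340"):
        problems.append("unexpected occupational series")
    # Logical inconsistency: a separation date cannot precede hire.
    hire, sep = rec.get("hire_date"), rec.get("separation_date")
    if hire and sep and sep < hire:
        problems.append("separation_date before hire_date")
    return problems

records = [
    {"employee_id": "A1", "occupational_series": "0340",
     "hire_date": date(2015, 3, 1), "separation_date": None},
    {"employee_id": "A2", "occupational_series": "0340",
     "hire_date": date(2018, 6, 1), "separation_date": date(2016, 1, 1)},
    {"employee_id": "", "occupational_series": "0301",
     "hire_date": date(2017, 1, 9), "separation_date": None},
]

# A clean record yields an empty list; flawed records are flagged.
flagged = [check_record(r) for r in records]
```

In practice, a clean record (the first) produces no findings, while the second and third records are flagged for an inconsistent separation date and for missing or out-of-range fields, respectively.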
We also selected criteria from the Standards for Internal Control in the Federal Government on maintaining documentation of the internal control system to assess the steps OMB had taken related to its responsibilities for conducting high-risk portfolio reviews and managing the Program Management Policy Council. Specifically, we selected the information and communication component, which states that management should externally communicate the necessary quality information that an entity needs to achieve its objectives. We conducted this performance audit from June 2018 to December 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: GAO Contact and Staff Acknowledgments

GAO Contact

Yvonne D. Jones, (202) 512-6806, or jonesy@gao.gov.

Staff Acknowledgments

In addition to the contact named above, William Reinsberg (Assistant Director), Carole J. Cimitile (Analyst in Charge), Jacqueline Chapin, Martin J. De Alteriis, Emily Gamelin, Jaeyung Kim, Matthew L. McKnight, Robert Robinson, Dylan Stagner, Andrew J. Stephens, and John Villecco made key contributions to this report.
Why GAO Did This Study

PMIAA requires OMB to adopt program management standards and guidelines government-wide; OPM is to establish new—or revise existing—occupational standards for program and project management. PMIAA includes a provision for GAO, no later than 3 years after the enactment of the act, to issue a report examining the implementation and effectiveness of certain provisions of the act on federal program and project management. This report (1) describes steps taken by OMB, OPM, and agencies to implement PMIAA; (2) assesses OMB's efforts to address issues on GAO's High-Risk List using PMIAA; and (3) examines the extent to which OMB provided methods for agencies to measure and assess the results of PMIAA. GAO reviewed documents from and conducted interviews with OMB and OPM. GAO surveyed all 24 CFO Act agencies, and selected five agencies to illustrate implementation efforts. GAO also interviewed subject matter specialists from academia and the private sector regarding their views on how program and project management practices applied to PMIAA.

What GAO Found

The Office of Management and Budget (OMB) has begun to implement all requirements of the Program Management Improvement Accountability Act of 2016 (PMIAA), but further efforts are needed to fully implement the law. OMB released its 5-year strategic plan for PMIAA and developed program management standards. However, the standards are not detailed compared with accepted program and project management standards, and OMB's governance structure is insufficient for developing and maintaining these standards over time. In 2019, OMB conducted 10 reviews of agency program portfolios—organized groupings of programs whose coordinated implementation enables agencies to achieve their objectives. Each review addressed one or two portfolios per agency. Further, OMB's required portfolio reviews of high-risk areas were limited to only five of the 35 areas on GAO's High-Risk List.
OMB could establish measures to track agencies' progress. Although not required by PMIAA, this is a good practice for demonstrating improvement. As required by PMIAA, the Office of Personnel Management (OPM) developed competencies for program and project managers and updated the program management job series. Further, OPM is developing a career path for program and project managers by the end of 2019. OPM also plans to create a unique job identifier code in 2020 so that agencies can more completely identify their program management workforce. The Program Management Policy Council (PMPC), established by PMIAA and chaired by OMB's Deputy Director for Management, met for the first time in September 2018 and met twice in 2019 to discuss PMIAA implementation with Chief Financial Officers (CFO) Act agencies. All CFO Act agencies designated a Program Management Improvement Officer to participate in the PMPC. However, the PMPC has neither addressed GAO high-risk areas nor advised OMB on how to address high-risk areas, as required by PMIAA.

What GAO Recommends

GAO is making eight recommendations, including that OMB further develop the standards to include more detail, create a governance structure for program management standards, hold meetings on all High-Risk List areas, and establish measures to track agencies' progress in program management. OMB neither agreed nor disagreed with the recommendations and stated that it would consider them when making future updates to its program management policies and guidance.
Background

BSA/AML Framework

FinCEN oversees the administration of the Bank Secrecy Act and related AML regulations, and has authority to enforce BSA, including through civil money penalties. FinCEN issues regulations and interpretive guidance, provides outreach to regulated industries, conducts examinations, supports select examinations performed by federal and state agencies, and pursues civil enforcement actions when warranted. FinCEN’s other responsibilities include collecting, analyzing, and disseminating information received from covered institutions, and identifying and communicating financial crime trends and methods. See figure 1 for federal supervisory agencies involved in the BSA/AML framework. FinCEN primarily relies on supervisory agencies and other entities to conduct examinations of U.S. financial institutions to determine compliance with BSA/AML requirements (see table 1). FinCEN delegated BSA/AML examination authority to these supervisory agencies, including the banking regulators, SEC, CFTC, and IRS. IRS has been delegated authority to examine certain financial institutions (such as money services businesses) not examined by the federal functional regulators for BSA compliance. The SROs that SEC and CFTC oversee—such as FINRA and NFA, respectively—have BSA/AML compliance responsibilities for the activities of their members. Apart from their delegated examination authority under the BSA, the federal functional regulators and SROs have their own regulatory authority to examine institutions they supervise for compliance with BSA. FinCEN, the banking regulators, and SEC may assess civil money penalties for BSA violations and take enforcement actions for noncompliance. The SROs have established BSA-related rules or requirements for their members based on federal requirements and may take disciplinary actions against them for violations of these rules.
IRS issues letters of noncompliance to institutions it oversees and generally relies on FinCEN for formal civil enforcement action, but IRS-CI has the authority to investigate criminal violations. Other law enforcement agencies (for example, DOJ Criminal Division, FBI, and ICE-HSI) also can conduct criminal investigations of BSA violations. More generally, law enforcement agencies and prosecutors may review and start investigations into a variety of criminal matters based on BSA reporting filed in their areas of jurisdiction. According to FinCEN, BSA recordkeeping and reporting requirements establish a financial trail for law enforcement investigators to follow as they track criminals, their activities, and their assets. Finally, DOJ prosecutes financial institutions and individuals for violations of federal criminal money laundering statutes.

BSA/AML Requirements

U.S. financial institutions can assist government agencies in the detection and prevention of money laundering and terrorist financing by complying with BSA/AML requirements such as maintaining effective internal controls and reporting suspicious financial activities. BSA regulations include recordkeeping and reporting requirements, such as to keep records of cash purchases of negotiable instruments, file CTRs on cash transactions exceeding $10,000, and file SARs when institutions suspect money laundering, tax evasion, or other criminal activities. Law enforcement agencies and prosecutors (through FinCEN) may utilize the 314(a) program to locate accounts and transaction information from U.S. financial institutions when terrorism or money laundering activity is reasonably suspected based on credible evidence. Most financial institutions must develop, administer, and maintain effective AML programs.
At a minimum, those financial institutions must establish a system of internal controls to ensure ongoing compliance with the BSA and its implementing regulations; provide AML compliance training for appropriate personnel; provide for independent testing; and designate a person or persons responsible for coordinating and monitoring day-to-day compliance. In addition to these requirements, FinCEN issued a final rule in 2016 requiring banks, brokers or dealers in securities, mutual funds, futures commission merchants, and introducing brokers in commodities to establish risk-based procedures for conducting customer due diligence. More specifically, covered financial institutions are to establish and maintain written policies and procedures designed to (1) identify and verify the identity of customers; (2) identify and verify the identity of the beneficial owners of legal entity customers opening accounts; (3) understand the nature and purpose of customer relationships to develop customer risk profiles; and (4) conduct ongoing monitoring to identify and report suspicious transactions and, on a risk basis, maintain and update customer information. For example, covered financial institutions must collect from the customer the name, birthdate, address, and Social Security number or equivalent of any beneficial owners. The financial institutions covered by this rule—which do not include money services businesses, casinos, or insurance companies—had until May 11, 2018, to comply.

BSA Examination Manuals and Procedures

Supervisory agencies and SROs oversee financial institutions’ compliance with BSA/AML requirements primarily through compliance examinations, which, for banking regulators, can be components of regularly scheduled safety and soundness examinations. All supervisory agencies and SROs we interviewed that examine financial institutions for BSA/AML compliance have established BSA/AML examination manuals or procedures.
For example, to ensure consistency in the application of BSA requirements, in 2008 FinCEN issued a BSA examination manual for use in reviewing money services businesses, including for IRS and state regulators. According to FinCEN officials, FinCEN has been updating the entire manual and completed a draft of the update in the fourth quarter of fiscal year 2018, with the goal of finalizing the updated manual by the end of fiscal year 2019. Similarly, in 2005 the federal banking regulators collaborated with FinCEN on a BSA/AML examination manual issued by the Federal Financial Institutions Examination Council (FFIEC). The entire FFIEC manual has been revised several times since its release (most recently in 2014). In May 2018, FFIEC also issued new examination procedures to address the implementation of the 2016 customer due diligence and beneficial ownership rule, discussed earlier. These updated customer due diligence examination procedures replaced the existing chapter in the FFIEC BSA/AML examination manual and added a new section, “Beneficial Ownership Requirements for Legal Entity Customers—Overview and Examination Procedures.” In addition, the FFIEC has been working on an update of the entire FFIEC manual, which is expected to be complete by the end of calendar year 2019 or early 2020. SEC and FINRA, as well as CFTC’s respective SROs, have nonpublic procedures for conducting examinations of the institutions they oversee. SEC, FINRA, and NFA officials all stated that they have updated procedures to address the new customer due diligence regulations that were applicable beginning in May 2018. We discuss examination activities of the supervisory agencies in more detail later in this report.
FinCEN and Supervisory Agencies Consider Risk, Among Other Factors, in Examination and Enforcement Approaches

FinCEN and Supervisory Agencies Consider Risk and Size of Institutions in BSA/AML Examination Approaches

FinCEN and supervisory agencies consider risk when planning BSA/AML examinations and all utilized BSA data to some extent to scope and plan examinations (see table 2). As we reported in prior work, BSA/AML examinations are risk-based—examiners have the flexibility to apply the appropriate level of scrutiny to business lines that pose a higher level of risk to the institution. Covered financial institutions are expected to complete a BSA/AML risk assessment to identify specific products, services, and customers, which supervisory agencies can use to evaluate the compliance programs of financial institutions and scope their examinations. Most officials from supervisory agencies and SROs said they also consider asset size, among other factors, to determine examination frequency and scope. For example, the federal banking regulators implemented less frequent examination cycles for smaller, well-capitalized financial institutions. FinCEN is the administrator of BSA and delegated BSA/AML examination authority to the supervisory agencies. FinCEN officials told us they have been considering how regulators of financial institutions of different size and risk assess BSA/AML compliance and continue to work with federal regulators to identify better ways to supervise examinations. For example, in a February 2019 speech, the Director of FinCEN stated that one of FinCEN’s current regulatory reform initiatives was reviewing the risk-based approach to the examination process. Although supervisory agencies with delegated authority conducted the vast majority of BSA/AML compliance examinations, FinCEN has conducted a few of its own examinations in areas it considers a high priority.
FinCEN officials told us it mostly considers risk (not size) when conducting its own examinations because even small institutions could pose money laundering risk. FinCEN states that it uses an intelligence-driven approach to target examinations in high-risk areas. For example, FinCEN officials told us they have conducted BSA/AML compliance examinations of financial institutions on issues such as virtual currencies and data breaches in domestic branches of foreign banks. In an August 2018 speech, the Director of FinCEN noted that FinCEN, working closely with BSA examiners at IRS, had examined more than 30 percent of identified registered virtual currency exchangers and administrators since 2014—totaling about 30 examinations, according to FinCEN officials. FinCEN officials said they conducted a total of five BSA/AML examinations with IRS in fiscal years 2017 and 2018. In addition, FinCEN conducted a BSA/AML examination in fiscal year 2018 of a branch of a foreign bank that had been previously examined by its banking regulator to review the effectiveness of the bank’s BSA compliance department.

Banking Regulators

All of the banking regulators with which we spoke stated they considered risk and, to some extent, asset size to determine examination frequency and scope. The FFIEC BSA/AML examination manual establishes a risk-based approach for bank examinations, including incorporating a review of BSA/AML risk assessments of a financial institution in the scoping and planning of an examination. In considering asset size to determine the frequency of examinations, all of the banking regulators adopted rules to reduce the frequency of examinations for small, well-capitalized financial institutions—as seen in table 2. In addition, in their annual reports to FinCEN the banking regulators provide a description of the criteria used for determining the timing and scope of BSA/AML examinations, such as risk and asset size.
For instance, FDIC and the Federal Reserve noted in their annual reports to FinCEN that the timing and scope of their BSA/AML examinations are primarily determined by an institution’s BSA/AML risk profile and factors such as its condition, overall rating, and asset size. OCC, in its annual report, said that examination scope included consideration of the bank’s BSA/AML risk assessment, quality of validated independent testing (internal and external audit), previous examination reports, BSA reports, and other relevant factors, including data from the OCC’s Money Laundering Risk System. OCC officials said the system identifies potential indicators of BSA/AML risk by measuring the extent to which various types of products, services, customers, and geographies are offered or served by supervised banks. For banks that report into that system, OCC officials said they factor information from the system into developing an examination strategy that helps determine resource allocation and expertise needs. According to NCUA, each credit union must receive a BSA examination each examination cycle—although the frequency and scope of these examinations may vary based on the credit union’s size and other risk factors. For example, small credit unions with assets under $50 million may be subject to a defined-scope examination (which includes a BSA examination) where the risk areas have already been identified and the scope is pre-determined. NCUA also provides a BSA questionnaire that is publicly accessible to assist its examiners in implementing BSA examinations (for example, to help examiners assess the BSA risk of the credit union and scope the examination). Factors considered in the questionnaire include prior violations, correspondence from law enforcement related to BSA compliance, whether or not the credit union conducted a risk-assessment, and high-risk accounts. 
While the FFIEC BSA/AML examination manual and other federal banking documentation discuss considering BSA/AML risk when determining the scope and frequency of examinations, officials from all four banking associations with whom we spoke said, in practice, examiners do not always use a risk-based approach when assessing BSA compliance. Nearly all said examiners may take a zero-tolerance approach when conducting examinations. For example, representatives from two industry associations said that although failure to file a single SAR or unintentional errors should be treated differently than egregious, intentional noncompliance, or a pattern of negligence (in terms of level of noncompliance), that sometimes has not been the case. Federal Reserve officials noted that each examination is specific to the facts and circumstances of that examination and that systemic deficiencies in a bank’s BSA/AML compliance program are generally treated differently than nonsystemic deficiencies. As discussed earlier, FFIEC has been working on updating its entire FFIEC BSA/AML examination manual, including updates to more clearly state the agencies’ approach to risk-based supervision, according to OCC officials. Representatives from two of the four banking associations with which we spoke stated they were involved in providing input on recent updates to FFIEC’s examination manual, and all four had provided input to the effort to implement the customer due diligence and beneficial ownership rule. For example, OCC officials said that the risk-based approach is most clearly discussed in the opening pages of the current FFIEC manual and could be more directly incorporated throughout the manual to provide enhanced guidance to examiners. These officials stated that the agencies have been drafting proposed edits for drafting group consideration.
More generally, FFIEC undertook its Examination Modernization Project as a follow-up to reviews required under the Economic Growth and Regulatory Paperwork Reduction Act. One of the project’s efforts seeks feedback from selected supervised institutions and examiners on ways to improve the examination process. For example, the FFIEC examination modernization project reviewed, compared, and identified common principles and processes for risk-focusing examinations of community financial institutions. FFIEC members also committed to issue reinforcing and clarifying examiner guidance on these risk-focused examination principles. In addition, Treasury, FinCEN, and the banking regulators established a working group to identify ways to improve the efficiency and effectiveness of BSA/AML regulations and supervision. In October 2018, the working group issued a joint statement to address instances in which banks with less complex operations and lower-risk BSA/AML profiles may decide to enter into collaborative arrangements with other banks to share resources to manage their BSA/AML obligations in order to increase efficiency and reduce burden. In December 2018, the working group issued another joint statement that recognized that banks may use existing tools in new ways or adopt new technologies to more effectively and efficiently meet their BSA/AML obligations.

Securities Regulators

SEC shares responsibility for broker-dealer examinations with SROs, but has sole responsibility for examinations of mutual fund companies and maintains supervisory authority over SROs. SEC’s Office of Compliance Inspections and Examinations conducts risk-based examinations of regulated entities including mutual funds (under the Investment Adviser/Investment Company Examination Program) and broker-dealers (under the Broker-Dealer Exchange Examination Program).
According to SEC documentation, the scope of examinations is based on a risk assessment of various factors such as the type of business a firm engages in and its customer base. This includes consideration of whether the firm engages in high-risk activities. The Office of Compliance Inspections and Examinations assesses the risks from information sources such as tips, complaints and referrals, FinCEN BSA data, pre-examination due diligence, and previous examination history. During the period we reviewed, BSA/AML examinations of mutual funds accounted for less than 1 percent of all securities BSA/AML examinations and no mutual funds were cited for violations of BSA. SEC staff said investors primarily purchase shares of mutual funds through a distributor (such as a broker-dealer) and, in these cases, mutual funds do not know, and are not required to know, the identities of individual investors. In these cases, the broker-dealer distributor has more information about the individual investors and may be examined for BSA compliance as part of FINRA and SEC BSA examinations. FINRA conducts the majority of examinations of broker-dealer firms and imposes anti-money laundering rules on its members. FINRA officials told us that they use a risk-based approach for AML examinations, which considers the size, complexity, customer types, and risks posed by business activities in assessing potential BSA/AML risk. These risk factors affect the timing of their reviews (for example, if a broker-dealer is deemed to be higher-risk, it will be examined in the same year it was assessed). According to FINRA officials, they have different expectations for firms’ AML programs based on size (larger firms typically are expected to have more complex AML programs than smaller firms). FINRA publishes a template for small firms to help them fulfill their responsibilities to establish an AML compliance program.
The template provides text examples, instructions, relevant rules, websites, and other resources useful for plan development. However, representatives from a securities industry association told us that BSA/AML rulemaking and examinations sometimes do not take into account the varying levels of risk of different types of business models and activities among firms. Furthermore, these representatives stated that sometimes compliance expectations are communicated through enforcement actions rather than through rulemaking or guidance. As noted previously, one of FinCEN’s regulatory reform initiatives has been reviewing the risk-based approach to the examination process. According to a February 2019 speech by the Director of FinCEN, FinCEN’s initiatives also included reviewing agencies’ approach to supervision and enforcement and identifying better ways to communicate priorities. Representatives from this securities industry association also identified certain training and tools on BSA/AML compliance and implementation that FINRA and SEC staff provide as helpful to the securities industry in identifying priorities and compliance deficiencies. For example, SEC’s Office of Compliance Inspections and Examinations and FINRA publish annual examination priorities, which identified both customer due diligence and suspicious activity monitoring as key areas for 2019. According to SEC staff, SEC and FINRA examination priorities have identified suspicious activity monitoring as a key area for the past several years and have identified customer due diligence as a priority since the implementation of the customer due diligence rule in 2018.
FINRA published examination findings for the first time in 2017 and again in 2018, including selected findings related to BSA/AML compliance, which representatives from the industry association said have been very useful because they describe specific BSA/AML compliance deficiencies identified by FINRA across the industry and can assist firms in improving their compliance programs. Additionally, FINRA and SEC included an AML topic in their 2017 National Compliance Outreach Program for broker-dealers. SEC also occasionally publishes risk alerts on its website and participates in industry outreach efforts.

Futures Regulators

The SROs that conduct the majority of examinations of futures firms use a risk-based approach. CFTC has authority to examine futures commission merchants and futures and commodities introducing brokers, but does not routinely conduct examinations of the firms it supervises. Instead, CFTC oversees the examinations conducted by its SROs. CFTC delegated examination authority to two SROs—NFA and the CME Group. NFA conducts the majority of BSA examinations and is the only SRO that examines independent introducing brokers. During the period we reviewed, NFA was assigned the majority of futures firms and conducted a majority of AML examinations. NFA and CME Group stated in CFTC's annual reports to FinCEN that they utilize a risk-based approach for AML examinations. CME Group reported that it determined both the frequency and the scope of examinations through an overall assessment of the financial and operational risks posed by a futures commission merchant. NFA is required to examine futures commission merchants annually, but reported that the timing and frequency of introducing broker examinations were based predominantly on the risks a firm presents. NFA's risk models measure the riskiness of each firm, and firms are prioritized for examination based on the output from the risk model.
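NFA does not publish the details of its risk models, but the general idea of risk-based examination prioritization can be sketched in a few lines of Python. The factor names, weights, and firm scores below are invented for illustration only; they are not NFA's actual model.

```python
# Hypothetical sketch of risk-based examination prioritization.
# Factor names and weights are illustrative assumptions, not NFA's model.

def risk_score(firm):
    """Combine weighted risk factors into a single score (higher = riskier)."""
    weights = {
        "business_complexity": 0.30,
        "customer_risk": 0.30,
        "prior_findings": 0.25,
        "size": 0.15,
    }
    return sum(weights[factor] * firm[factor] for factor in weights)

def prioritize(firms):
    """Order firms for examination, highest assessed risk first."""
    return sorted(firms, key=risk_score, reverse=True)

firms = [
    {"name": "A", "business_complexity": 2, "customer_risk": 1,
     "prior_findings": 0, "size": 3},
    {"name": "B", "business_complexity": 4, "customer_risk": 5,
     "prior_findings": 2, "size": 2},
]
queue = prioritize(firms)
print([f["name"] for f in queue])  # prints ['B', 'A']
```

In this sketch, the higher-scoring firm lands at the front of the examination queue, which mirrors the report's description that timing and frequency of examinations follow the output of the risk model (for example, a higher-risk firm being examined in the same year it was assessed).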
In an interpretative notice, NFA recognized that financial institutions vary in size and complexity, and that firms should consider size, among other factors (such as the nature of their business and its money laundering risks), in designing a program to implement requirements such as customer verification and suspicious activity reporting. Representatives from a futures industry association told us that there is a one-size-fits-all approach to BSA/AML compliance in that the rules are broadly applied to varying types of financial institutions. They noted that BSA/AML guidance tends to focus on banks and treat other types of financial institutions (money services businesses, casinos, and others) as one group, despite their diversity. In relation to the futures industry, the representatives stated that this makes it difficult for futures commission merchants to implement requirements because the rules or guidance do not necessarily take into consideration their unique business structure. CFTC staff told us that BSA requirements could be applied differently to different types of firms and were supportive of tailoring or reducing requirements where the obligations were duplicative or overly burdensome and BSA-related risks were low. For example, CFTC staff recommended that FinCEN relieve (1) certain introducing brokers known as voice brokers and (2) futures commission merchants that are initial clearing firms from customer identification program requirements because they have limited interaction with the customer and do not have access to customer information that would allow them to perform customer due diligence. CFTC staff told us they have been working with FinCEN on implementing these recommendations.
In July 2019, FinCEN issued additional guidance on the application of the customer identification program rule and the beneficial ownership rule to certain introducing brokers, which stated that an introducing broker that has neither customers nor accounts as defined under the customer identification program rule has no obligations under that rule or the beneficial ownership rule.

Internal Revenue Service

IRS examination staff use a risk-based approach to examine for BSA/AML compliance. In 2008, FinCEN and IRS issued a manual for use by IRS (and state regulator) examiners who perform risk-based examinations of money services businesses, which are a category of nonbank financial institutions. The BSA/AML manual for money services businesses states that examiners should determine the appropriate depth and scope of the examination procedures based on their assessment of a business's risks, which they can assess by analyzing information including the business's BSA/AML risk assessment and AML compliance program, and then conduct selective transaction testing to determine if the AML program is effective. The amount of transaction testing will vary based on the assessed level of risk—the amount of testing would be reduced if the examiner determined the risks were minimal. IRS officials said that IRS examiners do not perform scheduled examinations of all money services businesses every year; rather, they review a percentage of businesses each year based on risk-related factors such as a history of noncompliance, high-risk geographic areas, and financial institutions identified by referrals. Thus, there may be some money services businesses that are not examined for years and some that are examined much more frequently. As discussed earlier, FinCEN has been updating the BSA/AML manual for money services businesses.
According to the manual, IRS examiners should consider size, among other things, as a factor in their examination approach. IRS officials with whom we spoke said that smaller money transmitters may not have the resources or understanding of the monitoring methods necessary to implement BSA/AML compliance program elements such as suspicious activity monitoring and reporting. IRS procedures state that it is the responsibility of BSA examiners to ensure the financial institution is informed of the reporting, registration, recordkeeping, and compliance program requirements of the BSA. IRS officials further explained that they share methods of detecting suspicious activity with small money transmitters to help them meet their requirements.

Enforcement Approaches of Supervisory Agencies Include Informal, Formal, and Joint Actions

FinCEN enforcement actions can be based on sources that include referrals from examining authorities, information from financial institutions, interviews, and leads from law enforcement. Supervisory agencies, including the federal banking regulators, SEC, CFTC, and their respective SROs, are to promptly notify FinCEN of any significant potential BSA violations. IRS also makes referrals to FinCEN for violations it identifies in its BSA examinations, such as willful violations of AML program requirements, recordkeeping and reporting regulations, and structuring. Additionally, financial institutions can self-report violations, DOJ or other law enforcement agencies may provide leads, and FinCEN personnel can refer potential violations to FinCEN's Enforcement Division to be investigated. According to FinCEN officials, after receiving a referral FinCEN's Enforcement Division opens a case in the Financial Intelligence Repository System, and Enforcement Division staff and management evaluate the circumstances of the alleged violation and provide a written recommendation for action.
FinCEN generally resolves its referrals in one of three ways: (1) closing the case without contacting the subject of the referral, (2) issuing a letter of warning or caution to the subject institution or individual, or (3) assessing a civil monetary penalty. According to FinCEN officials, management in the Enforcement Division approve which action will be taken to close the referral, and if the recommendation is to pursue some type of civil enforcement action, the Director of FinCEN and the Office of Chief Counsel would be involved in that determination. FinCEN officials said that factors the Enforcement Division considers when determining which action to recommend or take include: any impact or harm to FinCEN's mission from the identified violations; the pervasiveness, gravity, and duration of the violations; the institution's history of violations; continuation of the activity; possible obstruction or concealment; any remedial actions taken by the institution; and whether the institution received financial gain or benefit from the violation. According to FinCEN officials, the Enforcement Division maintains an administrative record for all cases that result in an enforcement action, and when the action is complete, the Financial Intelligence Repository System is updated to reflect that the referral is closed. From January 1, 2015, to September 25, 2018, FinCEN received 419 referrals directly from supervisory agencies (see table 3). Two reports have noted issues associated with referrals to FinCEN, including delays in reporting by an agency and inconsistent status updates from FinCEN to agencies. A 2018 report by the Treasury Inspector General for Tax Administration found FinCEN had long delays in processing IRS referrals and assessed penalties on a small proportion of referrals. For example, 49 of 80 cases referred by IRS during fiscal years 2014–2016 remained open as of December 31, 2017, and FinCEN assessed penalties in six of the 80 referrals.
In response, FinCEN management said the primary reason for not processing referrals was the "age" of the violations when the referral was made to FinCEN, which, according to FinCEN officials, impedes a thorough investigation of the violations due to an imminent expiration of the applicable statute of limitations. The report recommended that IRS consider having its FinCEN referral process reviewed by process experts to make it more efficient, because delays in submitting cases to FinCEN could lead to FinCEN taking longer to process referrals or not considering cases for further civil penalty. In response to the recommendation, IRS stated that it completed a process improvement review of its FinCEN referral process and had since updated its internal guidelines (in February 2019) to reflect the improved procedures. Treasury's Office of Inspector General reported in 2016, among other findings, that several federal and state regulators told it that FinCEN did not routinely inform them of the status of their referred cases. The Office of Inspector General recommended that FinCEN implement a process to periodically notify federal and state regulators of the status of and actions taken on referred cases. In its response, FinCEN agreed with the recommendation and stated that it follows its standard operating procedures for case processing. FinCEN's response stated that its case processing procedures provide that in all FinCEN enforcement actions taken in coordination with other government partners (including other regulators), FinCEN's Enforcement Division will provide regulators with a copy of FinCEN's consent order that details the violations, factual findings, and proposed settlement terms. FinCEN also noted that its Enforcement Division holds standing and ad hoc meetings with each of its federal regulatory partners to discuss, among other matters, the status of top priority referrals.
Treasury’s Office of Inspector General closed the recommendation based on FinCEN’s response and its review of FinCEN’s standard operating procedures—which it said included procedures to provide regulators with a copy of FinCEN’s approved consent order and proposed settlement terms in the case of formal enforcement actions. FinCEN officials also told us that FinCEN has been working to update and finalize its policies and procedures to further address the recommendation from Treasury’s Office of Inspector General, but did not have a time frame for completion. When FinCEN assesses a penalty for BSA violations, it may do so independently or concurrently with supervisory agencies. In a concurrent action, FinCEN will assess a penalty with the other regulator and has sometimes deemed the penalty (or a portion of its penalty) satisfied by a payment to the regulator. FinCEN took 26 enforcement actions over the period we reviewed (from fiscal year 2015 through the second quarter of fiscal year 2018), five of which were concurrent with supervisory agencies. Casinos, depository institutions, and money services businesses each had eight enforcement actions and a precious metals firm and a securities/futures firm had one each. In December 2018, FinCEN assessed a $14.5 million civil monetary penalty against UBS Financial Services, $5 million of which was paid to Treasury and the remainder satisfied by payment of penalties for similar or related conduct imposed by SEC and FINRA. Banking Regulators Federal banking regulators identify and cite violations of BSA/AML requirements as part of the supervision process, including the examination process. The regulators employ progressive enforcement regimes to address supervisory concerns that arise during the examination cycle or through other supervisory activities. 
If the institution does not respond to the concern in a timely manner, the regulators may take informal or formal enforcement action, depending on the severity of the circumstances. Informal enforcement actions include obtaining an institution's commitment to implement corrective measures under a memorandum of understanding. Formal enforcement actions include issuance of a cease-and-desist order or assessment of a monetary penalty, among others. Some factors that the banking regulators reported considering when determining whether to escalate an informal enforcement action to a formal enforcement action include the severity of the weakness and the bank's commitment to correct the identified deficiencies. See appendix II for recent data on enforcement actions taken by the banking regulators.

Securities Regulators

All SEC enforcement actions and all SRO disciplinary actions are public. SEC has authority to enforce compliance with BSA for mutual funds and broker-dealers. If SEC examiners find significant deficiencies with a firm's BSA program, the examiners may refer the finding to SEC's Division of Enforcement or an SRO for enforcement. In addition, SEC's BSA Review Group in the Division of Enforcement's Office of Market Intelligence may refer matters identified through the review of BSA reports to staff in SEC's Division of Enforcement and in the Office of Compliance Inspections and Examinations for further consideration and potential follow-up. SEC's Division of Enforcement will assess whether to proceed with an investigation, determine whether a violation has occurred, and if so, whether an enforcement action should be recommended against the firm or any individuals. In certain cases, SEC's Division of Enforcement may undertake an investigation where there has been a widespread or systemic failure to file SARs or systemic omission of material information from SARs.
When making this assessment, SEC staff said SEC considers a number of factors, including the egregiousness of the conduct, the length of time over which the violations occurred, the number of SARs that were not filed or that omitted material information, the disciplinary history of the firm, and adherence to any internal policies and procedures. FINRA has enforcement authority that includes the ability to fine, suspend, or bar brokers and firms from the industry and has two separate procedures (settlement and formal complaint) through which it applies enforcement actions. Through a settlement, a firm or broker in violation can offer to settle with FINRA through a Letter of Acceptance, Waiver, and Consent. A formal complaint is filed with and heard before FINRA's Office of Hearing Officers. See appendix II for recent data on enforcement actions taken by SEC and FINRA.

Futures Regulators

Although CFTC delegated examination authority to NFA and the CME Group, it retained authority to pursue enforcement actions against futures firms. While CFTC does not typically conduct BSA/AML examinations, it does have a BSA review team that reviews SARs to identify potential violations of futures laws, and CFTC has taken enforcement actions based on leads developed from SARs reviewed. SROs generally conduct BSA examinations of futures firms, and at the conclusion of an examination, the SROs will issue a report to the futures firm to notify the firm of any deficiencies in its AML program. If the deficiencies are not significant, NFA officials stated NFA will cite the deficiency in the examination report and, after requiring corrective action, close the examination with no disciplinary action. If examination findings are significant, then NFA may issue a warning letter or recommend that its Business Conduct Committee issue a formal complaint charging the firm with violating NFA's AML requirements (which is an enforcement action).
NFA officials told us it resolves most enforcement actions related to violations of NFA's BSA/AML rules through settlement agreements that assess a fine. NFA may take other types of actions for violations of its rules, such as suspension of membership or expulsion. See appendix II for recent data on informal and formal actions SROs took.

Internal Revenue Service

Although FinCEN has delegated authority to IRS to conduct civil BSA/AML examinations for a variety of nonbank financial institutions and individuals, IRS does not have authority to enforce most civil BSA violations identified. If IRS Small Business/Self-Employed Division examiners find BSA violations when examining an institution, the division can send a letter of noncompliance—a letter 1112—with a summary of examination findings and recommendations to the institution, which also includes an acceptance statement for the institution to sign. Additionally, if IRS Small Business/Self-Employed Division examiners identify significant civil violations during a BSA/AML examination, such as willful violations of BSA reporting and recordkeeping requirements, they may refer civil violations to FinCEN or refer certain violations of potential criminal activity to IRS-CI. See appendix II for recent data, including the number of institutions issued a letter 1112.

FinCEN, Supervisory Agencies, and Law Enforcement Established Collaborative Mechanisms, but the Futures Industry Has Been Less Represented

In recent years, Treasury and FinCEN have led efforts to identify BSA goals and priorities, such as issuing a national strategy and risk assessments for combating illicit financing crimes. They also established key mechanisms for BSA/AML collaboration, such as interagency working groups, information-sharing agreements, and liaison positions that encompass multiple federal, state, and local agencies and private-sector participants.
However, these key mechanisms have been less inclusive of the futures industry than other financial sectors.

Treasury and FinCEN Led Efforts to Identify BSA Goals and Priorities

Treasury and FinCEN led collaborative efforts to identify BSA goals and priorities, including the following:

National Strategy. In December 2018, Treasury issued the National Strategy for Combating Terrorist and Other Illicit Financing as required by 2017 legislation. The national strategy discussed various agencies' BSA-related goals and objectives, including those of the supervisory agencies and law enforcement groups with which we spoke for our review. It also laid out key priorities, such as protecting the United States from terrorist attacks, simplifying the BSA regulatory framework to work more effectively and efficiently, and ensuring the stability of domestic and global markets by reducing fraud, money laundering, and other economic crimes. The strategy also discussed interagency coordination and information-sharing mechanisms (including public-private information sharing). For example, the national strategy states that FBI provided a classified briefing twice a year to selected personnel from the 20 largest financial institutions in the United States to share information on terrorist financing trends. In addition, the national strategy provided data on prosecutions related to money laundering. For example, in fiscal years 2015–2017, DOJ annually charged on average 2,257 defendants with money laundering.

Risk assessments. Congress also directed Treasury and relevant agencies to evaluate the effectiveness of existing efforts that address the highest level of risks associated with illicit finance. In December 2018, Treasury issued three risk assessments that identified money laundering, terrorist financing, and proliferation financing risks and described Treasury's and relevant agencies' efforts to address these risks. The three risk assessments underpin the 2018 National Strategy.
Treasury involved multiple agencies in the development of the risk assessments, including supervisory agencies, SROs, and several law enforcement agencies. The terrorist financing and money laundering risk assessments built on previous Treasury-led risk assessments issued in 2015, but the 2018 proliferation financing risk assessment was the first ever issued.

Treasury's Strategic Plan (2018–2022) and other guidance. Prior to the publication of the National Strategy, Treasury issued a strategic plan in February 2018 that identified strategies, goals, measures, and indicators of success to meet its strategic goal of preventing terrorists and other illicit actors from using the U.S. and international financial systems. FinCEN also issued advisories or guidance that identify BSA and law enforcement priorities. For example, in February 2014 FinCEN issued guidance that clarified how financial institutions should align their BSA reports to meet federal and state law enforcement priorities if the institutions provide services to marijuana-related businesses. The related federal and state law enforcement priorities included preventing the proceeds of marijuana sales from going to criminal enterprises, gangs, and cartels. Two industry associations (with which we spoke before the issuance of the December 2018 national strategy and risk assessments) noted the importance of establishing BSA priorities to better inform industry. For example, officials from one industry association said that Treasury's risk assessments identified priorities and suggested that it produce these types of reports more frequently (for example, annually). This may be addressed, in part, by Congress's requirement that the national strategy—including a discussion of goals, objectives, and priorities—be updated in 2020 and 2022.
In addition, Treasury has been conducting a broad review of BSA/AML laws, regulations, and supervision, focusing on how effectively current requirements and related activities achieve the underlying goals of the BSA.

Key Mechanisms for Collaboration Involve FinCEN, Supervisory Agencies, and Law Enforcement

Interagency working groups, interagency memorandums of understanding, and liaison positions, as shown in table 4, are key BSA/AML collaborative mechanisms that were identified through our interviews with officials from FinCEN, supervisory agencies, and law enforcement agencies and in agency documents.

Bank Secrecy Act Advisory Group (BSAAG). Congress directed Treasury to create BSAAG in 1992. The group, led by FinCEN, is the primary and longest-established BSA/AML collaboration mechanism and is used to share information and receive feedback on BSA administration. The advisory group meets twice a year and includes working groups on BSA/AML-related issues that may meet more frequently. BSAAG recently has been focusing on improving the effectiveness and efficiency of the regulatory and supervisory regime. SEC and Federal Reserve officials told us that BSAAG is a helpful and effective collaborative mechanism to discuss BSA/AML issues. However, as we discuss later, representatives from CFTC, the primary futures SRO, and a futures industry association expressed concerns that the futures industry was not as well represented on BSAAG as other industries. FinCEN invites the public to nominate financial institutions and trade groups for 3-year membership terms on BSAAG. In making selections, the Director of FinCEN retains discretion on all membership decisions and seeks to complement current BSAAG members in terms of affiliations, industry, and geographic representation.

Memorandums of understanding (MOU). FinCEN established interagency agreements—information-sharing and data-access MOUs—relating to BSA data.
For example, FinCEN entered into an information-sharing MOU with the federal banking regulators in 2004 and has since established similar MOUs with other supervisory agencies, including many state supervisory agencies. FinCEN consolidates the data from the four federal banking regulators and told us that it shares the consolidated reports with banking regulators. In addition, FinCEN officials told us they use data from the information-sharing agreements to help in certain initiatives and training. For example, FinCEN officials told us that a recently funded initiative focused on nonbank financial institutions will use information from the MOUs to proactively identify risks and better inform related compliance efforts. All the supervisory agencies told us they informally update and monitor their information-sharing MOUs through frequent meetings and regular communication with FinCEN. For example, FinCEN officials told us they have been working to update how they collect information on violations related to the customer due diligence and beneficial ownership rule. In addition, FinCEN contracts for an annual MOU satisfaction survey that FinCEN officials said helps them monitor the effectiveness of the MOUs. In the survey, respondents were asked about their satisfaction with their MOU and scored their satisfaction at around 80 out of 100 in 2017. FinCEN also has more than 400 data-access MOUs with federal, state, and local law enforcement agencies as well as with federal and state regulatory agencies. FinCEN has data-access MOUs with, or provides direct data access to, all the federal supervisory agencies and FINRA, a securities SRO—but not NFA, a futures SRO. As discussed previously, supervisory agencies use these data primarily to help scope and conduct their BSA/AML compliance examinations. In a later section, we discuss access issues in relation to supervision of the futures industry.
Law enforcement agencies use BSA data to assist in ongoing investigations and when initiating new investigations.

Liaison positions. FinCEN has used on-site liaison positions for more than a decade to help avoid overlap and duplication of efforts. According to FinCEN officials, as of April 2019, FinCEN had 18 law enforcement liaisons from 10 law enforcement agencies. Some law enforcement officials with whom we spoke said the liaison position allowed feedback and information exchange between law enforcement and FinCEN. Supervisory agencies generally told us that the liaison program was for law enforcement agencies and that they did not participate. FinCEN officials said that while FinCEN does not have on-site liaisons from supervisory agencies that are comparable in scope to the law enforcement liaisons, they work closely with the supervisory agencies. For example, FinCEN currently has a part-time detailee from FDIC who collaborates on-site at FinCEN with FinCEN analysts. FinCEN officials said they hosted a temporary on-site detailee from NCUA in 2017. NCUA officials told us that they also expressed an interest to FinCEN in implementing routine detailing of staff. SEC staff told us that in the past they had a FinCEN detailee on-site working with SEC's Division of Enforcement, which allowed SEC to better understand FinCEN's methodology and approaches, and assess its own approaches to BSA enforcement. SEC staff expressed interest in hosting another FinCEN detailee, and the agency has been considering a FinCEN request to send an SEC liaison to FinCEN. There are also other BSA/AML collaborative mechanisms among regulatory or law enforcement agencies, such as the FFIEC BSA/AML working group, SAR review teams, and geographic targeting orders (see table 4). We also obtained perspectives on collaboration from FinCEN and relevant key law enforcement and regulatory agencies on three selected BSA criminal cases, which are discussed in appendix III.
Futures Industry Not Consistently Included in BSAAG and Its Key SRO Does Not Have a Data-Access MOU with FinCEN

The futures industry has been less represented in key mechanisms for BSA/AML collaboration (those related to BSAAG and data-access agreements) than other industries. Representatives from CFTC, the primary futures industry SRO, and a futures industry association expressed concerns that the futures industry was not as well represented on BSAAG as other industries. CFTC, as the delegated supervisory agency, has always been a member of BSAAG. However, the primary futures industry SRO—which has developed rules to implement AML requirements for its members and conducts a majority of AML examinations of futures firms—and futures industry associations have had less consistent participation. Officials from the primary futures SRO expressed concern that they were not a regular member of BSAAG. They noted that they were a BSAAG member in the mid-2000s but then not selected as a member of BSAAG for almost 5 years (from 2014) until they were invited to be a member again in March 2018, at which point the futures industry association's BSAAG membership was not renewed when its term expired. Representatives from all key federal supervisory agencies have been regular members of BSAAG. In particular, the securities industry, which also uses SROs to monitor BSA compliance, has had its primary SRO as a member of BSAAG since 2008. Representatives from the primary securities SRO said that their participation in BSAAG allowed them to coordinate BSA/AML efforts. Representatives from the primary futures SRO said that their role regarding oversight of the futures industry was similar to that of the primary securities SRO. These representatives stated that they adopted AML rules; were the only SRO with jurisdiction over all futures entities subject to AML requirements; and conducted a majority of AML examinations.
Accordingly, representatives said that they were in the unique position of seeing firsthand how AML requirements are implemented in the futures industry and identifying issues, as well as potential gaps in implementation. CFTC staff said that all significant representative groups for the futures industry should participate in BSAAG—in particular, the primary futures SRO, because it supervises all types of registered firms in the futures industry, and the leading industry association for the futures, options, and centrally cleared derivatives markets. In addition, representatives from industry associations in other industries with which we spoke have been regular members of BSAAG, including banking associations and the primary securities industry association. The primary securities industry association has been a member since 2008, concurrent with the primary securities SRO (also a member since 2008). Representatives from this association said that BSAAG is a mechanism that FinCEN uses to solicit feedback from the industry. Officials from the futures industry association that had previously participated in BSAAG told us that their current lack of participation may prevent FinCEN from obtaining an in-depth understanding of futures industry issues and may prevent the futures industry from obtaining information on BSA/AML goals and priorities and other key communications. CFTC staff said that in addition to the primary futures SRO, BSAAG also should include a primary industry association. FinCEN officials told us that there is a limit on the number of BSAAG representatives allowed and that they have had a futures representative that was not always an active participant. In addition, FinCEN officials said that when selecting BSAAG members they need to consider the top money laundering risk areas as well as the appropriate number of members to have productive discussions.
They added that because membership rotates, additional futures representatives could be added based on needs and topic areas. Furthermore, FinCEN officials told us that although the most recent BSAAG (October 2018) did not include a futures industry association, it did include the primary futures industry SRO and six large diversified financial firms that are listed as members of the key futures industry association. However, these firms represent a small percentage of the association's membership and do not include smaller firms, clearing organizations, exchanges, or global and regional executing brokers. As noted in Treasury's 2018 national strategy, BSAAG is the main AML information conduit and policy coordination mechanism among regulators, law enforcement, and industry and has been focusing on improving the effectiveness and efficiency of the regulatory and supervisory regime. Without regular participation by the primary futures SRO, which has developed AML rules and conducts the majority of BSA examinations for the futures industry, FinCEN may be missing opportunities to better understand compliance in the futures industry and the SRO may not be fully up to date on BSA/AML compliance issues and related initiatives that may affect the AML rules it develops. Furthermore, without representation on BSAAG by the key futures industry association, the diverse array of futures industry participants may not be fully represented, informed, or updated on key BSA/AML information. Standards for Internal Control in the Federal Government state that management should externally communicate the necessary quality information to achieve the entity's objectives. In addition, the statutory purpose of BSAAG includes informing private-sector representatives of how BSA reports have been used and receiving advice on how reporting requirements should be modified. 
Additional futures industry representation on BSAAG could enhance both regulator and industry awareness of BSA/AML compliance issues and potential money laundering risks. In addition, NFA, the SRO conducting the majority of BSA examinations for the futures industry, does not have direct access to BSA data—unlike all key supervisory agencies and FINRA. In our 2009 report, we recommended that FinCEN expand data-access MOUs to SROs conducting BSA examinations that did not already have direct access to BSA data. In 2014, FinCEN completed a data-access MOU with FINRA. But it did not pursue an MOU with NFA because, at that time, CFTC did not ask FinCEN to arrange one. However, as of April 2019, CFTC staff told us that access to BSA data would enhance the tools that NFA has to perform its functions, including its ability to scope and perform BSA/AML examinations and to use BSA data more extensively and more frequently. Currently, when conducting its examinations, NFA must obtain SAR information from CFTC, as well as review SARs provided by a firm during an on-site examination. FinCEN officials told us that NFA has not requested direct access to BSA data. However, NFA representatives told us they welcomed a discussion with CFTC and FinCEN on the benefits and drawbacks of having direct access to BSA data. FinCEN officials said they would need to better understand any negative impacts of NFA not having direct access, and NFA would need to meet the required criteria to obtain direct access. Standards for Internal Control in the Federal Government state that management should externally communicate the necessary quality information to achieve the entity's objectives. Supervisory agencies with direct data access all have utilized BSA data to some extent to scope and plan examinations. Direct access to BSA data would enhance NFA's ability to scope BSA examinations and generally conduct its oversight responsibilities for BSA in the futures industry. 
Metrics and Feedback to Industry on the Usefulness of BSA Reporting Were Not Consistently or Widely Provided FinCEN and two law enforcement agencies with which we spoke generated metrics on the usefulness of BSA reporting—such as the number of BSA reports that led to new investigations. But FinCEN, whose role it is to collect and disseminate BSA data, has not consistently communicated these metrics—instead communicating some available metrics only on an ad hoc basis through methods such as published speeches or congressional testimonies. FinCEN and nearly all the law enforcement agencies with which we spoke provided some feedback to financial institutions on how to make BSA reports more useful through formal mechanisms (such as conferences and training sessions) and informal relationships. However, institution-specific feedback, which all industry groups said their members preferred, has not been widely provided. Available Metrics on Usefulness of BSA Reporting Not Consistently Communicated Two of the six law enforcement agencies (IRS-CI and FBI) we interviewed produced metrics on the usefulness of BSA reporting (for example, the percentage of investigations utilizing BSA data). However, FinCEN (which has statutory responsibilities for the central collection, analysis, and dissemination of BSA data) did not consistently communicate this information, but rather communicated on an ad hoc basis through published speeches or congressional testimony. IRS-CI annually publishes a report with data on investigations, including those generated by BSA reports. For example, in fiscal year 2018, IRS-CI reported that 515 BSA investigations were initiated (see table 5). FinCEN's website generally did not refer to IRS-CI metrics, but in a November 2018 congressional testimony, the Director of FinCEN included information on the percentage of IRS-CI investigations that began with a BSA source—24 percent in fiscal year 2017. 
In addition, IRS-CI tracks the work of SAR review teams and has created some metrics on the usefulness of BSA reporting, including the number of investigations initiated, indictments, convictions, sentencings, and total dollars seized based on the work of the SAR review teams (see table 6). While this information is not routinely reported publicly, IRS officials said they have shared information about results from SAR review teams during presentations to the public, law enforcement, and financial industries. FBI analyzes BSA filings to support existing cases and initiate new investigations, and FBI and FinCEN have reported related metrics to the public, but not routinely. FBI created a BSA Alert System that searches subjects' names, dates of birth, Social Security numbers, telephone numbers, email addresses, and other identifying information across BSA filings, and automatically emails the results to agents. In a November 2018 congressional testimony, the FBI section chief of its Criminal Investigative Division stated that these searches produce an average of 2,000 alerts per month and provided statistics on the results of the agency's use of BSA data. From January 2017 to June 2018, BSA reporting was directly linked to the main subject of approximately 25 percent of pending FBI investigations (up from 8.9 percent in 2012). The November 2018 FBI testimony also described FBI's use of SAR data analysis to identify new cases. For example, FBI analysts run a series of search terms and criteria related to money laundering, terrorist financing, human trafficking, fraud, corruption, transnational organized crime, and other schemes against SAR filings. The persons identified through the searches are automatically searched against FBI case files and watchlist data, and the results are incorporated into reports to appropriate field offices. FinCEN also communicated some of the FBI metrics in an August 2018 speech by the FinCEN director. 
For example, the director said more than 20 percent of FBI investigations utilized BSA data and, for some types of crime, like organized crime, nearly 60 percent of FBI investigations used BSA data. The other four law enforcement agencies with which we spoke did not generate metrics on the usefulness of BSA reporting due to confidentiality or data reliability concerns, among other reasons, but some tracked other BSA-related efforts. DHS officials said that while they do not have any metrics on the usefulness of BSA reports, the agency provided data on the usefulness of ICE-HSI's Cornerstone outreach program—in which ICE-HSI provided training to financial institutions on issues such as trends in how criminals earn, move, and store illicit proceeds. ICE-HSI reported that in fiscal year 2017, based on the Cornerstone outreach program, special agents initiated more than 72 financial investigations, made 55 criminal arrests, and seized almost $2 million in illicit proceeds. Secret Service officials said that they have been trying to develop an internal tracking system for their use of BSA reports, but were not tracking any metrics as of April 2019. They told us that they use BSA data for investigative purposes only and do not discuss or report it, because they consider it confidential information—thus making it difficult for them to gather metrics on the use of BSA reports. An official from DOJ's Criminal Division said that the division has not established any performance measures or collected any statistics that measure the effectiveness of BSA record-keeping and reporting requirements (for example, because the success of investigations depends on multiple factors, not just BSA reporting, and because of other challenges described later in this report). However, the official said that the division recognizes the usefulness of BSA data in criminal investigations because the data help with prosecutions of crimes. 
Officials from DOJ Executive Office for United States Attorneys said that they track the number of cases with statutory provisions relating to BSA in which the U.S. Attorney’s Offices prosecuted or enforced BSA violations. However, the officials said their case management system does not track if BSA filings were used to initiate or assist the case. Supervisory agencies we interviewed generally said FinCEN and law enforcement are better positioned to compile metrics on the usefulness of BSA reporting because FinCEN and law enforcement agencies are the primary users of BSA reports. However, two of the seven supervisory agencies in our review that also have law enforcement functions—SEC and CFTC—have their own BSA review teams, which analyze SARs to identify potential violations of federal laws, including BSA violations, and refer matters for further examination or investigation as appropriate. For example, on average, from fiscal years 2016 to 2018, SEC’s BSA review team reviewed about 27,000 SARs each year that related to current or potential investigative matters, or entities regulated by SEC. CFTC staff told us they review an estimated 7,500–8,000 SARs annually. On average, in about 100 instances a year, CFTC’s BSA review team refers SARs to investigative teams in support of new or existing investigations. As of December 2018, CFTC staff said they had taken 33 enforcement actions based on leads developed from SARs, with two of the actions related to BSA/AML violations. FinCEN collected some metrics on the usefulness of BSA data through annual surveys and other initiatives; however, the survey results are not public and other metrics are not regularly published. FinCEN contracts an annual survey that includes questions to BSA data users (such as federal and state law enforcement and regulators) about the usefulness of BSA data to, among other things, provide new information or supplement known information or identify new leads or investigations. 
BSA data users are asked to score the value and impact of BSA data, and they scored it at about 80 out of 100 for both 2016 and 2017. FinCEN contracts another survey that solicits feedback on the 314(a) program. The 2017 survey found that the respondents that utilized the 314(a) program gave it high scores for its usefulness—close to 90 out of 100 for both 2016 and 2017. The results from both surveys are not publicly available. In addition, FinCEN periodically publishes a 314(a) Fact Sheet that contains some data on the usefulness of the 314(a) program—such as the number of 314(a) requests and the percentage of requests that contributed to arrests or indictments. Based on information FinCEN collected from law enforcement, approximately 95 percent of 314(a) requests contributed to arrests or indictments. In addition, FinCEN reported the number of cases submitted and related subjects of interest identified in 314(a) requests for each 2-week period from January 5, 2016, to January 29, 2019. For example, for the 2-week period starting on January 29, 2019, 16 requests resulted in 162 subjects of interest. FinCEN contracted a study on the value of BSA reporting—which began in January 2019 and is to be completed by the end of 2019—with the goals of identifying common attributes of BSA value among stakeholders; assessing how to use available data to establish metrics for evaluating and calculating the value of BSA; identifying gaps in data and other information needed to measure the value of BSA reporting; and proposing actions to improve FinCEN's ability to identify, track, and measure the value of BSA reporting. However, the performance work statement for FinCEN's BSA value study, which outlines the objectives for the study, does not include actions related to communicating such metrics. As discussed above, FinCEN has not consistently communicated available metrics. 
FinCEN officials told us their current approach was to communicate metrics through mechanisms such as speeches and congressional testimonies. FinCEN officials told us that FinCEN has an ongoing initiative to create a new communication strategy incorporating the results of the BSA value study but had no time frame for its completion. Our prior work found that agencies can implement a number of practices that can enhance or facilitate the use of performance information—including communicating performance information frequently and routinely. In addition, Standards for Internal Control in the Federal Government state that management should externally communicate the necessary quality information to achieve the entity's objectives. Officials from some supervisory agencies and most industry associations also told us they would like FinCEN to provide them with more aggregated data on the usefulness of SARs filed by financial institutions. If FinCEN consistently communicates currently available metrics on the usefulness of BSA reporting to industry, and any metrics later identified by FinCEN's BSA value study, financial institutions may be able to more fully understand the importance and outcomes of their efforts. FinCEN and Law Enforcement Have Provided Some Feedback to Financial Institutions on Improving BSA Reporting but Only Periodically and on a Small Scale FinCEN and nearly all of the law enforcement agencies with which we spoke provided some feedback to financial institutions on how to make BSA reports more useful through formal mechanisms (such as conferences and training sessions) and informal relationships. However, institution-specific feedback, which all industry groups said their members preferred, has been provided only periodically and on a small scale. Types of Feedback Mechanisms FinCEN's feedback mechanisms include a new information exchange program, advisories, and BSAAG. For example: FinCEN Exchange. 
On December 4, 2017, FinCEN publicly launched the FinCEN Exchange, a public-private program that brings together law enforcement, FinCEN, and different types of financial institutions to share information to help identify vulnerabilities and disrupt money laundering, terrorist financing, and other financial crimes. As of December 2018, FinCEN had convened more than a dozen briefings with law enforcement agencies across the country, involving more than 40 financial institutions. According to Treasury's 2018 national strategy, the information provided by financial institutions through SARs after the briefings helped FinCEN map and target weapons proliferators, sophisticated global money laundering operations, human trafficking and smuggling rings, and corruption and trade-based money laundering networks, among others. FinCEN officials told us that these exchanges provide a forum in which law enforcement can request specific information and provide information on typologies to financial institutions, allowing financial institutions to improve their BSA monitoring and reporting. FinCEN advisories. FinCEN issues public and nonpublic advisories to financial institutions to help them better detect and report suspicious activity related to a particular risk and related typology. For example, in October 2018, FinCEN posted an advisory on its website to alert U.S. financial institutions of the increasing risk that proceeds of political corruption from Nicaragua might enter the U.S. financial system. It also posted an advisory on the Iranian regime's illicit activities and attempts to exploit the financial system. These advisories included specific instructions on how to file SARs related to this type of suspicious activity. Some of the industry associations with which we spoke had positive feedback on FinCEN advisories and said they would like to see more red flags and specific guidance to help improve their BSA monitoring programs. BSAAG. 
Among its functions, the advisory group serves as a forum for industry, supervisory agencies, and law enforcement to communicate about how law enforcement uses SARs and other BSA data. For example, sometimes law enforcement agencies present specific cases using BSA data or information on money laundering and terrorist financing threats. Many of the industry associations and supervisory agencies with which we spoke cited BSAAG as a useful feedback mechanism. As discussed previously, the advisory group is only open to those invited and not a public forum, so not all financial institutions receive or can provide feedback at these meetings. Law enforcement awards. FinCEN officials said that annual law enforcement awards ceremonies are one of the mechanisms they use to provide financial institutions with feedback on the usefulness or effectiveness of BSA/AML information. The award ceremonies highlight successful cases utilizing BSA data. FinCEN officials told us that FinCEN also sends thank you letters to the selected financial institutions that provided the underlying financial data used in the awarded cases, publishes overviews of the cases for which law enforcement agencies received awards, and documents nominated cases. FinCEN issues press releases about the winning cases as another way to share information with financial institutions. Outreach events. FinCEN representatives regularly have participated in outreach events about BSA/AML issues, such as by sharing information at BSA/AML conferences. According to FinCEN officials, the conferences allow FinCEN representatives to both formally (speeches, presentations) and informally (personal interactions) solicit and offer feedback on how financial institutions can improve BSA reporting. 
Additionally, Treasury reported that its Office of Terrorism and Financial Intelligence regularly engages public and private-sector practitioners and leaders, both domestic and international, on money laundering and terrorist financing issues. For example, the office convenes multilateral and bilateral public-private sector dialogues with key jurisdictions and regions to discuss mutual anti-money laundering and counter-terrorist financing issues of concern. Representatives from nearly all of the federal law enforcement agencies we interviewed said that they conducted outreach events and developed relationships with financial institutions to solicit and provide feedback on their BSA reports, including feedback on ways to improve BSA reporting and enhance BSA compliance by financial institutions. Conferences. Law enforcement agencies present at conferences on BSA/AML topics and host conferences for financial institutions. For example, for more than a decade ICE-HSI, FBI, Secret Service, IRS-CI, and the Drug Enforcement Administration jointly have hosted an annual conference that includes speakers from law enforcement, supervisory agencies, FinCEN, and financial institutions. According to an ICE-HSI official, the intent of the conference is to educate the private financial sector. FBI officials also said they conduct outreach, such as hosting and participating in conferences, and said that this type of outreach reached more than 6,000 people in the last year (as of August 2018). Briefings and financial institution-specific training. Some law enforcement agencies have their own outreach programs on BSA topics for financial institutions. For example, ICE-HSI has the Cornerstone Outreach Program, which began working with the private sector in 2003 to identify money laundering vulnerabilities in the financial system. 
The program is to encourage partnerships with the private sector by sharing distinguishing traits or forms of criminal behavior (either crime-centered or person-centered) and methods, and providing training to financial institutions. ICE-HSI officials said they conducted about 300 Cornerstone Outreach presentations in fiscal year 2018. FBI officials also told us they host a couple of meetings annually for financial institutions and sometimes conduct institution-specific training upon request, such as on SAR usefulness. FBI officials told us that for the institution-specific SAR trainings, they change the information on the SARs for training purposes and highlight how institutions can improve SAR filings. They also provide some summary-level statistics and work with the financial institution’s SAR teams to train them on trends. They estimated they conduct from about eight to 10 such sessions annually (as of April 2019). Informal relationships with financial institutions. Officials from nearly all the law enforcement agencies with whom we spoke said they have informal relationships with financial institutions to solicit and provide feedback on their BSA reports. Most supervisory agencies we interviewed said that they did not provide feedback to financial institutions on the usefulness of their BSA reporting due to factors such as law enforcement being better positioned to provide feedback and SAR confidentiality restrictions. However, CFTC staff noted that their BSA review team communicates the general usefulness of SARs filed by their institutions at conferences and through telephone contacts with the filer after the relevant case is filed. SEC staff told us they do not reach out directly to provide financial institutions specific feedback on the usefulness of SARs, but provide training on what makes a good or bad SAR through routine interaction with the primary securities industry association and presentations at BSAAG. 
As discussed earlier, some supervisory agencies regard FinCEN and law enforcement as the primary end users of BSA reports, and thus in a better position to provide feedback to financial institutions on BSA reporting. Additionally, many supervisory agencies told us that it would be helpful if FinCEN and law enforcement could provide more frequent or systematic feedback on financial institutions' SAR reporting. Limitations of Feedback Mechanisms Some supervisory agencies, industry associations, and law enforcement agencies with which we spoke identified limitations with some of FinCEN's feedback mechanisms, including FinCEN Exchange and law enforcement awards. Representatives from all the industry associations we spoke with indicated that financial institutions would like to see more institution-specific feedback on their SARs to improve their monitoring systems and reporting. FinCEN Exchange. Some industry associations appreciated FinCEN's outreach but noted that the new FinCEN Exchange program operated on a small scale and that industry associations had not been invited to participate or provide feedback. An official from one industry association said that the association could help identify banks, such as community banks, that could be a good fit for the program. Supervisory agencies also generally said they were not involved in the FinCEN Exchange program. Officials from OCC said that they would like to be involved because they are the primary regulator for many of the financial institutions in the program and thought their participation would add value. Some law enforcement agencies had concerns about the FinCEN Exchange program, such as private-sector representatives not being properly vetted or the risk of talking about ongoing investigations. For example, officials from ICE-HSI and FBI told us their institution-specific training included only vetted or trusted financial institutions. 
FinCEN officials said that they collaborated with regulators on the FinCEN Exchange and solicited feedback on the program from certain industry associations. In addition, FinCEN posts frequently asked questions about the FinCEN Exchange program on its website and encourages feedback from financial institutions on how they can support FinCEN priorities such as information sharing. FinCEN officials said that the FinCEN Exchange is an invitation-based program and that FinCEN vets information received from financial institutions and consults with law enforcement, as appropriate, to convene a briefing. Furthermore, FinCEN’s frequently asked questions about the program note that financial institutions that voluntarily participate in a FinCEN Exchange briefing must adhere to the terms noted in FinCEN’s invitation, including any requirement of confidentiality given the sensitivity of information provided. Awards. Representatives from CFTC, FBI, and three industry associations with whom we spoke made suggestions for expanding FinCEN’s law enforcement awards and related thank you letter initiatives. For example, CFTC suggested that FinCEN expand the awards program to include civil cases as well as criminal cases. FinCEN officials also told us in April 2019 that they were considering awards for civil cases. Industry associations generally said their member financial institutions appreciated receiving thank you letters, but some noted that there were limitations with these letters. For example, a representative from one industry association said that only a small percentage of financial institutions receive the awards, and representatives from another industry association said that the letters should provide more specific feedback. Two other industry associations said that the confidential nature of SARs makes it difficult to share the success of the financial institution that submitted the reporting. 
Many law enforcement agencies with which we spoke said that the law enforcement awards were a good idea, and FBI officials recommended creating awards for the financial institutions as well. FinCEN officials stated that due to SAR confidentiality rules, FinCEN cannot publicize awards to financial institutions. Institution-specific feedback. Representatives from all the industry associations with whom we spoke told us, or have publicly stated, that financial institutions would like to see more institution-specific feedback on their SARs to improve their monitoring systems and reporting. SAR reporting is labor-intensive for financial institutions because it requires researching and drafting narratives for a SAR filing and justifying cases where a SAR is not filed, according to many industry association representatives. However, many representatives said that financial institutions get little institution-specific feedback on their SAR reporting. We found that while law enforcement conducts some small-group briefings that industry associations said were useful, these briefings cover a small number of financial institutions in relation to the size of the U.S. financial industry. ICE-HSI stated that it conducted 302 institution-specific trainings and briefings in fiscal year 2018, and FBI, as discussed previously, estimated it has conducted about eight to 10 institution-specific SAR reporting trainings annually, in relation to the more than 10,000 depository institutions, more than 26,000 money services businesses registered with FinCEN, and almost 4,000 registered active broker-dealers (as of January 2019). The American Bankers Association, Independent Community Bankers of America, and The Clearing House all have issued papers recommending more institution-specific feedback on financial institution SAR reporting. Some industry associations and other stakeholders pointed to international efforts that provided feedback through public-private partnerships. 
For example, the United Kingdom's Joint Money Laundering Intelligence Taskforce (joint task force), formally established in May 2016, includes regulators, law enforcement, and more than 40 major United Kingdom and international banks conducting a large proportion of financial activity in the United Kingdom (89 percent of the volume of personal accounts in the United Kingdom). The joint task force has a system in place to routinely convene these partners, including vetted banking representatives, to set AML priorities and share intelligence. According to the intergovernmental Financial Action Task Force's (FATF) mutual evaluation report of the United Kingdom, financial institutions involved in the joint task force are required to file SARs for suspicious activity identified through the program, and these SARs are considered to be of high value. FATF's report also noted that the joint task force is considered to be a best practice in public-private information sharing. According to Treasury's 2018 national strategy, FinCEN collaborated with the United Kingdom's joint task force in implementing the FinCEN Exchange program. In prior work, we reported that FinCEN recognized that financial institutions do not generally see the beneficial impacts of their BSA/AML efforts. FinCEN, law enforcement, and some industry associations with which we spoke identified challenges in providing institution-specific feedback to financial institutions on the usefulness of their BSA reporting. In addition to the large number of financial institutions in the United States, officials from FinCEN and law enforcement agencies told us that law enforcement cases may be sensitive and time-consuming, and the unauthorized disclosure of SARs or sharing of certain information with financial institutions might compromise ongoing investigations. 
Two industry associations also identified the confidential nature of SARs as a challenge for FinCEN and law enforcement to provide institution-specific feedback to financial institutions. As we have discussed, FinCEN has been undertaking a study to better understand the value and effectiveness of BSA. In addition, FinCEN and some law enforcement agencies have made efforts to provide some institution-specific feedback through various methods on BSA reporting, but the feedback has been periodic, sometimes only at the request of financial institutions, and provided on a small scale. FATF standards on information sharing state that anti-money laundering authorities should provide feedback to financial institutions to assist them in complying with anti-money laundering requirements—these mechanisms can include feedback loops, whereby more consistent and more fully explained feedback is provided to the private sector on suspicious transaction reports. FinCEN’s statutory duties also include information sharing with financial institutions in the interest of detection, prevention, and prosecution of terrorism, organized crime, money laundering, and other financial crimes. As discussed, other countries have put in place mechanisms (such as the United Kingdom’s joint task force) to provide regular feedback on AML reporting (including SAR-like instruments) to financial institutions representing a large portion of the country’s financial activity. Additional and more regular institution-specific feedback, designed to cover different types of financial institutions and those with significant financial activity, may enhance the U.S. financial industry’s ability to effectively target its efforts to identify suspicious activity and provide quality BSA reporting. 
Conclusions FinCEN, numerous supervisory agencies (covering various financial sectors), and law enforcement agencies are responsible for enforcing the BSA/AML regulatory framework with the end goal of detecting and preventing money laundering and other financial crimes. While these agencies have processes and mechanisms in place to collaborate on key BSA/AML issues, such collaboration and information sharing could be enhanced by additional and more regular involvement of representatives of the futures industry—a complex and unique financial markets sector. Unlike the other key federal supervisory agencies and the securities SRO involved in BSA compliance, the primary futures SRO was not consistently included in BSAAG. Thus, FinCEN may be missing opportunities to better understand compliance in the futures industry and the SRO may not be updated on related BSAAG initiatives. The key futures industry association also has had less consistent participation in BSAAG, and although it has been a member of BSAAG in the past, it was not a member concurrently with the futures SRO—thereby potentially missing opportunities to engage FinCEN and other agencies on BSA issues in futures markets. In addition, by providing NFA with direct access to BSA data (similar to the access the key securities SRO already has), FinCEN could facilitate NFA oversight and enable it to scope examinations proactively to address BSA risks. Some federal agencies have taken steps to provide metrics and institution-specific feedback on the usefulness of BSA reporting to industry; however, metrics were not provided regularly and feedback was provided only on a small scale. Additionally, challenges to expanding and enhancing metrics and feedback remain (such as those related to measuring the usefulness of BSA reporting, providing feedback to thousands of individual institutions, and the sensitive nature of ongoing law enforcement investigations). 
FinCEN has an ongoing effort to identify additional measures of the value and usefulness of BSA reporting, which is expected to be completed at the end of 2019. But opportunities exist to enhance feedback and reporting both before that date and more generally. For example, in the interim FinCEN could routinely communicate currently available metrics on usefulness to help financial institutions more fully understand the importance and value of their efforts to report BSA-related information. Furthermore, with today’s rapidly changing financial markets and potential changes to money laundering risks, it is important that FinCEN and federal agencies take steps to provide institution-specific feedback—while keeping in mind any confidentiality concerns—to cover different types of financial institutions and those with significant financial activity. Increasing the feedback on BSA reporting could help make financial institutions’ BSA reporting more targeted and effective and enhance collaboration among key stakeholders in U.S. efforts to combat illicit financial crime. Recommendations for Executive Action We are making the following four recommendations to FinCEN: The Director of FinCEN, after consulting with CFTC, should consider prioritizing the inclusion of the primary SRO conducting BSA examinations in the futures industry in the Bank Secrecy Act Advisory Group (BSAAG) on a more consistent basis and also making the primary futures industry association a concurrent member. (Recommendation 1) The Director of FinCEN, after consulting with CFTC, should take steps to explore providing direct BSA data access to NFA. (Recommendation 2) The Director of FinCEN should review options for FinCEN to more consistently and publicly provide summary data on the usefulness of BSA reporting. This review could either be concurrent with FinCEN’s BSA value study or through another method. 
(Recommendation 3) The Director of FinCEN should review options for establishing a mechanism through which law enforcement agencies may provide regular and institution-specific feedback on BSA reporting. Options should take into consideration providing such feedback to cover different types of financial institutions and those with significant financial activity. This review could either be part of FinCEN’s BSA value study or through another method. (Recommendation 4) Agency Comments and Our Evaluation We provided a draft of this report to Treasury/FinCEN, CFTC, NCUA, DHS, DOJ, the Federal Reserve, FDIC, IRS, OCC, and SEC for their review and comment. FinCEN, CFTC, and NCUA provided written comments, which are reproduced in appendixes IV, V, and VI. FinCEN, DHS, the Federal Reserve, FDIC, OCC, and SEC provided technical comments on the draft report, which we incorporated as appropriate. In emails, DOJ and IRS audit liaisons stated that the agencies did not have any formal or technical comments. In its written response, FinCEN concurred with one recommendation, disagreed with two, and agreed with the spirit of one recommendation but noted some concerns. Specifically, FinCEN concurred with the recommendation that FinCEN more consistently and publicly provide summary data on the usefulness of BSA reporting (Recommendation 3). FinCEN disagreed with the draft report’s recommendation that FinCEN, after consulting with CFTC, should ensure that the primary SRO conducting BSA examinations in the futures industry is a regular member of BSAAG and also should consider making the primary futures industry association a concurrent member (Recommendation 1). FinCEN’s written response stated that while the primary futures SRO presently is a BSAAG member, only federal agencies are considered permanent members, and FinCEN will not make future membership commitments to any specific SRO or any other nonfederal organization. 
As such, we modified the recommendation to give FinCEN more flexibility to address the issues that prompted our recommendation. We continue to believe that prioritizing futures representation in BSAAG, consistent with securities industry representation, would help FinCEN better understand BSA compliance in the futures industry and keep the futures industry updated on related BSAAG initiatives. As noted in the report, the primary securities SRO has been a member of BSAAG since 2008 and a key securities industry association has been a concurrent member. FinCEN disagreed with the recommendation that FinCEN, after consulting with CFTC, explore providing direct BSA data access to NFA (Recommendation 2) because FinCEN said it has not received a request from CFTC or NFA to engage on this matter. FinCEN also said it would review any future request for direct access in accordance with established procedures, stating it must ensure that proper controls are in place and that direct access to the BSA database is limited to those who truly need it. As discussed in our report, CFTC stated that NFA’s direct access to BSA data would enhance NFA’s ability to scope and perform BSA/AML examinations, and to use BSA data more extensively and more frequently to perform its functions, including conducting the majority of BSA examinations for the futures industry. NFA representatives also told us they welcomed a discussion with CFTC and FinCEN on the benefits and drawbacks of having direct access to BSA data. We continue to believe the recommendation is valid as it provides FinCEN flexibility to explore providing NFA data access and would not preclude FinCEN from ensuring that NFA had proper controls in place. In its written response, FinCEN neither agreed nor disagreed with the recommendation that FinCEN review options for establishing a mechanism through which law enforcement agencies may provide regular and institution-specific feedback on BSA reporting (Recommendation 4). 
FinCEN said it agreed with the spirit of this recommendation—that law enforcement feedback on the value and usefulness of BSA information is important—and stated that FinCEN regularly takes necessary steps to review options for establishing additional mechanisms through which law enforcement agencies can provide regular feedback. FinCEN also stated that it provides a consolidated view of law enforcement feedback as well as feedback on the value and usefulness of institution-specific BSA information. However, as discussed in the report, we found that the current institution-specific feedback mechanisms were not occurring on a regular basis or were on a relatively small scale. In its response, FinCEN also noted that unless mandated by Congress, law enforcement feedback will be voluntary and that FinCEN cannot compel law enforcement compliance with feedback initiatives. We continue to believe the recommendation is valid as it allows FinCEN flexibility in reviewing options for establishing a mechanism through which law enforcement may choose to provide regular feedback to reach a larger number of financial institutions from diverse industries, without requiring FinCEN to compel law enforcement agencies to participate. In its written responses, CFTC agreed with all our recommendations. In particular, CFTC agreed that the primary futures SRO should be a regular member of BSAAG (Recommendation 1). CFTC added that FinCEN should consider making another futures SRO a concurrent member. In a later discussion, a CFTC Assistant General Counsel said that, in general, CFTC would like to see more futures participation in BSAAG, including SROs and industry associations. CFTC also agreed with our recommendation that the Director of FinCEN, after consulting with CFTC, explore providing NFA direct access to BSA data (Recommendation 2). 
In its written response, NCUA also agreed with all of our recommendations, which it stated would enhance coordination and collaboration and increase visibility about the value of BSA reporting requirements. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of the Treasury, the Attorney General, the Acting Secretary of Homeland Security, the Commissioner of IRS, the Chairman of CFTC, the Chairman of FDIC, the Chairman of the Federal Reserve, the Chairman of NCUA, the Comptroller of the Currency, the Chairman of SEC, and other interested parties. This report will also be available at no charge on our website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or ClementsM@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VII. Appendix I: Objectives, Scope, and Methodology The objectives of this report were to: (1) describe how the Financial Crimes Enforcement Network (FinCEN) and supervisory agencies supervise, examine for, and enforce Bank Secrecy Act and related anti-money laundering requirements (collectively, BSA/AML) compliance; (2) discuss how FinCEN, supervisory agencies, and law enforcement collaborated on implementing and enforcing BSA/AML requirements; and (3) examine the extent to which FinCEN, supervisory agencies, and law enforcement established metrics and provided feedback to financial institutions on the usefulness of their BSA reporting. 
For this report, we identified the key agencies and entities, including FinCEN, a bureau in the Department of the Treasury (Treasury), which is responsible for the administration of BSA, and the supervisory agencies that oversee BSA compliance. The supervisory agencies include the federal banking regulators—Federal Deposit Insurance Corporation (FDIC), Board of Governors of the Federal Reserve System (Federal Reserve), National Credit Union Administration (NCUA), Office of the Comptroller of the Currency (OCC)—as well as the Internal Revenue Service (IRS), Commodity Futures Trading Commission (CFTC), and Securities and Exchange Commission (SEC). Self-regulatory organizations (SRO) for the securities and futures industries—including the Financial Industry Regulatory Authority (FINRA) and National Futures Association (NFA)—also have BSA/AML responsibilities and conduct BSA examinations of their members. The Department of Justice may pursue investigations and prosecutions of financial institutions and individuals for both civil and criminal violations of BSA/AML regulations. To address the first objective, we reviewed relevant laws—including the Bank Secrecy Act, its related statutes, and key provisions of the USA PATRIOT Act—regulations, and agency documentation. To better understand how supervisory agencies conduct their examinations, we reviewed the following BSA/AML examination manuals: the 2014 BSA/AML Examination Manual, developed by the Federal Financial Institutions Examination Council (FFIEC); the Bank Secrecy Act/Anti-Money Laundering Examination Manual for Money Services Business (developed by FinCEN and IRS); and SEC’s nonpublic manual and futures SROs’ nonpublic examination procedures. We reviewed and analyzed data from FinCEN summary reports on the examination and enforcement activities of supervisory agencies for fiscal years 2015 through 2018 (second quarter), which were the most recent data available at the time of our analysis. 
We also reviewed FinCEN’s enforcement actions for this time period as provided on its website, to identify the number and types of financial institutions, and the number of concurrent actions FinCEN brought jointly with a regulator. We also reviewed and analyzed FinCEN referral data from January 1, 2015, to September 25, 2018. Referrals are potential BSA violations or deficiencies referred by supervisory agencies, the Department of Justice, or state regulators. We assessed the reliability of the FinCEN summary report data and referral data by reviewing documentation related to these datasets, interviewing knowledgeable officials, and conducting manual data testing for missing data, outliers, and obvious errors. We determined the data to be sufficiently reliable for reporting on supervisory agency, SRO, and FinCEN BSA/AML compliance and enforcement activities. For this and our other objectives, we interviewed officials at Treasury’s Office of Terrorism and Financial Intelligence and FinCEN, the other supervisory agencies, and two SROs—FINRA and NFA. To address the second objective, we judgmentally selected six law enforcement agencies based on their (1) focus on financial crimes, (2) role in investigating or prosecuting recent large criminal cases we selected involving financial institutions with BSA violations, (3) participation in FinCEN’s liaison program, and (4) identification by FinCEN as a key user of BSA data. We selected the following law enforcement agencies: the Criminal Division (Money Laundering and Asset Recovery Section), the U.S. Attorney’s Offices (through the Executive Office for United States Attorneys), and the Federal Bureau of Investigation in the Department of Justice; IRS Criminal Investigation in the Department of Treasury; and U.S. Immigration and Customs Enforcement-Homeland Security Investigations and the U.S. Secret Service in the Department of Homeland Security. The views of selected law enforcement agencies are not generalizable. 
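The report describes the manual data testing only in general terms. As an illustration, the kinds of checks mentioned above (missing data, outliers, and obvious errors such as out-of-window dates) can be sketched in a few lines of Python. The field names, example records, and outlier rule below are hypothetical and are not drawn from FinCEN's actual referral data.

```python
import pandas as pd

def screen_referrals(df: pd.DataFrame) -> dict:
    """Run simple data-quality checks analogous to manual testing for
    missing data, outliers, and obvious errors (illustrative only)."""
    flags = {}
    # Missing data: null counts per column.
    flags["missing"] = df.isna().sum().to_dict()
    # Obvious errors: referral dates outside the review window
    # (January 1, 2015, through September 25, 2018).
    dates = pd.to_datetime(df["referral_date"], errors="coerce")
    out_of_window = (dates < "2015-01-01") | (dates > "2018-09-25")
    flags["out_of_window"] = int(out_of_window.sum())
    # Outliers: violation counts above the upper Tukey fence (Q3 + 1.5 * IQR).
    q1, q3 = df["violation_count"].quantile([0.25, 0.75])
    upper_fence = q3 + 1.5 * (q3 - q1)
    flags["outliers"] = int((df["violation_count"] > upper_fence).sum())
    return flags

# Hypothetical example records.
referrals = pd.DataFrame({
    "referral_date": ["2016-05-01", "2017-03-02", "2014-12-31", None, "2016-08-09"],
    "violation_count": [2, 3, 2, 3, 100],
})
print(screen_referrals(referrals))
```

The sketch flags one missing date, one date before the review window, and one unusually large violation count; in practice, flagged records would be followed up with the agency that supplied the data, as the interviews with knowledgeable officials described above suggest.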
To identify key collaborative mechanisms used to implement BSA/AML responsibilities, we reviewed agency documentation (such as strategic plans, national strategies, and risk assessments) and prior GAO reports that contained discussions of collaborative mechanisms, and we interviewed agency officials from FinCEN, supervisory agencies, SROs, and selected law enforcement agencies. We obtained agency documentation and data related to the identified collaboration mechanisms and interviewed officials from FinCEN, supervisory agencies, and selected law enforcement agencies for their perspectives on these efforts. We compared agencies’ collaboration efforts to criteria in federal internal control standards on management communication. To gain further insight into the collaboration process, we also reviewed documentation on three criminal cases involving BSA/AML violations by financial institutions to illustrate how law enforcement investigates and prosecutes BSA violations and coordinates with FinCEN and other supervisory agencies. We selected the cases on the basis of recent occurrence (calendar year 2017 or 2018) and on their having involved criminal violations of BSA by financial institutions, required coordination on penalties among multiple supervisory agencies and law enforcement, and resulted in a large monetary penalty. While not generalizable, the cases helped provide additional context for our review. To obtain additional perspectives on the effectiveness of BSA/AML collaboration processes, we interviewed representatives of seven selected industry associations based on their published work and relevant experience and for coverage of key financial industries (banking, securities, futures, and the money services business). While not generalizable, these interviews helped provide context for how industry views the effectiveness of BSA/AML collaboration efforts. 
For the third objective, we reviewed agency documentation and data on metrics related to BSA reporting and feedback mechanisms that FinCEN, the supervisory agencies, or the six selected law enforcement agencies had established. Key documents we reviewed included Treasury’s most recent strategic plan, national strategy for combating illicit financing, and related risk assessments. For all agencies we interviewed, we requested any available metrics. We reviewed agency websites, annual reports, and recently published speeches and testimonies on BSA/AML-related topics to identify any metrics. We also requested and reviewed contract documentation from FinCEN, such as the performance work statement for a study that FinCEN commissioned on how to establish metrics for and identify the value of BSA data. We compared metrics on the usefulness of BSA and how they were communicated against key criteria for enhancing or facilitating the use of performance metrics that GAO previously identified and federal internal control standards on management communication. For feedback mechanisms, we obtained documentation on any steps FinCEN, supervisory agencies, or the selected law enforcement agencies took to provide feedback on BSA reporting to financial institutions and we interviewed agency representatives on these efforts. The documents we reviewed included those identified above related to metrics, as well as agency advisories, guidance, and rulemaking. We compared the feedback efforts against Treasury’s information-sharing statutory duties and strategic plan, and international anti-money laundering standards and guidance. To gain industry perspectives on the usefulness of BSA reporting and on feedback received from FinCEN, supervisory agencies, and law enforcement, we conducted seven interviews with the selected industry associations. While not generalizable, the interviews helped provide context for financial industry perspectives on BSA/AML reporting and feedback. 
We conducted this performance audit from February 2018 to August 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Bank Secrecy Act/Anti-Money Laundering Violation, Examination, and Enforcement Action Data As part of its oversight of supervisory agencies, the Financial Crimes Enforcement Network (FinCEN) routinely collects data from supervisory agencies as established in information-sharing memorandums of understanding (MOU). The MOUs establish that supervisory agencies should provide FinCEN with examination data such as the number of Bank Secrecy Act/anti-money laundering (BSA/AML) violations, informal actions, and formal enforcement actions (on a quarterly basis). In addition, the Internal Revenue Service (IRS) told us it has MOUs with some state regulators to obtain state examinations, which IRS officials said help to identify issues among and plan examinations of money services businesses and determine if the businesses had addressed prior deficiencies. The following sections provide more information on each supervisory agency’s (1) examinations, (2) violations, and (3) enforcement actions. Also see appendix I for more information on the types of data we collected for each agency and any data limitations. Banking Regulators From fiscal year 2015 to the second quarter of fiscal year 2018, the most common BSA violations cited by the federal banking regulators were violations of requirements to report suspicious activities, 314(a) information-sharing requirements, rules for filing of reports, BSA training, and a system of internal controls. 
For example, regulators could cite a violation if a financial institution failed to file a required suspicious activity report (SAR), failed to file a SAR in a timely manner, or failed to maintain confidentiality of SARs. Violations of internal controls include a financial institution failing to establish a system of internal controls to ensure ongoing compliance, including staff adherence to the financial institution’s BSA/AML policies. From fiscal year 2015 to 2018 (second quarter), the federal banking regulators cited thousands of violations (11,752) and brought 116 formal enforcement actions (see table 7). The number of informal enforcement actions compared to the number of formal enforcement actions varied by banking regulator. For example, in fiscal year 2017 the National Credit Union Administration (NCUA) brought 1,077 informal enforcement actions and no formal enforcement actions. In the same period, the Office of the Comptroller of the Currency (OCC) brought two informal enforcement actions and six formal enforcement actions. SEC and its SROs took 71 formal enforcement actions against broker-dealers from fiscal year 2015 through the second quarter of fiscal year 2018 (see table 8). FINRA took the majority of enforcement actions against broker-dealers. From fiscal year 2015 to the second quarter of fiscal year 2018, SEC and the SROs for broker-dealers most frequently cited violations of FINRA AML program rules. They included violations of policies and procedures relating to reporting suspicious activity, internal controls, and annual independent testing, as well as BSA violations of AML program requirements for brokers or dealers and customer identification programs for brokers or dealers. From fiscal year 2015 to the second quarter of fiscal year 2018, the National Futures Association (NFA) cited all BSA/AML violations, and took all informal and formal enforcement actions for BSA/AML deficiencies for the futures industry (see table 9). 
The violations NFA most commonly cited were against introducing brokers and fell under its AML program rules that related to policies and procedures for internal controls, training, and annual independent testing, and BSA requirements for AML programs and customer identification programs. The CME Group did not cite any futures commission merchants for violations during this period. In response to violations, NFA brought almost 200 informal enforcement actions and 10 formal enforcement actions over the period of our review. For example, in 2017 NFA took 64 informal and four formal enforcement actions. IRS referred more than 100 cases to FinCEN from fiscal year 2015 through the second quarter of 2018 and issued Letter 1112s, which contain a summary of examination findings and recommendations for corrective action, to thousands of institutions (see table 10). From fiscal year 2015 to the second quarter of fiscal year 2018, the most common violations cited by IRS fell under general AML program requirements for money services businesses, which require such businesses to develop, implement, and maintain an effective AML program (one designed to prevent a business from being used to facilitate money laundering and the financing of terrorist activities). AML program requirements have several subcomponent violations. Among the most commonly cited subcomponent violations were those related to overall program deficiencies; policies, procedures, and internal controls; training of appropriate personnel to identify suspicious transactions; and providing for independent testing of the AML program. 
Appendix III: Selected Criminal Cases Involving Bank Secrecy Act/Anti-Money Laundering Violations by Financial Institutions The Financial Crimes Enforcement Network (FinCEN) and supervisory agencies may be asked to provide information as part of law enforcement investigations and can take parallel, but separate, enforcement actions against the same institutions to address Bank Secrecy Act/anti-money laundering (BSA/AML) concerns. FinCEN and supervisory agencies may refer potential violations of a criminal nature to an appropriate federal law enforcement agency or to the Department of Justice (DOJ)—and within DOJ, the U.S. Attorney’s Office—and may be asked to assist law enforcement investigations. For example, supervisory agencies may be asked to interpret financial institution documents or serve as expert witnesses and records custodians in a trial. FinCEN, supervisory agencies, and law enforcement agencies have conducted parallel civil and criminal investigations. Federal law enforcement and supervisory agency officials have told us that such investigations should remain separate and independent. We selected three recent cases in which FinCEN, supervisory agencies, and law enforcement collaborated to conduct parallel investigations and took concurrent but separate civil and criminal BSA enforcement actions. Officials with whom we spoke from agencies that were involved in these cases said the agencies coordinated with each other (for example, by establishing liaison positions, scheduling regular conference calls, and coordinating on global settlements). Rabobank National Association (Rabobank). On February 7, 2018, DOJ and the Office of the Comptroller of the Currency (OCC) both announced actions against Rabobank for deficiencies in its BSA/AML compliance program and obstruction of the primary regulator (OCC). 
DOJ announced that Rabobank pleaded guilty to a felony conspiracy charge for impairing, impeding, and obstructing its primary regulator, OCC, by concealing deficiencies in its AML program and for obstructing OCC’s examination of Rabobank. The bank agreed to forfeit $368,701,259 for allowing illicit funds to be processed through the bank without adequate BSA/AML review, and OCC issued a $50 million civil money penalty against Rabobank for deficiencies in its BSA/AML compliance program. DOJ’s Money Laundering and Asset Recovery Section Bank Integrity Unit, the U.S. Attorney’s Office of the Southern District of California, U.S. Immigration and Customs Enforcement-Homeland Security Investigations (ICE-HSI) within the Department of Homeland Security, Internal Revenue Service Criminal Investigation (IRS-CI), and the Financial Investigations and Border Crimes Task Force conducted the criminal investigation. The investigation occurred in parallel with OCC’s regulatory investigation and the investigation by FinCEN’s Enforcement Division. OCC officials told us they collaborated extensively with other agencies over a 4-year period, participated in numerous calls and meetings, and provided law enforcement with examination information and access to OCC examiners for interviews. Officials from the U.S. Attorney’s Office of the Southern District of California said that a practice they found helpful in this case was establishing a liaison with the agencies involved. The liaisons allowed the different parties to share information effectively, provided access to data as needed, and responded to questions in a timely manner. U.S. Bancorp. On February 15, 2018, DOJ, OCC, and FinCEN announced actions against U.S. Bancorp and its subsidiary U.S. Bank, N.A., for violations of several provisions of BSA, including an inadequate BSA/AML program and failure to file suspicious activity reports (SAR) and currency transaction reports (CTR). Under a deferred prosecution agreement with the U.S. 
Attorney’s Office of the Southern District of New York, U.S. Bancorp and its subsidiary agreed to pay $528 million for BSA violations and agreed to continue to reform its AML program. Of the $528 million, $75 million was satisfied by a penalty paid to the Department of the Treasury as part of OCC’s civil money penalty assessment, which cited the bank in a 2015 consent order for failure to adopt and implement a program that covered required BSA/AML program elements. FinCEN also reached an agreement with U.S. Bank to resolve related regulatory actions, which required U.S. Bank to pay an additional $70 million for civil violations of the BSA. On the same day as the FinCEN agreement, the Board of Governors of the Federal Reserve System (Federal Reserve) imposed a $15 million penalty against U.S. Bancorp for deficiencies (including BSA violations) related to the bank under its supervision. According to officials from the U.S. Attorney’s Office of the Southern District of New York, their office, OCC, FinCEN, and the Federal Reserve coordinated the terms of their respective resolutions to avoid the unnecessary imposition of duplicative penalties. OCC officials told us that the U.S. Attorney’s Office of the Southern District of New York contacted them to obtain additional information about its examination conclusions that supported OCC’s 2015 cease and desist order. OCC provided examination documents and information to the U.S. Attorney’s Office of the Southern District of New York for 2 years, including making OCC examiners available for interviews with U.S. Attorney’s Office personnel and to answer follow-up inquiries. Federal Reserve officials said they coordinated in the U.S. Bancorp case through a global resolution with the firm. Banamex. In May 2017, Banamex admitted to criminal violations and entered into a non-prosecution agreement, which included an agreement to forfeit $97.44 million. 
The bank also admitted that it should have improved its monitoring of money services businesses’ remittances, but failed to do so. The investigation was conducted by the Bank Integrity Unit of DOJ’s Money Laundering and Asset Recovery Section, U.S. Attorney’s Office of the District of Massachusetts, IRS-CI, Drug Enforcement Administration, and the Federal Deposit Insurance Corporation’s (FDIC) Office of Inspector General. The agencies consulted at a general level, but the regulatory investigations were kept separate from the criminal investigation at all times. In July 2015, FDIC and the California Department of Business Oversight assessed civil money penalties against Banamex requiring a total payment of $140 million to resolve separate BSA regulatory investigations. In February 2017, FDIC also announced enforcement actions against four former senior bank executives relating to BSA violations. IRS-CI officials stated that involvement by the Bank Integrity Unit of DOJ’s Money Laundering and Asset Recovery Section in financial institution investigations is extremely helpful as the unit brings a wealth of knowledge and resources. DOJ officials told us there was close collaboration between all agencies involved. DOJ officials said that all agencies met frequently and created a liaison position to encourage interagency collaboration as the case progressed. In May 2018, DOJ issued a new policy to encourage coordination among DOJ and supervisory agencies during corporate investigations. In a May 2018 speech, the DOJ Deputy Attorney General identified the Securities and Exchange Commission, Commodity Futures Trading Commission, Federal Reserve, FDIC, OCC, and the Department of the Treasury’s Office of Foreign Assets Control as agencies with which DOJ works to be better able to detect sophisticated financial fraud schemes and deploy adequate penalties and remedies to ensure market integrity. 
He noted that many federal, state, local, and foreign authorities that work with DOJ were interested in further coordination with DOJ. DOJ’s new policy encourages coordination and consideration of the amount of fines, penalties, or forfeiture paid among DOJ components and other law enforcement or other federal, state, local, or foreign enforcement authorities seeking to resolve a case with a company for the same misconduct. Similarly, in June 2018, the Federal Reserve, FDIC, and OCC issued a joint statement on coordination among federal banking agencies during formal enforcement actions. Appendix IV: Comments from the Financial Crimes Enforcement Network Appendix V: Comments from the Commodity Futures Trading Commission Appendix VI: Comments from the National Credit Union Administration Appendix VII: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Allison Abrams (Assistant Director), Verginie Tarpinian (Analyst in Charge), Peter Beck, Joseph Cruz, Brian James, Moira Lenox, Benjamin Licht, Robert Lowthian, Marc Molino, Ifunanya Nwokedi, Barbara Roesmann, Tyler Spunaugle, Farrah Stone, and Sarah Veale made key contributions to this report.
Why GAO Did This Study Illicit finance activity, such as terrorist financing and money laundering, can pose threats to national security and the integrity of the U.S. financial system. FinCEN is responsible for administering BSA and has delegated examination responsibility to supervisory agencies. FinCEN also is to collect and disseminate BSA data. BSA requires that financial institutions submit reports, which may be used to assist law enforcement investigations. Industry perspectives on BSA reporting have included questions about its usefulness. This report examines, among other objectives, how FinCEN and supervisory and law enforcement agencies (1) collaborate and (2) provide metrics and feedback on the usefulness of BSA reporting. GAO reviewed related laws and regulations, agency documentation, and examination and enforcement action data, and interviewed FinCEN, supervisory agencies, and a nongeneralizable selection of six law enforcement agencies and seven industry associations. What GAO Found The Financial Crimes Enforcement Network (FinCEN), within the Department of the Treasury; supervisory agencies (such as banking, securities, and futures regulators); and law enforcement agencies collaborate on implementing Bank Secrecy Act/anti-money laundering (BSA/AML) regulations, primarily through cross-agency working groups, data-sharing agreements, and liaison positions. FinCEN and law enforcement agencies provided some metrics and institution-specific feedback on the usefulness of BSA reporting (such as suspicious activity reports) to the financial industry, but not regularly or broadly. FinCEN and some agencies have metrics on the usefulness of BSA reports. One law enforcement agency annually publishes aggregate metrics on BSA reports that led to investigations and indictments. But FinCEN did not consistently communicate available metrics; it generally did so on an ad hoc basis, such as through published speeches. 
In 2019, FinCEN began a study to identify measures of the value and usefulness of BSA reporting—to be completed by the end of 2019. By consistently communicating currently available metrics (summary data), and any later identified by the study, FinCEN may assist financial institutions in more fully understanding the importance of their efforts. Industry associations GAO interviewed noted financial institutions would like to receive more institution-specific feedback on the usefulness of their BSA reporting; they also identified suspicious activity reports as labor-intensive. In 2017, FinCEN began providing such feedback, and some law enforcement agencies have ongoing efforts to provide institution-specific briefings. But these efforts have not been made regularly and have involved relatively few institutions. Additional and more regular feedback, designed to cover different types of financial institutions and those with significant financial activity, may enhance the ability of the U.S. financial industry to effectively target efforts to identify suspicious activity and provide quality BSA reporting. What GAO Recommends GAO makes four recommendations, including that FinCEN review options to consistently communicate summary data and regularly provide institution-specific feedback on BSA reporting. FinCEN concurred with the recommendation on summary data and agreed with the spirit of the recommendation on feedback. FinCEN raised concerns about the need for the two other recommendations. GAO continues to believe the recommendations have merit, as discussed in the report.
Background As the landlord for the federal government, GSA acquires space on behalf of federal agencies through new construction and leasing. In this capacity, GSA leases space in 8,681 buildings or other assets and maintains a total inventory of more than 370 million square feet of workspace for 1.1 million federal employees, plus support contractors. Furthermore, GSA is authorized by law to enter into lease agreements for up to 20 years and is permitted to obligate funds for its multiyear leases one year at a time. GSA can delegate its leasing authority to agencies if GSA determines it is in the government’s best interest. Agencies may request this delegation of authority when they believe they can obtain the lease more efficiently than GSA. GSA grants three types of delegations of leasing authority, depending on the intended use of the leased space: General purpose – types of space that might be needed by almost any agency, such as office or warehouse space; Categorical – specific types of space that might be needed by some agencies, such as for antennas, depots, or docks; and Special purpose – types of space designated for 13 specified agencies, such as laboratories for the Department of Health and Human Services or office space in or near stockyards for USDA. GSA’s FMR Bulletin C-2, issued in 2014, (the 2014 Bulletin) provides usage and reporting requirements for delegations of leasing authority. Many of these requirements restate or elaborate on various requirements in statute and regulation. All delegations of leasing authority, including general purpose, categorical, and special purpose space delegations, are covered by the 2014 Bulletin. Agencies are responsible for compliance with all applicable requirements when using delegated leasing authority. Agencies must also conform with the requirements of any delegation approval from GSA. The requirements can include limits on square footage or the length of the lease. 
Although GSA delegates its leasing authority to other agencies, it acts as a guarantor for the leases in the event of a default by an agency. GSA officials said that there have not been any defaults to date. The process to apply for delegated leasing authority and then obtain a delegated lease is outlined in figure 1 below. GSA Has Reformed its Delegated Leasing Program, but Data Issues Remain GSA Has Made Efforts to Reform its Delegated Leasing Program In 2007, GAO found that GSA’s delegated leasing program documentation was incomplete, inconsistent, unclear, and outdated. Specifically, we found that GSA’s lease delegation process lacked certain management controls, such as current written policies and procedures. In addition, the GSA OIG found that some delegated leases had excessive rental rates and inadequately documented lease files, primarily due to customer agencies’ lack of expertise. Further, 56 percent of the lease files reviewed by the OIG contained insufficient documentation to support that the federal government received a fair and reasonable price. In response to problems identified in GAO and GSA reviews, GSA reformed its lease delegation program by clarifying requirements, documenting policies and procedures, and centralizing data management. In 2007, GSA issued new requirements for the delegated leasing program in the FMR Bulletin 2008-B1 (2008 Bulletin). For example, the 2008 Bulletin instructed GSA and the agencies on the proper submission of documents to GSA; and required agencies to have an organizational structure in place to support the delegation of authority, and to ensure compliance with all applicable laws, regulations, and GSA directives governing the lease acquisition. In 2014, GSA began using a new electronic system—G-REX—to review and process applications for delegations of leasing authority. Requesting agencies began electronically submitting pre-authorization and post award documents to G-REX. 
In 2014, GSA re-emphasized and updated the requirements applicable to GSA leasing delegations in its 2014 Bulletin, which continued to be in effect when this report was issued. GSA Continues to Address Data Quality Issues GSA continues to address data quality issues that persist in spite of its reform efforts. These data quality issues affect GSA’s ability to monitor its delegated leasing program. First, we found that when information is compiled, the G-REX system overstated total delegated lease contract values, reporting them as 12 times higher than they actually were for every delegated lease in the system. This occurred because the system multiplied annual rents by the number of months of the lease, instead of by the number of years. For example, for a lease with an annual rent of $2,300,000 and a lease term of 48 months, the calculated total contract value was $110,400,000 instead of the $9,200,000 it should have been for the 4-year lease. GSA officials confirmed this error and corrected it during the course of our review. Second, we also found data errors in G-REX resulting in approved delegated leasing projects with recorded annual rental rates higher than they actually were. For example, we found a data entry within G-REX for an approved delegated lease with a total lease rental rate several times higher than the average annual rent rate. After reviewing the lease file, GSA officials confirmed that the rental rate was incorrectly entered by the user into G-REX. We also found two G-REX data entries for approved delegated leasing projects with 25-year lease terms. General purpose delegated leases can only be for terms of up to 20 years. GSA officials confirmed that both identified leases were within the authorized delegated leasing parameters but that the data entries were inaccurate due to a system error within G-REX that incorrectly calculated the renewal options. 
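The flawed calculation is easy to reproduce. The sketch below is illustrative only (the function names are ours, not G-REX's), but the arithmetic matches the example above:

```python
def buggy_total_contract_value(annual_rent, term_months):
    # The error GAO found: annual rent multiplied by the number of
    # months, not years, inflating the result by a factor of 12.
    return annual_rent * term_months

def correct_total_contract_value(annual_rent, term_months):
    # Annual rent multiplied by the term expressed in years.
    return annual_rent * (term_months / 12)

# The lease from the example: $2,300,000 per year for 48 months.
print(buggy_total_contract_value(2_300_000, 48))    # 110,400,000 (overstated)
print(correct_total_contract_value(2_300_000, 48))  # 9,200,000 (4-year value)
```

Because every total in the system was computed the same way, every delegated lease carried the same 12-fold overstatement until GSA corrected the formula.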
GSA officials said that they are aware of some data quality issues with the G-REX system and are working to address them in an updated version, which they plan to launch later in 2019. Officials said that the new version of G-REX will include more business rules to prevent missing data and identify anomalies. Further, uploading required post award documents is not currently a mandatory action in G-REX. Instead, G-REX sends automatic reminder emails to agencies if these documents have not been uploaded. To address this issue, GSA officials said that the new version of G-REX would improve the post award document upload process. As we discuss later in this report, we found that selected agencies did not always submit all required post award documents. GSA Has Not Annually Reconciled G-REX and FRPP Data While GSA is taking steps to improve the G-REX system, it does not reconcile FRPP and G-REX data. Specifically, the 2014 Bulletin states that GSA will perform an annual reconciliation of data between FRPP and G-REX. GSA officials described the annual reconciliation as an oversight procedure that would help ensure that GSA has an accurate listing of delegated leases by comparing FRPP data with the centralized records on delegated leases (currently stored in G-REX). According to GSA officials, they tried to fully reconcile the two databases in 2014 but were unable to do so. GSA officials stated that while they could identify certain specific discrepancies between FRPP and G-REX, conducting a full reconciliation of the two databases has many degrees of complexity. Specifically, G-REX does not include all delegated leases, in part, because not all existing delegated leases migrated into G-REX from the prior GSA leasing system. In addition, GSA officials said FRPP and G-REX do not directly match because each database serves different purposes. Specifically, FRPP is a single comprehensive database that contains information on federal real property worldwide, updated annually. 
In contrast, G-REX is considered a business process management software application and is primarily used by GSA to process and capture lease delegation applications, according to GSA officials. GSA officials now report that, even though the 2014 Bulletin still calls for the annual reconciliation of data in G-REX and FRPP, they believe fully reconciling the two datasets would have little, if any, value and currently have no intention of doing so. The Standards for Internal Control in the Federal Government state that improving the reliability of data could help agencies better manage programs. For example, in this case, agencies could utilize real property data to measure performance and inform decision-making to ultimately improve the cost effectiveness and efficiency of their real property portfolios. Moreover, although FRPP data quality could be improved, FRPP can still provide reliable background information on GSA’s federal real property portfolio. Since agencies are required to report data to FRPP on all leased assets acquired under a delegation from GSA, FRPP may provide GSA with useful information on an agency’s delegated leases, in addition to what is included in G-REX. We recognize the challenges posed by attempting to fully reconcile G-REX and FRPP. However, the 2014 Bulletin does not explicitly state GSA will perform a full reconciliation. GSA could partially reconcile G-REX and FRPP by doing some cross-data comparison. For example, had GSA cross-verified G-REX and FRPP data, even on a case-by-case basis, it could have potentially caught and addressed the data quality issues we found in G-REX earlier. Some comparison of G-REX with the relevant data in FRPP could improve the reliability, and thereby the usefulness, of both data sets. For example, GSA officials said that GSA could, in theory, begin comparing leases reported in FRPP as being awarded with delegated authority against G-REX’s record of delegated leases. 
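A cross-check of that kind amounts to a simple set comparison. The sketch below is a rough illustration under assumed inputs; the lease identifiers and the idea that each database exposes a clean list of delegated-lease IDs are hypothetical, not features of FRPP or G-REX:

```python
# Hypothetical lease IDs reported to FRPP as awarded under delegated
# authority, and lease IDs recorded in G-REX as delegated.
frpp_delegated = {"LEASE-001", "LEASE-002", "LEASE-003", "LEASE-004"}
grex_delegated = {"LEASE-001", "LEASE-002", "LEASE-005"}

# In FRPP but not G-REX: possibly awarded without delegated authority,
# or never migrated into G-REX from the prior leasing system.
missing_from_grex = frpp_delegated - grex_delegated

# In G-REX but not FRPP: a possible FRPP reporting gap.
missing_from_frpp = grex_delegated - frpp_delegated

print(sorted(missing_from_grex))  # ['LEASE-003', 'LEASE-004']
print(sorted(missing_from_frpp))  # ['LEASE-005']
```

Each ID in either difference set would then be a candidate for the kind of case-by-case follow-up the officials describe, rather than a full reconciliation of every field in both databases.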
A partial reconciliation like this could identify leases possibly acquired without delegated leasing authority or other data quality issues and GSA could then take steps to increase the reliability of the G-REX data. Until GSA clarifies its position on what efforts it will take to reconcile G-REX and FRPP, GSA is potentially losing opportunities to enhance its oversight and is operating at odds with its own procedures. GSA Does Not Know if Agencies Have the Policies and Procedures to Appropriately Manage Their Delegated Leasing Activities GSA Does Not Regularly Assess Whether Agencies Have Policies and Procedures to Effectively Manage Delegated Leasing Activities We found that GSA has not designed control activities that would allow it to regularly determine the adequacy of requesting agencies’ policies and procedures to manage their delegated leasing activities. Instead, GSA officials said that they expect agencies to have the capacity to manage their delegated leases until evidence suggests otherwise and said GSA assesses agencies’ activities on an ad hoc basis. For example, GSA officials said that GSA audited USDA and Bureau of Indian Affairs (BIA) because of tips from outside sources. Agencies requesting a delegation of leasing authority must submit, among other things, an organizational structure and staffing plan to support the delegation that identifies trained and experienced staff to support delegated leasing activities. In our review, we found that not all selected agencies had sufficient policies and procedures to manage their own delegated leases. For example, GSA’s ad hoc review of USDA’s delegated leases found significant oversight issues. Specifically, GSA found that USDA had awarded seven leases without a delegation of authority. In addition, USDA was unable to locate the executed lease for one of the delegated leases we reviewed. 
USDA officials said the agency has learned from experiences like this one and is currently developing better policies and procedures to prevent this from happening again. For example, USDA has centralized leasing oversight between two bureaus and plans to annually review selected delegated leases. Moreover, GSA’s ad hoc review of BIA’s delegated leases found that BIA had also leased property without delegated authority. Further, GSA’s 2012 audit of post award documents found that BIA had some delegated leases that had expired, and some exceeded the space threshold of 19,999 square feet. As a result of its review, GSA did not grant BIA any new delegated leasing authority until its OIG completed its findings and BIA responded with a corrective action plan that addressed these deficiencies, according to GSA. GSA’s 2014 Bulletin states that GSA will review the adequacy of the requesting agency’s organizational structure and staffing proposed for the delegation and whether the requesting agency has complied with all applicable laws, executive orders, regulations, OMB Circulars, and reporting requirements under previously authorized delegated leases. Further, according to federal standards for internal control, management should design control activities to achieve objectives and respond to risks. Control activities are the actions management establishes through policies and procedures to achieve objectives and respond to risks in the internal control system. Accordingly, agencies with delegated leasing authority should have an appropriate organizational structure and effective policies and procedures to support the delegation and to ensure compliance with applicable laws and other requirements, both of which help agencies manage their delegated leasing activities. 
If GSA had designed control activities to regularly review each agency’s policies and procedures for managing its delegated leases, GSA officials could have known earlier that an agency lacked the ability to manage its delegated leases and possibly delayed granting additional delegations of leasing authority until the agency had demonstrated its ability to manage its delegated leasing activities. GSA officials said assessing an agency’s policies and procedures to manage delegated leasing activities when reviewing the agency’s individual application for a delegation of leasing authority is not practical. GSA officials noted that it would become a repetitive and unproductive process to review an agency’s policies and procedures each time it applied for delegated leasing authority, as the same agencies are requesting delegated leasing authority for many leases and an agency’s policies and procedures would not change with each new application. However, GSA could assess agencies’ policies and procedures for managing delegated leasing activities at regular intervals, such as annually or biennially. Because GSA is not following its own procedures set out in the 2014 Bulletin, or designing control activities that would allow it to assess, at regular intervals, agencies’ ability to manage their own delegated leasing activities, GSA cannot ensure that it is providing this authority to agencies that can manage it effectively. GSA Does Not Track Agencies’ Performance in Meeting GSA Management Goals GSA does not track agencies’ performance toward meeting GSA’s management goals, which is inconsistent with the 2014 Bulletin and GSA policy. GSA has three key management goals for tracking the success of the delegated leasing program: 1. Delegated leases should have lease rates that are at or below private sector rates over half the time, according to GSA’s annual performance plan. 
The 2014 Bulletin states that, prior to granting the agency’s request for a leasing delegation, GSA will consider the demonstrated ability of the requesting agency to meet or exceed this published performance measure for the cost of leased space, among other things. 2. Delegated leases should not extend into holdover status. The 2014 Bulletin states that a lease in holdover status, or an agency occupying a building or space with no lease because it has expired, is in violation of the lease delegation authority. 3. Delegated leases should not be extended unless necessary to avoid a holdover. GSA’s leasing desk guide states that short-term lease extensions should only be used as a last resort because they typically cost more, among other reasons. The post award documents that agencies submit into G-REX do not allow GSA to track agencies’ performance in meeting these management goals. For example, G-REX does not calculate when lease rates are at or below private sector rates. GSA officials said that GSA does not track the performance of agencies with delegated leasing authority against these three management goals because it is primarily the agencies’ responsibility to ensure they meet them. However, the four agencies with delegated leases that we reviewed did not always meet GSA’s three goals. Officials from two of the agencies we interviewed said that they were unaware of GSA’s performance cost metric for negotiating lease rates at or below private sector rates or that it applied to delegated leases. Consequently, the agency officials did not know if they met it. Since neither G-REX nor the agencies with delegated authority track lease rates in this way, GSA does not know if agencies are meeting GSA’s performance cost metric or, more simply stated, if agencies are negotiating cost-effective lease rates. 
Regarding holdovers, we found all four agencies in our review were experiencing holdovers, which raises questions about how effective their policies are to prevent them. For example, USDA does not use its lease expiration data in an effective manner to track expiring leases to submit lease delegation applications, according to GSA’s audit of USDA delegated leases. Consequently, USDA had approximately one quarter (1,100 of 4,000) of its delegated leases in holdover status in the past 24 months, according to the GSA report. Furthermore, according to our analysis of agency data, all four selected agencies have expired delegated leases where the agency either has a standstill agreement with the landlord or is simply in holdover status. For example, VA had approximately 10 percent of its delegated leases in holdover status in fiscal year 2018. Regarding extensions, according to G-REX data, almost half of all approved delegated lease authority requests from fiscal year 2016 to fiscal year 2018 were for lease extensions, which goes against GSA’s goals. Officials from three of the four agencies in our review said that they use extensions because they need more time to develop the agency’s space need requirements for a new delegated lease, and they might not have the time to do so before the current delegated lease’s expiration date. GSA staff stated that if an agency has a large number of extensions or holdovers, it denotes that the agency may not be monitoring its leases and as a result is not fully aware of expiring delegated leases. Tenant agencies agree that lease extensions are often not in the best financial interest of the federal government because they are not open to competition, according to our previous work. For example, USDA’s delegated lease site in Coquille, Oregon, was extended without competition for 45 years. 
USDA officials agreed this was not in the best financial interest of the federal government and that delegated leases should be opened for competition after 20 years. Lease extensions and expired leases in holdover or standstill status are inefficient and costly for the federal government for two reasons. First, without competition among landlords, an agency may be unable to meet the goal of negotiating a lease rate at or below the private sector rate. Second, we have previously reported that the short-term nature of holdovers and standstill agreements creates uncertainties, which can make it challenging for agencies to plan and budget for space needs and difficult for lessors to secure financing. Moreover, we have reported that holdovers can create an adversarial relationship with building owners, prompt concerns about an agency’s portfolio management, and create unnecessary uncertainty for relevant stakeholders. We also noted that holdovers and standstills occur for a variety of reasons, including challenges finalizing space requirements, tenant agency labor shortages, and the sometimes lengthy duration of the leasing process. Absent procedures to regularly track the performance of agencies with delegated leasing authority to ensure cost effectiveness and limit the use of extensions, holdovers, and standstill agreements, GSA cannot ensure that these agencies are meeting the management goals of the delegated leasing program. When previously reviewing GSA’s management of its own portfolio, we found that tracking and monitoring several measures over the life cycle of the lease acquisition process may help reduce the overall number of holdovers and extensions. For example, using a tracking tool to alert management of delegated leases approaching their expiration date could help to reduce the reliance on extensions and to prevent holdovers and standstill agreements. 
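A minimal sketch of such a tracking tool is below. The lease identifiers, expiration dates, and the warning threshold are illustrative assumptions, not GSA figures; the point is simply that flagging leases well before expiration leaves time to compete a replacement lease instead of falling into an extension or holdover:

```python
from datetime import date, timedelta

# Hypothetical portfolio of delegated leases and their expiration dates.
leases = {
    "USDA-0012": date(2019, 9, 30),
    "VA-0480": date(2021, 3, 31),
    "BIA-0031": date(2019, 7, 15),
}

def expiring_soon(leases, today, warning_days=540):
    """Return lease IDs within `warning_days` of expiration, so that
    replacement actions can start early enough to avoid extensions,
    holdovers, and standstill agreements."""
    cutoff = today + timedelta(days=warning_days)
    return sorted(lid for lid, expires in leases.items() if expires <= cutoff)

print(expiring_soon(leases, today=date(2019, 6, 1)))  # ['BIA-0031', 'USDA-0012']
```

In practice the alert window would be tuned to how long the agency's lease acquisition process actually takes, since challenges finalizing space requirements are among the reasons the report cites for holdovers.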
Regularly tracking agencies’ ability to meet key management goals would alert GSA to holdovers and heavy use of extensions that are not cost effective and may warrant additional oversight. GSA Cannot Ensure That Individual Delegated Leases Met Requirements GSA requires that agencies submit an acquisition plan for their lease when requesting delegated leasing authority, but GSA does not systematically ensure that the subsequently executed leases follow those plans and meet program requirements. Agencies submit an acquisition plan along with other documents in order to request delegated leasing authority. GSA officials told us that they review requests for delegated leasing authority by verifying that all required information and documents are uploaded into G-REX and that a lease consistent with the acquisition plan would meet program requirements. GSA officials noted, however, that the acquisition plan is strictly a planning tool and that the terms and conditions are subject to change when finalizing the lease. When approving a request for delegated leasing authority, GSA issues an executive summary and approval letter to the agency identifying the parameters of the leasing authority delegated, such as space limits. Once the agency with delegated leasing authority awards the lease, the agency is required to upload to G-REX certain post award documentation, including the executed lease, within 30 days. These documents provide insight on final lease terms such as square footage, lease expiration date and cost, which may differ from the acquisition plans agencies submitted when applying for delegated leasing authority. We have previously identified risk-based assessment and mitigation as leading practices for providing assurances to managers that they are complying with existing legislation, regulations, and standards and effectively preventing, detecting, and responding to potential fraud, waste, and abuse. 
Assessing a selection of delegated leases’ post award documents could serve as an early warning system for managers to help mitigate or promptly resolve issues through corrective actions and ensure compliance with existing legislation, regulations, and standards. However, GSA officials said that they do not have a process in place to systematically review post award documents from delegated leases to determine whether the leases awarded met program requirements and were within the authority granted in the approval letter. We found that as of November 2018, GSA had reviewed approximately one percent of the post award documents agencies submitted into G-REX, according to G-REX data. GSA officials told us they had not developed a system for reviewing post award documents because GSA views it as primarily the responsibility of the agency with the delegated authority to ensure it complies with the 2014 Bulletin’s post award requirements. Further, according to GSA officials, GSA’s primary role in the lease delegation process is to review and approve requests for delegated leasing authority. As a result, GSA officials have determined that regularly reviewing post award documents is not the best use of their already constrained resources. However, GSA’s reliance on agencies to comply with all requirements absent any mechanism to ensure post award accountability could allow agencies to lease space outside of the delegated authority granted to them. GSA’s previously mentioned ad hoc audits of USDA and BIA delegated leases reinforced the need for strengthened oversight to ensure that leases meet requirements, as both audits found problems. For example, in 2014 the Department of the Interior’s OIG confirmed GSA’s findings that BIA approved $32.7 million in delegated lease agreements that exceeded GSA square footage and purchase approval limits. GSA’s review of USDA’s delegated leases also found that approximately 540 lease files were missing the awarded lease documents in G-REX. 
In addition, the review found that no file, in its sample of 27 lease files, had all the required documents uploaded in G-REX. Furthermore, among our selected delegated leases, we found instances of agencies not uploading post award documents to G-REX after the lease was awarded. For example, one delegated lease file in our sample was still missing the executed lease over 2 years after the lease was signed. If post award documents are not uploaded as required, GSA may not even have the documentation necessary to determine if a delegated lease met program requirements and was within the authority granted. Even if all post award documents are uploaded, GSA still cannot verify that the leases were executed within the parameters of the granted delegated leasing authority and in accordance with program requirements without a systematic process for reviewing post award documentation. For example, as noted above, if GSA assessed a selection of delegated leases’ post award documents, it may have identified the missing executed lease and other deficiencies noted above and been able to notify the agency. Further, GSA cannot ensure that agencies are preventing fraud, waste, or abuse. Conclusions GSA oversees the delegated leasing program and is a guarantor of the government’s monetary obligations under a delegated lease in the event of default. However, if not properly managed, delegated leases run the risk of not being cost effective for the federal government. GSA has taken some actions to address previously identified issues with the program, but its current oversight and management of the program is compromised by a lack of key processes that make it unable to ensure the program is working as intended. Because GSA has not determined how to reasonably reconcile G-REX and FRPP data, pursuant to its own procedure, it is missing oversight opportunities, such as finding leases with annual rent or lease terms that do not meet program requirements. 
Additionally, without a way to regularly assess agencies’ policies and procedures to manage their delegated leasing activities or track their performance in meeting key management goals, GSA cannot be sure agencies can sufficiently manage their leases or secure cost-effective rates. Periodic reviews of an agency’s ability to manage its delegated leasing activities would help GSA ensure that it is providing this authority to agencies that can manage it effectively and efficiently. Finally, without a systematic process for monitoring a selection of submitted post award documents to help identify and promptly resolve issues and ensure compliance with existing legislation, regulations, and standards, GSA cannot ensure that delegated leases comply with the terms of the delegation and the program is free from fraud, waste, and abuse. Recommendations We are making the following four recommendations to GSA The Administrator of GSA should take steps to reconcile G-REX and FRPP to the extent practical. (Recommendation 1) The Administrator of GSA should develop a process for assessing at regular intervals, such as annually, agencies’ policies and procedures for managing their delegated leasing activities. (Recommendation 2) The Administrator of GSA should develop a process that would allow GSA to track agencies’ progress in meeting GSA management goals, such as cost effective lease rates, and avoiding holdovers. (Recommendation 3) The Administrator of GSA should develop a systematic, risk-based process for monitoring a selection of submitted post award documents. (Recommendation 4) Agency Comments We provided a draft of this product to GSA, VA, USDA, Interior, and Commerce for review and comment. In its comments, reproduced in appendix I, GSA concurred with the recommendations. GSA and USDA provided technical comments, which we incorporated as appropriate. VA, Interior, and Commerce did not have comments. 
We are sending copies of this report to the appropriate congressional committees, the Administrator of the General Services Administration, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or RectanusL@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. Appendix I: Comments from the U.S. General Services Administration Appendix II: Contact and Staff Acknowledgements Contact Lori Rectanus, (202) 512-2834 or RectanusL@gao.gov. Staff Acknowledgements In addition to the individual named above, other key contributors to this report were Keith Cunningham, Assistant Director; Sarah Jones, Analyst in Charge; Eli Albagli; Lacey Coppage; Josh Ormond; Colleen Taylor; Michelle Weathers; and Elizabeth Wood.
Why GAO Did This Study As the federal government's landlord, GSA is authorized to lease property to accommodate federal agencies. It can also delegate this authority to other agencies, though GSA is still responsible for overseeing the delegated leasing program. However, prior audits found problems with delegated leasing, including excessive rental rates and insufficient documentation to support that the government received a fair and reasonable price for the lease. GAO was asked to review GSA's delegated leasing program. This report examines: 1) GSA's efforts to reform its delegated leasing program; 2) the extent to which GSA assesses agencies' policies, procedures, and performance in managing their delegated leasing activities; and 3) the extent to which GSA ensures delegated leases meet requirements. GAO reviewed federal statutes and regulations, and GSA's guidance and data on delegated leases. To illustrate how GSA approves and oversees delegated leases, GAO judgmentally reviewed 17 delegated leases selected to reflect a range of lease contract values, lease types, and agencies with a high number of delegated leases. GAO interviewed officials from GSA and the four agencies associated with GAO's selected delegated leases. What GAO Found The General Services Administration (GSA) has taken steps to reform its delegated leasing program, but data reliability issues remain. For example, GSA created GSA's Real Estate Exchange (G-REX) to centralize delegated lease requests and approvals, but GAO found G-REX had incorrect information on lease rental values and rates—reporting rates 12 times higher than they actually were. Moreover, GAO found that GSA was not annually reconciling data between G-REX and the government-wide real property database, per GSA's own procedures. GSA officials said that their past efforts to fully reconcile the data were unsuccessful but acknowledged there may be ways to compare the data to improve the reliability of both datasets. 
Until GSA clarifies what it can do to partially reconcile the data sets, it is not obtaining the intended benefits of this data validation exercise. GSA does not know if agencies have the ability to manage their delegated leasing activities because it does not regularly assess their policies and procedures, or their performance in meeting GSA's management goals, such as avoiding extensions. GSA procedures state that GSA will consider the agency's organizational structure and ability to meet certain GSA performance measures prior to granting requests for delegated leasing authority. Moreover, federal internal control standards call for agencies to design control activities to better manage the program. However, GSA officials said that GSA relies on the agencies to oversee their own delegated leases. Nevertheless, GAO found that one agency had inadequate policies and procedures for managing its delegated leasing activities. Further, all four agencies had delegated leases that were in holdover status (occupying a space beyond the expiration of the lease term), which violates program requirements. Because GSA does not regularly assess agencies' procedures or performance, it cannot ensure that agencies are effectively managing their delegated leasing activities. GSA cannot ensure that the leases agencies execute under delegated authority meet program requirements and are within the authority granted because it lacks key procedures to do so. GAO found that GSA had only reviewed 1 percent of the post lease award documents agencies had submitted, and in some cases, agencies had not submitted required documentation. GSA officials said the agencies are responsible for ensuring that documents are submitted and requirements are met. However, a risk-based assessment of a selection of delegated leases' post award documents can provide assurances that agencies comply with existing regulations and prevent potential fraud, waste, and abuse. 
Because GSA did not have a process to systematically review these documents, GSA is unable to ensure that delegated leases meet requirements and that agencies are positioned to prevent fraud, waste, or abuse. What GAO Recommends GAO recommends that GSA (1) reconcile its databases, (2) regularly assess agency procedures for managing delegated leasing, (3) track agency performance, and (4) develop a review process for post lease award documents. GSA agreed with the recommendations.
Background Since the early 1980s, the Air Force has been working to modernize and consolidate its space command and control systems and improve its space situational awareness. Effective command and control systems are important because DOD space capabilities are globally distributed and operated from geographically diverse locations. With new threats against space assets, the ability to quickly respond or take action can mean the difference between mission success and failure. Space situational awareness is the current and predictive knowledge and characterization of space objects and the operational environment upon which space operations depend. Good space situational awareness data are the foundation of command and control systems because the data are critical for planning, operating, and protecting space assets and informing government and military operations. Past Command and Control Efforts The Air Force’s last three space command and control programs over more than three decades have ended significantly over budget and behind schedule, and key capabilities have gone undelivered. Those programs were the Cheyenne Mountain Upgrade, the Combatant Commanders’ Integrated Command and Control System, and the Joint Space Operations Center Mission System. Some capabilities were deferred from one program to the next, making the true cost growth in each program significantly higher when compared to original program content. This deferral was due in part to the complicated nature of the planned work. Enabling a single system to command and control numerous assets in space and on the ground at multiple levels of information classification is a technically challenging task. In addition, as discussed below, we found that the Air Force made optimistic cost and schedule estimates for these programs, and thus did not assign adequate resources to their development. 
Cheyenne Mountain Upgrade Begun in 1981, the Cheyenne Mountain Upgrade was intended to modernize systems that provide critical strategic surveillance and attack warning and assessment information. We issued 11 reports on the Cheyenne Mountain Upgrade program between 1988 and 1994. In 1991, we found that the program planned to complete only a portion of its requirements in an attempt to stay within budget and schedule constraints. We also found that the Air Force had adopted a strategy of deferring some requirements on the optimistic assumption that these requirements could be achieved during later stages of system development. We concluded that while such deferrals may have permitted the Air Force to meet revised short-term goals, they also masked the magnitude of problems the program experienced as it moved forward. We also found that DOD had not formally evaluated the performance risks related to deferring requirements and concluded that the strategy of deferral significantly raised the risk that system development would be more costly and take longer. DOD declared the program operational in 1998; however, some critical capabilities were not delivered. At that time, the program was nearly $1 billion over budget and 11 years late. That same year, DOD determined that some of the program’s components were not well integrated and would be unresponsive to future mission needs. Combatant Commanders’ Integrated Command and Control System DOD initiated the Combatant Commanders’ Integrated Command and Control System program in 2000 to modernize and integrate the Cheyenne Mountain Upgrade computer systems and to replace a space situational awareness data computer system called the Space Defense Operations Center (SPADOC). At that time, the SPADOC system was significantly overtaxed and in need of replacement by a system that could handle larger volumes of data. 
In 2006, we found that Combatant Commanders’ Integrated Command and Control System program costs had increased by approximately $240 million, 51 percent over initial estimates, and the program was at least 3 years behind schedule. In addition, we found that some capabilities had been deferred indefinitely, resulting in increased risks to performing future operations. Further, we found that the Air Force did not effectively assess the appropriateness of the program’s requirements prior to initiating the program, leading to significant additions, deletions, and modifications to the program’s initial requirements. Consequently—similar to what transpired within the Cheyenne Mountain Upgrade program—significant amounts of work were deferred to address the cost increases associated with requirements changes. Ultimately, the Combatant Commanders’ Integrated Command and Control System program was not able to successfully replace SPADOC. Joint Space Operations Center Mission System Started in 2009, the Joint Space Operations Center Mission System (JMS) was the Air Force’s most recent effort to meet command and control capability and space situational awareness data needs and replace the SPADOC system. JMS was a software-intensive system and was supposed to be delivered in three increments. Increment 1 was to provide the foundational structure for the overall program. Increment 2 was to deliver numerous operational capabilities to users, including replacing SPADOC by the end of fiscal year 2014 with the ability to automatically determine if objects in space were likely to collide (called conjunction assessments), which was a key performance parameter for the program. Increment 3 was to provide additional command and control capabilities and the ability to incorporate data from highly classified special access programs. Of the three planned increments, Increment 1 is the only one that is fully operational today. 
JMS Increment 2 encountered significant challenges during development, and in 2016 the program experienced a critical change because of significant schedule delays and cost increases. Specifically, JMS Increment 2 planned to delay delivery by more than 1 year, in turn increasing total program costs by over 25 percent. According to the August 2016 JMS Critical Change Report, which the program office submitted to Congress in September 2016 as a result of the critical change, several issues contributed to Increment 2’s challenges. These included an overly aggressive schedule, inadequate staffing, underestimating the amount of work required to integrate various pieces of the system that were developed by different groups, and numerous concurrent development efforts. An independent program assessment team composed of military, intelligence, and contractor staff determined that the JMS program had underestimated the complexity of developing the system. Further, the program reported that its organizational structure proved problematic. For example, the program reported that program-related contracts were awarded and administered outside the program office, which limited program flexibility and support and hampered effective oversight. As a result of the critical change, the program re-estimated its costs, established new schedule goals, and deferred a number of capabilities and requirements to Increment 3. Even after these changes, JMS Increment 2 was not successful at delivering its planned capabilities. Air Force operational testing in 2018 revealed significant issues with JMS Increment 2 performance. The Air Force’s test team determined that Increment 2 was not suitable for operations, as it was unable to provide conjunction assessments or maintain the catalog of space objects, another key performance parameter. In the wake of these findings and the numerous issues found in testing, the Air Force stopped further development on JMS Increment 2. 
When development ended, JMS was almost 3 years behind schedule and $139 million (42 percent) over budget. Air Force leadership placed the JMS Increment 2 program in sustainment and transferred three of the 12 planned Increment 2 capabilities into operations; the remaining nine capabilities were to be used for planning and analytic purposes only, as they were not reliable enough for operational use. Key requirements from Increment 2, including automated conjunction assessments and the ability to maintain a high-accuracy space catalog, as well as all of the requirements from Increment 3, were deferred to a subsequent effort, called the Space C2 program. SPADOC Replacement and Space C2 Because JMS was unable to replace SPADOC, the system is still in use today. Since 2000, the Air Force has been addressing unique space surveillance requirements for follow-on systems to SPADOC. Air Force officials we spoke with stated that the system’s ability to continue operations is a growing concern. While work is underway to move SPADOC onto a more modernized platform and infrastructure, the Air Force has not established a schedule for that effort. In the meantime, Air Force officials told us that large amounts of data are going unprocessed as the volume of available sensor data today is greater than ever before—and is expected to increase exponentially in the next year as new DOD sensors come online. The Space C2 program is the Air Force’s latest software-intensive program to develop capabilities to anticipate and respond to emerging threats in space and ensure the uninterrupted availability of capabilities to the warfighter. SPADOC is expected to be retired as Space C2 capabilities become operational. The Air Force expects to spend between $72 million and $108 million per year on the Space C2 program, which is managed by the Air Force’s Space and Missile Systems Center, through fiscal year 2024. 
The Air Force’s Space C2 Program Is in Its Early Planning Stages and Is Taking a New Approach to Software Development While it is still early in the planning and development stages, the Air Force’s Space C2 program office expects to deliver a consolidated space command and control system over the next few years using a new system design. The program also plans to use a modernized, iterative software development process called Agile development to more quickly and responsively provide capability to users. According to Air Force officials, this development approach is relatively new to DOD programs. Therefore, the Space C2 program and DOD officials are working to determine the appropriate level of detail needed for the program’s planning documents as well as the best way to provide oversight of a non-traditional development approach. The Space C2 Program Plans to Consolidate Capabilities Using a New System Design The Space C2 program is intended to consolidate operational level command and control capabilities for DOD space assets into an integrated system, allowing operators and decision makers to have a single point of access to command and control space assets around the globe in a timely manner. A consolidated space command and control capability will: allow operators to comprehensively identify and monitor threats; identify possible courses of action to mitigate or eliminate threats; communicate courses of action to decision makers; and direct action to respond to threats. A consolidated space command and control capability is necessary, according to Air Force and DOD officials we met with, because the space domain has transitioned from a benign environment to one that—like ground, sea, and air domains—is contested by foreign adversaries. According to these officials, DOD needs the ability to respond to the increased threats to U.S. space assets in near real-time. 
Consequently, the Air Force is planning for Space C2 program capabilities to be significantly more automated than in the past, requiring high-quality software development and architecture planning. As shown in figure 1, the Space C2 program itself will consist of multiple layers. Program officials explained that the foundational layer is the computing infrastructure, which must be secure from vulnerabilities and have adequate processing power to accommodate the complexity of the system. On this infrastructure will run the software platform, which forms the backbone of the operating system. The Space C2 program plans to procure the platform commercially. The software platform will contain standards that developers will need to comply with to create applications that will work on the platform. Some applications may be targeted to a broad number of users, and some may be more niche capabilities for a particular group of users. Space C2 program officials told us they believe this structure will allow them to be flexible in meeting multiple user needs more responsively than has been possible in past DOD programs. Users include, for example, space system operators responsible for predicting and avoiding space object collisions, and other operators responsible for responding to conflicts in space. The program also expects applications from a variety of developers, both commercial and government, to run on the platform, thus presenting opportunities for companies that do not regularly do business with DOD to participate in the program. The work being done for the Space C2 program is spread out among multiple Air Force groups. For example, the Air Force Research Laboratory has been developing applications for the Space C2 program both internally and with commercial partners since 2016. The Laboratory is also working on some battlespace awareness capabilities that may eventually run on the Space C2 program’s platform. 
Additionally, officials from the Air Force Rapid Capabilities Office stated that they have been working on common interface standards for applications, and this work will feed into the Space C2 program. As the Enterprise Manager, the Space C2 program manager is responsible for integrating all of the development work selected for use in the Space C2 program, irrespective of its origin. A principal component of the Space C2 program is a data repository that will be populated with data from a wide variety of commercial, civil, military, and intelligence space sensors. Eventually the program plans for operators using the Space C2 program’s platform and applications to be able to retrieve data from the data repository. The data will be electronically tagged with its appropriate classification level and will be accessible to users according to their individual security clearances. The overall design of the Space C2 program is for data to be gathered from sensors, placed into the data repository, and then be available for various applications to process and provide timely information to space operators and commanders on threats to space assets and anomalies in the space environment. Operators and commanders will then be able to promptly direct actions, such as tasking sensors to collect additional data or respond to threats. Figure 2 shows the proposed construct of the Space C2 program, including the various actions that can be taken in response to the data collected by the sensors. The Air Force Plans to Use an Agile Software Development Approach for Space C2 Development The Space C2 program is planning to use an approach new to DOD in terms of software development, known as Agile. Agile development is a flexible, iterative way of developing software that delivers working capabilities to users earlier than the traditional, incremental DOD software development processes, known as the waterfall approach. 
Agile practices integrate planning, design, development, and testing into an iterative life cycle to deliver software early and often, such as every 60-90 days. The frequent iterations of Agile development are intended to effectively measure progress, reduce technical and programmatic risk, and be responsive to feedback from stakeholders and users. This is different from the way DOD has developed software in the past, in which requirements were solidified in advance of development and the software was delivered as a single completed program at the end of the development cycle—with no continual involvement or feedback from users or ability to modify requirements. Traditional software development mirrored the development of a hardware system. We have previously reported on past DOD software programs that experienced challenges due, in part, to that traditional development approach. The differences between the two approaches are illustrated in figure 3. The Space C2 program is one of the first DOD software-intensive programs to move away from the traditional approach and into the more modernized Agile development methodology. Program officials told us that many of the problems with JMS’s development stemmed from its more traditional approach, and that with the Space C2 program they wanted to avoid circumstances that did not lead to program success. Considering that past software development problems were caused, at least in part, by the traditional method of software development, utilizing a different approach could be a positive step. However, the current DOD acquisition instruction does not include guidance for Agile software programs. According to DOD officials, new software guidance is in development, and this guidance is expected to offer pathways for developing Agile programs. 
DOD has also developed a draft template to assist Agile programs with developing their acquisition strategies, though the template and associated software guidance are in the early stages of development. In the meantime, however, Space C2 program officials confirmed that they are currently operating without specific software acquisition guidance. Space C2 officials also clarified that while official Agile software acquisition guidance has not yet been formally published, the program office has been actively engaged with the Office of the Under Secretary of Defense for Acquisition and Sustainment on refining draft policy and guidance. The program office noted that its program activities over the past year have been informed by and are consistent with this draft guidance. The Space C2 program has submitted preliminary planning documents to the Under Secretary of Defense for Acquisition and Sustainment for approval. While officials in the Under Secretary’s office expect these documents to be modified and expanded upon in late 2019, the Under Secretary gave the program approval to begin its development under an Agile process, signifying her support for using alternative approaches. In addition, Air Force officials told us that the Commander of Air Force Space Command has requested frequent briefings on the program’s development process, and while he does not have approval authority over the program, he is monitoring the program closely. Plans show that the program is conducting 90-day development iterations with the goal of providing working software at the end of each cycle. As of August 2019, the program had completed three program development iterations, and reported delivering capabilities which included: expanding the commercial data available in the data repository; tasking various sensors; and providing a tool for visualization and analytics. 
The Air Force noted that these capabilities were deployed in a relatively short time; however, most capabilities delivered so far are considered to be available for use “at your own risk,” since they have not yet been fully approved for use in operations. Though the Air Force has not yet published a time frame for certifying these capabilities for operational use, the new development approach is underway and delivering some early capabilities. DOD officials noted that the foundational elements of the Space C2 system, including the infrastructure and software platform, should be completed prior to significant application development; however, at this early stage of the program, the schedule indicating the time frame in which these elements will be completed appears to be still in development. DOD Is Establishing Agile Software Development Expertise For government programs, some level of insight and oversight is essential when using public funds to develop a system. According to DOD officials, DOD is embracing Agile development because software can be delivered quickly and can be more responsive to user needs. However, according to GAO’s upcoming guide for assessing Agile development programs, known as the Agile Assessment Guide, sound engineering principles are still beneficial when employing this approach. For example, continuous attention to technical excellence and good design requires the developers to consider security requirements throughout development. This is particularly true with complex programs that process sensitive data with complex security requirements. In past work, we have found that teams overlooking security requirements may end up developing systems that do not comply with current federal requirements (for example cybersecurity requirements for information technology programs), resulting in the software not becoming operational until these components are addressed. 
In addition, the Agile Assessment Guide notes that transitioning to Agile software development can be challenging because Agile methods require organizations to do more than implement new tools, practices, or processes. Agile requires a re-evaluation of existing organizational structures, planning practices, business and program governance, and business measures, in addition to technical practices and tools. However, Agile does not mean eliminating the need for documentation, planning, oversight, architecture, risk analysis, or baseline schedule, for example. Leading practices for Agile software development—as described in GAO’s upcoming Agile Assessment Guide—state that, among other things, programs should have the following characteristics: a product owner who manages the requirements prioritization, communicates operational concepts, and provides continual feedback to the development team; staff who are appropriately trained in Agile methods; management that has established an Agile-supportive environment; a program strategy that reflects the mission, architectural, safety-critical components, and dependencies; an organization’s acquisition policy and guidance that require the contract type and the acquisition strategy to be aligned to support Agile implementation; an architecture that is planned upfront to enable flexibility and to provide support to Agile methods; and mission goals that drive the prioritization of the most advantageous requirements (e.g., security and privacy) that are well understood and reviewed throughout development. Recognizing the need to change traditional processes to accommodate more iterative software development, both the Air Force and Under Secretary of Defense for Acquisition and Sustainment have created software advisor positions. The Air Force Chief Software Officer and the Special Assistant for Software Acquisition are working to improve and modernize the way DOD acquires software. 
In addition, DOD is looking into how to use industry practices to modernize the way it develops software. For example, the Office of the Secretary of Defense has a development, security, and operations (DevSecOps) pathfinder program for software, which helps programs define and develop a technical digital roadmap and leverages industry and Office of the Secretary of Defense expertise in developing appropriate infrastructure for software programs. The DevSecOps concept emphasizes rapid prototyping, security, and continuous integration and delivery of software products. In a May 2019 Acquisition Decision Memorandum, the Under Secretary of Defense for Acquisition and Sustainment directed the Space C2 program to become a pathfinder program. This is a positive step, because it should increase input into the program’s acquisition planning by the Office of the Secretary of Defense software development experts. The Office of the Secretary of Defense has other groups that draw on private-sector software development expertise to help DOD programs, including the Defense Digital Service and the Defense Innovation Board. These groups’ missions include improving DOD’s technology and innovation, and the groups can be valuable DOD resources for helping the Space C2 program develop its plans and Agile processes. The Defense Innovation Board conducted a review of some of the Space C2 program’s software acquisition plans in December 2018. According to Office of the Secretary of Defense officials we spoke with, this informal review was beneficial and resulted in real-time feedback on the approach the program was taking, as well as suggestions for areas to focus on. In the May 2019 memorandum, the Under Secretary of Defense for Acquisition and Sustainment noted that in October 2019 she will determine if an independent technical assessment of the Space C2 program is necessary. 
Considering the stated benefits of the prior Defense Innovation Board review of the Space C2 program, as well as the fact that using Agile processes for a DOD program is relatively new and includes many unknowns, independent reviews could help ensure the program is on a successful path. As the Office of the Secretary of Defense and the Air Force have made an effort to increase in-house Agile software development expertise, programs like the Space C2 program—especially in light of its early stage of development—could benefit from periodic attention from the experts at its disposal, including input from independent, external reviews to help ensure the necessary software development steps are taken to set programs up for success. DOD programs following traditional acquisition processes conduct internal reviews at major milestones, and GAO best practices for knowledge-based acquisitions also include conducting independent program reviews at these milestones. The draft GAO Agile Assessment Guide notes that while traditional DOD program milestone reviews are not used for Agile programs, Agile programs rely on other review methods such as stakeholder demonstrations and retrospective program reviews during each iteration of work. In addition, the GAO Schedule Assessment Guide, which identifies best practices for managing a program’s schedule, states that programs should conduct periodic reevaluations of risks, and that an independent perspective in these reevaluations is valuable. Such reviews offer greater objectivity, as the reviewers are not responsible for the activities being evaluated, and programs benefit from the wide variety of expertise and experience represented by the external review team. 
The Air Force’s Space C2 Program Faces Challenges in Multiple Areas and Plans Are Underway to Address Some, but Not All of Them The Space C2 program faces a number of management, technical, and workforce challenges. Some of these challenges may ultimately be overcome by time and experience, and the Air Force has efforts underway to mitigate others in the near-term. But it is too early to determine whether these efforts will be sufficient to achieve program success. Management Challenges The Space C2 program faces several management challenges. The Air Force has been working on developing various parts of the Space C2 program since 2016, but as previously noted, the program is working from a draft acquisition strategy and does not yet have an overall program architecture. These plans are important for providing direction for a program and facilitating effective oversight by establishing a business case for the effort. A business case establishes that the program is necessary and that it can be developed with the resources available, and typically includes: a requirements document, an acquisition strategy, sound cost estimates based on independent assessments, and a realistic assessment of risks, including those relating to technology and schedule. In addition, according to Air Force officials, the Space C2 Enterprise Manager has management responsibility—but not authority—over multiple development efforts included in the Space C2 enterprise. For example, technology maturation and risk reduction activities are divided across three program offices, managed by two program executive officers, and reliant upon multiple sources of information. This division of work is being done in part because the various organizations have areas of expertise that the program was hoping to leverage. However, such distribution of activities among many organizations can result in synchronization and coordination challenges. 
JMS’s development was hampered by similarly-split responsibilities for development contracts for various efforts. Because space is becoming an increasingly contested domain, DOD has noted that its ability to effectively respond to space threats has increased the importance of focused leadership in national security space, to include Space C2. See table 1 for additional details of management challenges facing the Space C2 program. According to officials from the Space C2 program and the Office of the Secretary of Defense, the Space C2 program was allowed to begin development work without an acquisition strategy, due to the program’s urgency. In May 2019, the Under Secretary of Defense for Acquisition and Sustainment tasked the Space C2 program office with revising its preliminary acquisition strategy to be consistent with DOD’s draft template for software acquisition. DOD’s draft template contains specific elements for ongoing planning and evaluation that are to be included in DOD software acquisition strategies moving forward, including acquisition and contracting approach; program management structure, including authorities and oversight plans for platform and infrastructure development; requirements management and development approach, and plans for prioritization; risk management plans, including how the program will identify and mitigate risks; metrics for measuring quality of software, and how those results will be shared with external stakeholders; manpower assessment identifying program workforce needs and state of expertise in Agile methods; requirements for reporting program progress to decision makers; and yearly funding levels. We have also noted these factors in our previous reports that identify the need to develop a sound, executable business case at the outset of a program, and the importance of using knowledge-based decision making in DOD acquisition programs. 
In addition, our work on best practices for knowledge-based acquisitions has emphasized that the success of any effort to develop a new product hinges on having the right knowledge at the right time, and that a better opportunity exists to meet program goals when the knowledge is available early. However, given that DOD’s draft template is still subject to change, including these elements in the finalized acquisition strategy would help position the program for success. Technical Challenges The Space C2 program also faces significant technical challenges, as described in table 2. For example, the program is planning to meet previously deferred requirements that proved too complex for prior programs to achieve. It also plans to address new and emerging threats to space assets, for which requirements are not yet defined. In addition, the program plans to use an Agile software development approach, the processes of which DOD has yet to show proficiency in applying, as discussed above. Integration of the multiple types of software planned for Space C2 is also likely to present considerable technical challenges. Further, cybersecurity is a growing concern for DOD space programs, including Space C2. Workforce Challenges In addition to the management and technical risks we identified, limited availability of staff with expertise in Agile software development poses a challenge to the Space C2 program and to DOD in general. The Space C2 program manager stated that the program is undertaking an effort that is fast-paced in nature and needs to be rapidly fielded, and she expressed confidence in her staff’s abilities to meet the development demands. However, various DOD officials told us that a lack of qualified software developers within DOD, and within the Space C2 program, is an issue. Agile software development methods are different from the traditional approaches used by DOD, and according to DOD officials, proficiency in Agile methods requires specific training. 
Software developers with this training are in high demand in the private sector, and according to DOD officials, sufficient numbers may not be immediately available for the Space C2 program. One industry best practice for software development states that to be successful, programs should ensure that each development team has immediate access to people with specialized skills including contracting, architecture, database administration, development, quality assurance, operations (if applicable), information security, risk analysis, and business systems analysis. As early as March 2009, DOD acknowledged that establishing a cadre of trained information technology professionals was a top priority, and that the lack thereof was a significant impediment to successful implementation of any future software development process. Furthermore, a 2018 Defense Science Board report highlights the lack of Agile software expertise in DOD, citing no modern software expertise in program offices or the broader acquisition workforce. Moreover, the report states that DOD defense prime contractors need to build their own internal competencies in modern software methodologies. Similarly, we found in March 2019 that DOD faces several challenges related to hiring, assigning, and retaining qualified personnel to work on space acquisition programs, similar to the challenges it faces more generally with the acquisition workforce. We also noted that DOD is taking steps to address these challenges where possible. In May 2019, DOD's Defense Innovation Board issued a congressionally mandated study on software acquisition and practices. The report stated that numerous past studies have recognized the deficiencies in software acquisition and practices within DOD. The report also noted the importance of digital talent and stated that DOD's current personnel processes and culture will not allow its military and civilian software capabilities to grow fast or deep enough to meet its mission needs. 
In addition, the report stated that new mechanisms are needed for attracting, educating, retaining, and promoting digital talent and for supporting the workforce to follow modern practices, including developing software in close coordination with users. Finally, the report emphasized that the military services and Office of the Secretary of Defense will need to create new paths for digital talent (especially internal DOD talent) by establishing software development as a high-visibility, high-priority career track and increasing the level of understanding of modern software within the acquisition workforce. This is the case for all DOD space programs, including Space C2. Conclusions DOD’s ability to command and control U.S. space assets, as well as anticipate and respond to the threats these assets face, is critical. However, over more than three decades, DOD’s efforts to improve its space command and control capabilities—commensurate with the space threats that have continued to grow in frequency and type—have been fraught with development problems. The Air Force has again undertaken a program to meet the nation’s ongoing and future consolidated command and control needs, while trying to overcome past problems with a modern software development process. The Space C2 program is making a concerted effort to learn from past software development mistakes while forging a new path for Agile development. Though DOD is taking steps to ensure that the Space C2 program has a comprehensive approach in place for managing, identifying, and mitigating challenges associated with this approach, key program plans and agency-wide guidance are still in draft form, leaving uncertainty as to how program development and oversight will ultimately proceed. Finalizing a robust acquisition strategy containing the key elements for ongoing planning and evaluation would position the program for success. 
Striking the right balance between trying new development methods and working within DOD’s knowledge-based framework will be essential for meeting cost, schedule, and performance goals. Periodic assessments of the program’s approach to developing software, done by independent software development experts, could not only help ensure the reviews are balanced, but would also help ensure the Space C2 program effectively addresses the challenges it faces and is situated for success. Such reviews would also help the Space C2 program to identify potential roadblocks, and ultimately, potential solutions. Effectively addressing the challenges facing the Space C2 program will help ensure that needed space command and control capabilities are no longer deferred, but actually delivered. Recommendations for Executive Action We are making two recommendations to the Department of Defense. The Under Secretary of Defense for Acquisition and Sustainment should ensure that the Air Force’s finalized Space C2 program’s acquisition strategy includes, at a minimum, the following elements: acquisition and contracting approach; program management structure, including authorities and oversight plans for platform and infrastructure development; requirements management and development approach, and plans for prioritization; risk management plans, including how the program will identify and mitigate risks; metrics for measuring quality of software, and how those results will be shared with external stakeholders; manpower assessment identifying program workforce needs and state of expertise in Agile methods; requirements for reporting program progress to decision makers; and yearly funding levels. 
(Recommendation 1) The Under Secretary of Defense for Acquisition and Sustainment should ensure that the Air Force’s Space C2 program conducts periodic independent reviews to assess the program’s approach to developing software and provide, as needed, advice to the program and recommendations for improving the program’s development and progress. Participants could include, but are not limited to, officials from the Defense Innovation Board, the Defense Digital Service, the office of the Air Force Chief Software Advisor, and the Under Secretary of Defense for Acquisition and Sustainment’s Special Assistant for Software Acquisition. (Recommendation 2) Agency Comments and Our Evaluation We provided a draft of this product to the Department of Defense for comment. In its comments, reproduced in appendix II, DOD concurred with our recommendations. DOD also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Secretary of the Air Force, and the Under Secretary of Defense for Acquisition and Sustainment. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or ChaplainC@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. Appendix I: Objectives, Scope, and Methodology The House Armed Services Committee report accompanying a bill for the National Defense Authorization Act for Fiscal Year 2018 contained a provision for us to review the Department of Defense’s (DOD) efforts to develop space command and control capabilities. 
This report (1) assesses the status of and plans for ongoing Air Force efforts to develop advanced command and control capabilities for space, and (2) identifies challenges the Air Force faces in developing these capabilities. To assess the status of and plans for ongoing Air Force efforts to develop advanced command and control capabilities for space, we analyzed Air Force Space Command and Control (C2) Program Increment Demonstration and Planning Retrospective reports for the first three increments and examined acquisition strategies for relevant programs, including acquisition strategies and addenda for Joint Space Operations Center (JSPOC) Mission System (JMS) Increments 1 and 2. We also examined the Air Force's draft acquisition strategy for Space C2 and DOD's draft acquisition strategy for Major Agile Software Programs; reviewed a Space C2 document mapping planned capabilities to the specific requirements that will be met by program deliveries; and analyzed status updates from the Space C2 program and the Combined Space Operations Center and program update briefings prepared for congressional staff by the JMS and Space C2 programs and the National Space Defense Center. In addition, we analyzed Space C2 program plans in conjunction with interim DOD guidance for Agile Software Acquisition, the Joint Chiefs of Staff Cyber Survivability Endorsement Implementation Guide, the Office of the Secretary of Defense guidance on cybersecurity operational test and evaluation procedures in acquisition programs, and DOD Enterprise Development, Security, and Operations (DevSecOps) processes; and examined the Principal DOD Space Advisor's Capabilities Based Assessment, which included issues relating to Space C2. We also reviewed Air Force Broad Agency Announcements and Requests for Information for Space Battle Management Command and Control and Space Situational Awareness capability development. 
In addition, we obtained information from 12 of the 16 companies with which the Air Force is working, to get their perspectives on the Air Force's approach to developing Space C2 capabilities. To identify challenges the Air Force faces as it develops advanced command and control capabilities for space, we analyzed the JMS Critical Change Certification; examined Joint Requirements Oversight Council memoranda pertaining to the JMS critical change management and certification; reviewed the Air Force's Space and Missile Systems Center evaluation of commercial capability gaps and capabilities; reviewed the JMS Program Manager briefing on lessons learned; and examined the DOD test and evaluation report on JMS Increment 2 (Service Pack 9). We also reviewed a selected chapter of GAO's draft Agile Assessment Guide (Version 13), which is intended to establish a consistent framework based on best practices that can be used across the federal government for developing, implementing, managing, and evaluating agencies' information technology investments that rely on Agile methods. To develop this guide, GAO worked closely with Agile experts in the public and private sector; some chapters of the guide are considered more mature because they have been reviewed by the expert panel. We reviewed this chapter to ensure that our expectations for how the Air Force should apply best practices for development of software capabilities for space command and control are appropriate for an Agile program and are consistent with the draft guidance that is under development, and we compared Space C2 program plans to the practices outlined in the guide. Additionally, since Agile development programs may use different terminology to describe their software development processes, the Agile terms used in this report are specific to the Space C2 program. 
We also compared Air Force development plans with interim and established DOD guidelines for software development, and GAO best practices for knowledge-based decision-making in weapons system development. We also reviewed prior GAO reports on the Cheyenne Mountain Upgrade, the Combatant Commanders’ Integrated Command and Control System, software acquisition, and cybersecurity. Additionally, we interviewed DOD officials from the Office of the Under Secretary of Defense for Acquisition and Sustainment; Joint Chiefs of Staff, Force Structure, Resources, and Assessment Directorate; U.S. Strategic Command; Air Force Combined Space Operations Center; Defense Advanced Research Projects Agency; Missile Defense Agency; Office of the former Principal DOD Space Advisor; Air Force Space Command; Air Force Research Laboratory; Defense Digital Service; Office of Cost Assessment and Program Evaluation; Air Force Rapid Capabilities Office; National Space Defense Center; and Air Force Space and Missile Systems Center. Finally, we interviewed officials from commercial companies that are known in the space community to have potential input into the development of space command and control capabilities to understand how the Space C2 program plans to integrate commercial capabilities into the program. We conducted this performance audit from January 2018 to October 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
Appendix II: Comments from the Department of Defense Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Rich Horiuchi, Assistant Director, Emily Bond, Claire Buck, Maricela Cherveny, Burns Eckert, Laura Hook, and Roxanna Sun made key contributions to this report. Assistance was also provided by Pamela Davidson, Kurt Gurka, Jennifer Leotta, Harold Podell, Marc Schwartz, James Tallon, Eric Winter, and Alyssa Weir.
Why GAO Did This Study Since the early 1980s, the Air Force has been working to modernize and consolidate its space command and control systems into a single comprehensive platform. The past three programs to attempt this have ended up significantly behind schedule and over budget. They also left key capabilities undelivered, meeting the easier requirements first and deferring more difficult work to subsequent programs. At the same time, the need for a consolidated space command and control capability has been growing. The House Armed Services Committee report accompanying a bill for the National Defense Authorization Act for Fiscal Year 2018 contained a provision for GAO to review DOD's newest efforts to develop space command and control capabilities. This report describes the status of these efforts and identifies challenges the Air Force faces in bringing them to fruition. To conduct this work, GAO analyzed acquisition and strategy documentation, management directives, and lessons learned; and compared Air Force development plans with leading industry practices for software development, DOD guidelines, and best practices included in a draft GAO guide for assessing Agile software development programs. What GAO Found Given emerging and evolving threats in the space domain, as well as significant development problems in similar prior efforts, the Air Force is prioritizing the Space Command and Control (C2) program. Early prototype work on the program's software began in 2016. As of mid-2019, the program had delivered some initial capabilities; however, the capabilities delivered so far are not approved for use in operations. Because the program is still early in development, it has not yet established a time frame for certifying these capabilities for operational use. Further, the foundational elements of the program, including the infrastructure and software platform, are still being conceptualized. 
All Space C2 program capabilities will be significantly more automated than past development efforts and are being designed to allow operators to identify and monitor threats to U.S. space assets, identify courses of action to mitigate or eliminate those threats, communicate these actions to decision makers, and direct actions in response. To develop Space C2's technologically complex software, the Air Force is following a modernized, iterative process called Agile development—a relatively new approach for Department of Defense (DOD) programs (see figure). The Space C2 program is facing a number of challenges and unknowns, from management issues to technical complexity. Additionally, DOD officials have not yet determined what level of detail is appropriate for acquisition planning documentation for Agile software programs. They are also not certain about the best way to provide oversight of these programs but are considering using assessments by external experts. These knowledge gaps run counter to DOD and industry best practices for acquisition and put the program at risk of not meeting mission objectives. Additionally, software integration and cybersecurity challenges exist, further complicating program development. The Air Force has efforts underway to mitigate some of these challenges in the near term, but until the program develops a comprehensive acquisition strategy to more formally plan the program, it is too early to determine whether these efforts will help to ensure long-term program success. What GAO Recommends GAO is making two recommendations, including that DOD should ensure the Air Force develops a comprehensive acquisition strategy for the Space C2 program. DOD concurred with the recommendations.
Background The NASA Authorization Act of 2010 directed NASA to develop a Space Launch System (SLS), to continue development of a crew vehicle, and to prepare infrastructure at Kennedy Space Center to enable processing and launch of the launch system. To fulfill this direction, NASA formally established the SLS launch vehicle program in 2011. Then, in 2012, NASA aligned the requirements for the Orion program with those of the newly created SLS vehicle and the associated ground systems programs. The Exploration Systems Development (ESD) organization reports to NASA's Associate Administrator for Human Exploration and Operations Mission Directorate and is responsible for managing and integrating the human space exploration programs. Figure 1 provides details about each SLS hardware element and its source and identifies the major portions of the Orion spacecraft. NASA established the Exploration Ground Systems (EGS) program to modernize the Kennedy Space Center to prepare for integrating hardware, as well as processing and launching SLS and Orion, and recovery of the Orion crew capsule. The EGS program consists of a number of components and processing centers including the Vehicle Assembly Building, Mobile Launcher, and Crawler-Transporter. The Mobile Launcher consists of (1) a two-story base that is the platform for the rocket and (2) a tower equipped with a number of connection lines, called umbilicals, and launch accessories that will provide SLS and Orion with power, communications, coolant, fuel, and stabilization prior to launch. During preparations for launch, the Crawler-Transporter will pick up and move the Mobile Launcher into the Vehicle Assembly Building. Inside the Vehicle Assembly Building, NASA will stack the SLS and Orion vehicle on the Mobile Launcher and complete integration for launch. Before launch, the Crawler-Transporter will carry the Mobile Launcher with SLS and Orion to the launch pad where engineers will lower the Mobile Launcher onto the pad and remove the Crawler-Transporter. 
During launch, each umbilical and launch accessory will release from its connection point, allowing the rocket and spacecraft to lift off from the launch pad. Figure 2 is a picture of the Mobile Launcher positioned on top of the Crawler-Transporter outside of the Vehicle Assembly Building. During Exploration Mission 1 (EM-1), the SLS vehicle is to launch an uncrewed Orion to a distant orbit some 70,000 kilometers beyond the Moon. All three programs—SLS, Orion, and EGS—must be ready on or before the EM-1 launch readiness date to support this integrated test flight. Exploration Mission 2 (EM-2) will be a 10- to 14-day crewed flight with up to four astronauts that will orbit the moon and return to Earth to demonstrate the baseline Orion vehicle capability. History of Program Cost and Schedule Changes NASA establishes an agency baseline commitment—the cost and schedule baselines against which the program may be measured—for all projects that have a total life cycle cost of $250 million or more. A rebaseline is a process initiated if the NASA Administrator determines the development cost growth is more than 30 percent of the estimate provided in the baseline of the report, or if other events make a rebaseline appropriate. A replan is a process generally driven by changes in program or project cost parameters, such as if development cost growth is 15 percent or more of the estimate in the baseline report or a major milestone is delayed by 6 months or more from the baseline date. A replan does not require a new project baseline to be established. When the NASA Administrator determines that development cost growth is likely to exceed the development cost estimate by 15 percent or more, or a program milestone is likely to be delayed from the baseline’s date by 6 months or more, NASA must submit a report to the Committee on Science, Space, and Technology of the House of Representatives and the Committee on Commerce, Science, and Transportation of the Senate. 
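The replan and rebaseline thresholds described above amount to a simple decision rule. The following is a minimal sketch of that rule, assuming only the thresholds stated in the narrative; the function name, labels, and inputs are hypothetical illustrations, not NASA policy text.

```python
# Illustrative sketch of the oversight triggers described in the narrative:
# a report to Congress is required when development cost growth is likely to
# reach 15 percent or a major milestone slips 6 months or more, and growth
# above 30 percent requires congressional reauthorization and a rebaseline.

def oversight_actions(dev_cost_growth_pct, milestone_delay_months):
    """Return the oversight consequences triggered by cost growth or delay.

    dev_cost_growth_pct: development cost growth over the baseline, in percent.
    milestone_delay_months: milestone delay from the baseline date, in months.
    """
    actions = []
    if dev_cost_growth_pct >= 15 or milestone_delay_months >= 6:
        actions.append("report to congressional committees")  # replan-level trigger
    if dev_cost_growth_pct > 30:
        actions.append("congressional reauthorization and rebaseline")
    return actions

# The EGS replan: roughly 23 percent development cost growth and a mission
# delayed by up to 19 months, per the narrative.
print(oversight_actions(23, 19))  # ['report to congressional committees']
# A program within both thresholds triggers no reporting action.
print(oversight_actions(5, 2))    # []
```

Note that the two thresholds are independent: a large enough overrun triggers both the congressional report and the rebaseline requirement.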
Should a program exceed its development cost baseline by more than 30 percent, the program must be reauthorized by the Congress and rebaselined in order for the contractor to continue work beyond a specified time frame. NASA tied the SLS and EGS program cost and schedule baselines to the uncrewed EM-1 mission and the Orion program’s cost and schedule baselines to EM-2. Over the past 5 years, we have issued several reports assessing the progress of NASA’s human space exploration programs relative to their agency baseline commitments. In April 2017, we found that given the combined effects of ongoing technical challenges in conjunction with limited cost and schedule reserves, it was unlikely that these programs would achieve the committed November 2018 launch readiness date. We recommended that NASA confirm whether this launch readiness date was achievable and, if warranted, propose a new, more realistic EM-1 date and report to Congress on the results of its schedule analysis. NASA agreed with both recommendations and stated that it was no longer in its best interest to pursue the November 2018 launch readiness date. Subsequently, NASA approved a new EM-1 schedule of December 2019, with 6 months of schedule reserve available to extend the date to June 2020, and revised costs (see table 1). Because NASA delayed the EM-1 schedule by up to 19 months, the SLS and EGS programs—that are both baselined to EM-1—reported a replan to the Congress. The EGS program also reported its development costs increased by about 23 percent over the baseline. At the same time, NASA reported that the SLS program development costs would only increase by about 2 percent. Contracts Under the Federal Acquisition Regulation (FAR), a variety of contract types are available including those that incentivize a contractor in areas that may include performance, cost, or delivery. 
The type of contract used for any given acquisition inherently determines how risk is allocated between the government and the contractor. According to the FAR, since the contract type and the contract price are interrelated, the government must consider them together. The government can choose a contract type and negotiate price (or estimated cost and fee) that will result in reasonable contractor risk and provide the contractor with the greatest incentive for efficient and economical performance. For example, under firm-fixed-price contracts, the contractor assumes full responsibility for performance costs. Under cost-reimbursement contracts, the government provides for the payment of allowable incurred costs, to the extent prescribed in the contract. The government uses cost-reimbursement contracts when, for example, there are uncertainties involved in contract performance. Incentive contracts can be either fixed-price or cost-reimbursement type contracts. The contractor’s responsibility for the performance costs and the profit or fee incentives in incentive contracts are tailored to the uncertainties involved in contract performance. Incentive contracts— including award fee and predetermined, formula-type incentive fee contracts—are designed to attain specific acquisition objectives by, in part, including appropriate incentive arrangements that (1) motivate contractor efforts that might not otherwise be emphasized, and (2) discourage contractor inefficiency and waste. Award fees generally emphasize multiple aspects of contractor performance in areas that the government assesses more subjectively. In contrast, predetermined formula-type incentives are generally associated with a cost incentive, but can also emphasize performance in areas that the government assesses more objectively. 
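A predetermined, formula-type cost incentive of the kind described above is typically computed as a share-ratio adjustment to a negotiated target fee, bounded by a minimum and maximum fee. The sketch below illustrates that mechanism; the dollar values and the 80/20 share ratio are hypothetical, not terms of the Boeing or Lockheed Martin contracts.

```python
# Minimal sketch of a formula-type cost incentive: the fee paid depends on the
# relationship of total allowable (actual) costs to total target costs, with
# the contractor keeping a negotiated share of any underrun (and absorbing the
# same share of any overrun), clamped between a fee floor and ceiling.

def cost_incentive_fee(actual_cost, target_cost, target_fee,
                       min_fee, max_fee, contractor_share=0.20):
    """Adjust the target fee for cost underruns or overruns, within [min_fee, max_fee]."""
    adjustment = (target_cost - actual_cost) * contractor_share
    return max(min_fee, min(max_fee, target_fee + adjustment))

# Underrun: the contractor keeps 20 cents of every dollar saved below target.
print(cost_incentive_fee(actual_cost=90, target_cost=100, target_fee=8,
                         min_fee=2, max_fee=12))  # 10.0
# Overrun large enough that the adjusted fee hits the floor.
print(cost_incentive_fee(actual_cost=140, target_cost=100, target_fee=8,
                         min_fee=2, max_fee=12))  # 2
```

This is what distinguishes a predetermined, formula-type incentive from an award fee: once the share ratio and fee bounds are negotiated, the fee follows mechanically from costs, with no subjective evaluation.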
The FAR indicates that award fee contracts are suitable when it is neither feasible nor effective to devise predetermined objective incentive targets, the likelihood of meeting acquisition objectives will be enhanced by using a contract that provides the government with the flexibility to evaluate both actual performance and the conditions under which it was achieved, and the administrative effort and cost are justified. Table 2 provides an overview of cost-plus-incentive-fee and cost-plus-award-fee contracts because these are the types used in the Orion and SLS programs. Multiple-incentive contracts contain more than one incentive. For example, these contracts may include both subjective award fee criteria and predetermined, formula-type incentives. Agencies can use incentive contracts to promote certain acquisition outcomes, such as keeping costs low, delivering a product on time, and achieving technical performance of the product. NASA awarded incentive contracts to both Boeing and Lockheed Martin—a cost-plus-incentive-fee/award-fee contract to Boeing for the SLS stages effort and a cost-plus-award-fee contract to Lockheed Martin for the Orion crew spacecraft effort. For the SLS stages incentive contract with Boeing, the contract includes both incentive and award fees, broken into these three components: Milestone-incentive fees. These fees are paid for successful completion of each program milestone event. Cost-incentive fees. These fees are initially negotiated and later adjusted by a formula and are paid based on the relationship of total allowable costs to total target costs. Award fees. These fees are determined through subjective evaluations relative to factors in the contract's award fee plan. For the Orion crew spacecraft incentive contract with Lockheed Martin, the contract includes fees broken into three components. The government typically uses award fees when it is not feasible or effective to use predetermined objective criteria. 
Therefore, as noted above, award fees are typically determined against subjective criteria. However, this contract includes award fees with both subjective and objective criteria:

- Milestone award fees. These fees are paid for completing critical criteria and dates associated with each milestone.
- Performance incentive fees. These fees are paid for completing criteria and dates associated with each performance incentive.
- Period of performance award fees. These fees are determined through subjective evaluations relative to factors in the contract’s award fee plan.

For purposes of discussion within this report, we group each of the fees for each contract into one of four categories—milestone fee, performance incentive fee, cost incentive fee, and award fee.

When award fees that require a subjective assessment by the government are used, NASA generally defines award fee periods of at least 6 months for the duration of the contract and establishes performance evaluation boards to assess the contractor’s performance relative to the performance evaluation plan. For the contracts we reviewed, NASA evaluates contractor performance based on weighted evaluation factors to determine the award fee. Table 3 includes a description of the evaluation factors and the weighted percentages for each factor assigned to the SLS stages and Orion crew vehicle contracts.

When developing a contractor’s evaluation for a period of performance, the members of the performance evaluation boards for each contract use descriptive ratings in their evaluations. Performance monitors for different areas within the programs compile a list of the contractor’s strengths and weaknesses relative to specific criteria and defined activities for each of the evaluation factors. The performance monitors then consider other factors, such as government-directed changes and obstacles that arose that may have affected the contractor’s performance, and prepare performance reports.
Members of the performance evaluation boards consider the performance monitors’ reports and assign the scores and descriptive ratings for the specific evaluation period. Table 4 below outlines award fee adjectival ratings, award fee pool available to be earned, and descriptions of the award fee adjectival ratings from the Federal Acquisition Regulation.

Continued Underperformance Has Led to Additional Schedule Delays and Cost Growth

In November 2018—within 1 year of announcing a delay for the first mission—senior NASA officials acknowledged that the revised EM-1 launch date of December 2019 is unachievable and the June 2020 launch date (which takes into account schedule reserves) is unlikely. These officials estimate that there are 6 to 12 months of schedule risk associated with this later date, which means the first launch may occur as late as June 2021 if all risks are realized. This would be a 31-month delay from the schedule originally established in the programs’ baselines. Officials attribute the additional schedule delay to continued production challenges with the SLS core stage and the Orion crew and service modules. NASA officials also stated that the 6 to 12 months of risk to the launch date accounts for the possibility that SLS and Orion testing and final cross-program integration and testing at Kennedy Space Center may result in further delays. These 6 to 12 months of schedule risk do not include the effects, if any, of the federal government shutdown that occurred in December 2018 and January 2019.

In addition, NASA’s reporting of cost data for the SLS and Orion programs is not fully transparent. NASA’s estimates for the SLS program indicate 14.7 percent cost growth as of fourth quarter fiscal year 2018, but our analysis shows that number increases to 29.0 percent when accounting for costs that NASA shifted to future missions. Further, in summer 2018, NASA reported a 5.6 percent cost growth for the Orion program.
However, this reported cost growth is associated with a program target launch date that is 7 months earlier than its agency baseline commitment launch date. If the Orion program executes to the launch date established in its agency baseline commitment, costs will increase further.

SLS: First Mission Will Incur Additional Delay as Challenges with Core Stage Production Continue, and Cost Growth Underreported

SLS Will Not Meet June 2020 Replan Schedule

The SLS program will not meet the June 2020 launch date for the first mission due, in part, to ongoing development issues with the core stage. For this mission, the SLS launch vehicle includes solid rocket boosters, an upper stage, and a core stage—which includes four main engines and the software necessary to command and control the vehicle. As of fall 2018, the program reported that the boosters, engines, and upper stage all had schedule reserves—time allocated to specific activities to address delays or unforeseen risks—to support a June 2020 launch. The core stage, however, did not have schedule reserves remaining as the program continues to work through development issues.

According to the SLS program schedule, core stage development culminates with “green run” testing. For this test, NASA will fuel the completed core stage with liquid hydrogen and liquid oxygen and fire the integrated four main engines for about 500 seconds. The green run test carries risks because several things beyond the initial fueling are being done for the first time. For example, it is also the first time NASA will fire the four main engines together, test the integrated engine and core stage auxiliary power units in flight-like conditions, and use the SLS software in an integrated flight vehicle. In addition, NASA will conduct the test on the EM-1 flight vehicle hardware, which means the program would have to repair any damage from the test before flight.
The program has no schedule margin between the end of core stage production and the start of the green run test, and is tracking risks that may delay the test schedule. For example, as the NASA Office of Inspector General (OIG) found in its October 2018 report, the Stage Controller—the core stage’s command and control hardware and software needed to conduct the green run test—is 18 months behind schedule and may slip further. Any additional delays with the development of the core stage and stage controller will further delay the start of the green run test. In addition, the SLS program has no schedule margin between the green run test and delivery of the core stage to Kennedy Space Center for integration to address any issues that may arise during testing.

In November 2018, senior NASA officials stated that they have accounted for the potential of continued core stage development delays—along with risks to the Orion and EGS programs—and stated that there are an additional 6 to 12 months of risk to the EM-1 launch date. We found that a delay of this length would push the launch date for EM-1 out as far as June 2021 should all of the risks be realized. This would represent a 31-month delay from the original schedule baseline. Further, these 6 to 12 months of schedule risk do not include the effects, if any, of the federal government shutdown that occurred in December 2018 and January 2019. Figure 3 below compares schedules of key events for the core stage shortly after NASA established the program baseline in August 2014, the December 2017 replan, and the program’s schedule as of November 2018.

Officials from the SLS program and Boeing, the contractor responsible for building the core stage, indicated that an issue driving core stage delays was underestimation of the complexity of manufacturing and assembling the core stage engine section—where the four RS-25 engines are mated to the core stage—and those activities have taken far longer than expected.
For example, around the time of the December 2017 replan, the SLS program schedule indicated that it would take 4 months to complete the remaining work. By late 2018, the estimate for the same work had increased to 11 months. Part of that delay included time required to resolve residue and debris discovered in the fuel lines, which was present because Boeing had not verified the processes that its vendors were using to clean the fuel lines. Further, installation of the fuel lines overlapped with other work in the engine section, making work in the limited space more difficult and complex than it otherwise would have been.

NASA officials indicated that there have been additional issues behind core stage delays, including the following:

- Boeing underestimated the staffing levels required to build the core stage in the time available. According to a NASA official, as core stage production began, Boeing was focused on minimizing the number of technicians, in part to keep costs low, and hired about 100 technicians. The official stated that Boeing now has about 250 technicians on staff in order to address ongoing delays; however, because a number of the additional staff came from non-spaceflight projects, some time was lost getting those staff up to speed on SLS. In addition, the official noted that technicians were spending time performing work away from the vehicle, such as collecting tools and parts for the work they were completing. According to the official, Boeing has since hired additional support staff to perform off-vehicle tasks such as pre-packaging tools in order to allow technicians to spend their time working on the vehicle.
- The build plans for the core stage were not adequately mature when the contractor began work on the hardware itself, which led to additional delays.
For example, according to NASA officials, they expected the work instructions—detailed directions on how the vehicle should be built—to be largely complete by the program’s critical design review, which precedes the production decision. In this case, however, the build plans were not complete by the start of production. Officials stated that the lack of build plans slowed progress, as technicians can only perform work that they have instructions to carry out. In addition, the time to perform some work activities needed to build the designed vehicle was not included in the schedule. For example, more than 900 engine section brackets that were in the design were not on the schedule and, according to NASA officials, Boeing had to install the brackets later, adding complexity to the work schedule.

Boeing officials provided three additional perspectives regarding the delays. First, Boeing officials explained that they did not anticipate any changes from NASA for the loads—impacts and stresses of mass, pressure, temperature, and vibration that the vehicle will experience—following the program’s critical design review, but instead NASA provided three significant updates to those loads. In some cases, the changes were significant enough that they invalidated legacy systems Boeing had planned to use, which required rework. However, SLS program officials stated that they continued to update loads data as the environments anticipated during launch became clearer.

Second, Boeing officials stated that they alerted NASA in September 2014 that a decision to decrease funding in fiscal year 2015 would require the contractor to delay the core stage delivery date. In October 2018, however, the NASA OIG reported that while Boeing anticipated receiving $150 million less than planned in fiscal year 2015, the company received only $53 million less; that a funding increase was received in fiscal year 2016; and that the value of Boeing’s contract increased by nearly $1 billion in May 2016.
Finally, Boeing officials stated that it has been challenging to execute NASA’s development approach, which called for the first set of hardware built to be used for the initial launch. Boeing officials stated that they are more used to an approach in which they use the first hardware built to qualify the design, and that hardware is never flown. The challenge with the current approach, according to Boeing officials, is that all the learning associated with a first build is occurring on the flight unit, which requires extra scrutiny and slows down the process. SLS program officials stated that this approach has been part of the development plan since the initial contract with Boeing was signed.

One area in which the program has benefited from the core stage delay is that development of SLS test and flight software, which has been a schedule concern for the program, now has additional time to be completed. Delays to date have been due to late hardware model deliveries and requirements changes, according to program officials. The SLS program completed the qualification test—a verification that the software meets documented requirements—for the green run software in March 2018. Program officials stated that the verified test software release will be complete by April 2019, and the EM-1 flight software release will be complete by October 2019. The earlier they are able to complete the software before launch, the more time they will have to complete testing, fix any defects they find, and work with EGS to integrate with the ground software. Measuring to a June 2020 launch date, flight software development has about 6 months of additional time to address issues should they arise. However, the program has a number of test cycles remaining, and the program continues to assess a risk regarding the potential impact that late requirements changes could have on software completion.
SLS Program Has Shifted Some Costs to Future Missions, Resulting in an Underreporting of Cost Growth for EM-1

The SLS program has been underreporting its development cost growth since the December 2017 replan because of a decision to shift some costs to future missions while not adjusting the baseline downward to reflect this shift. The SLS development cost baseline established in August 2014 for EM-1 includes cost estimates for the main vehicle elements—stages, liquid engines, boosters—and other areas. According to program officials, as a result of the December 2017 replan process, NASA decided that costs included as part of the SLS EM-1 baseline cost estimate would be more appropriately accounted for as costs for future flights. Thus, NASA decided not to include those costs, approximately $782 million, as part of the revised SLS EM-1 cost estimate. However, NASA did not lower the $7 billion SLS development cost baseline to account for this significant change in assumptions and shifting of costs to future flights, and NASA officials told us that they were not sure what the benefit to NASA would be in adjusting the baseline. This decision presents challenges in accurately reporting SLS cost growth over time.

NASA’s decision not to adjust the cost baseline downward to reflect the reduced mission scope obscures cost growth for EM-1. NASA’s cost estimate as of fourth quarter fiscal year 2018 for the SLS program indicated development cost growth had increased by $1 billion, or 14.7 percent. However, our analysis shows that development cost growth actually increased by $1.8 billion, or 29.0 percent, when the development baseline is lowered to account for the reduced mission scope. Essentially, NASA is holding the baseline costs steady, while reducing the scope of work included in current cost estimates (see figure 4).
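The arithmetic behind this comparison can be sketched in a few lines. The sketch below is illustrative and uses the rounded dollar figures quoted above (a $7 billion baseline, roughly $782 million in shifted costs, and roughly $1 billion in reported growth); because the report’s underlying figures are more precise than these rounded inputs, the computed percentages land slightly below the 14.7 and 29.0 percent reported.

```python
# Illustrative sketch of the SLS cost-growth comparison, using rounded
# figures from this report (all dollar amounts in billions).
baseline = 7.0          # SLS EM-1 development cost baseline (approximate)
shifted = 0.782         # costs NASA shifted to future missions
reported_growth = 1.0   # growth NASA reported against the unchanged baseline

# NASA's approach: measure growth against the original, unadjusted baseline.
reported_pct = reported_growth / baseline * 100

# GAO's approach: lower the baseline by the shifted costs, so the shifted
# amount counts as growth against the reduced mission scope.
adjusted_baseline = baseline - shifted
adjusted_growth = reported_growth + shifted
adjusted_pct = adjusted_growth / adjusted_baseline * 100

print(f"Reported growth: {reported_pct:.1f}%")  # ~14.3% with rounded inputs
print(f"Adjusted growth: {adjusted_pct:.1f}%")  # ~28.7% with rounded inputs
```

The calculation shows why holding the baseline steady while removing scope understates growth: the same dollar estimate is compared against a denominator that no longer reflects the work it covers.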
NASA’s current approach for reporting cost growth misrepresents the cost performance of the program and thus undermines the usefulness of a baseline as an oversight tool. NASA’s space flight program and project management requirements state that the agency baseline commitment for a program is the basis for the agency’s commitment to the Office of Management and Budget (OMB) and the Congress based on program requirements, cost, schedule, technical content, and an agreed-to joint cost and schedule confidence level. Removing effort that amounts to more than a tenth of a program’s development cost baseline is a change in the commitment to OMB and the Congress and results in a baseline that does not reflect actual effort. Further, the baseline is a key tool against which to measure the cost and schedule performance of a program. A program must be rebaselined and reauthorized by the Congress if the Administrator determines that development costs will increase by more than 30 percent. Accounting for shifted costs, our analysis indicates that NASA has reached 29.0 percent development cost growth for the SLS program.

In addition, as we previously reported in May 2014, NASA does not have a cost and schedule baseline for SLS beyond the first flight. As a result, NASA cannot monitor or track costs shifted beyond EM-1 against a baseline. We recommended that NASA establish cost and schedule baselines that address the life cycle of each SLS increment, as well as for any evolved Orion or ground systems capability. NASA partially concurred with the recommendation, but has not taken any action to date. By not adjusting the SLS baseline to account for the reduced scope, NASA will continue to report costs against an inflated baseline, hence underreporting the extent of cost growth.
NASA’s Associate Administrator and Chief Financial Officer stated that they understood our rationale for removing these costs from the EM-1 baseline and agreed that not doing so could result in underreporting of cost growth. Further, the Associate Administrator told us that the agency will be relooking at the SLS program’s schedule, baseline, and calculation of cost growth.

Orion: Challenges Contribute to Additional Delay for First Mission and Program Cost Estimate Not Complete

Orion Is Not on Schedule to Meet June 2020 Replan Schedule for First Mission

The Orion program is not on schedule to meet the June 2020 launch date for the first mission due to delays with the European Service Module and ongoing component issues with the avionics systems for the crew module, including issues discovered during testing.

European Service Module (ESM). Through a barter agreement, the European Space Agency developed and produced the ESM, which provides propulsion, air, water, and power to the crew module while in space. The European Space Agency delivered the ESM to NASA in November 2018, following several delays with its development. According to program officials, the most recent set of delays prior to delivery was due to issues and failures during ESM propulsion system testing as well as the need to redesign power system components.

Orion and EGS officials explained that a total of 20 months is required from receipt of the ESM to prepare it for launch. This time frame includes 14 months for the Orion program to finalize testing of each module and complete program-level integration and testing, and 6 months for the EGS program to complete integrated test and checkout with SLS and EGS. As a result, the earliest the Orion program could be ready to support a first mission based on the service module schedule alone is July 2020, 20 months after NASA accepted delivery in November 2018.
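The 20-month timeline described by Orion and EGS officials can be checked with simple date arithmetic. This is an illustrative sketch; the first day of each month is used as a placeholder since only month-level dates appear in the report, and the `add_months` helper is an assumption introduced for the example.

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Advance a date by whole months (day clamped to the 1st; only
    month-level precision matters for this schedule check)."""
    total = d.year * 12 + (d.month - 1) + months
    return date(total // 12, total % 12 + 1, 1)

esm_delivery = date(2018, 11, 1)            # ESM delivered November 2018
orion_ready = add_months(esm_delivery, 14)  # Orion integration and testing
launch_ready = add_months(orion_ready, 6)   # EGS integrated test and checkout

print(launch_ready.strftime("%B %Y"))  # July 2020, matching the text
```

The 14-month and 6-month phases stack to the July 2020 earliest-readiness date stated above.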
ESD officials told us that the 6 to 12 months of risk that could push EM-1 to June 2021 includes ESM-related delays. These 6 to 12 months of schedule risk do not include the effects, if any, of the federal government shutdown that occurred in December 2018 and January 2019. Figure 5 compares schedules of key events for the Orion program, including delays with the ESM, from shortly after NASA established the program’s baseline in September 2015, the December 2017 replan, and as of November 2018.

Crew Module. While the ESM remains the critical path—the path of longest duration through the sequence of activities that determines the earliest completion date—for the Orion program, the crew module is close to becoming the critical path due in part to component failures within the avionics systems during testing. Figure 6 is a picture of a crew module test article. In May 2018, we reported that the Orion program was addressing component issues in its avionics systems after they failed during vibration testing. For example, components throughout the crew and service module relied on computer cards used to regulate power. When those cards cracked during testing, the program needed to redesign the cards, retest them, and reinstall them for system tests. Since then, additional avionics failures have surfaced. In one instance, one of the vehicle’s global positioning system receivers failed to power up. In another, a part failed on one of the inertial measurement units, which provide navigation information such as vehicle rotation and acceleration. In March 2019, program officials told us that they have addressed these issues in the avionics systems and all flight hardware is installed.

Testing. The ability of Orion, SLS, and EGS to complete testing in the integrated test laboratory facility—where software and hardware or hardware simulators are tested together—remains an ongoing risk for both the first mission and the timing of the second mission.
The lab has limited time and test resources to complete the testing necessary for EM-1, and NASA officials indicated that at times it has more demand than it can support. In addition, some testing is taking longer than planned, delaying later tests. The risk associated with these delays is that the later the program discovers an issue, the less time there is to address the issue prior to launch. At the same time that the Orion program is completing EM-1 work in the integrated test lab, the program will also need to modify the lab’s configuration in order to support EM-2 efforts because of hardware and software differences between missions. The schedule currently includes periods of time during EM-1 testing when EM-1 efforts will be shut down in order to work on lab modifications for EM-2. Although program officials indicated that test lab delays for EM-1 will not adversely affect lab efforts for EM-2, resources directed to EM-2 will mean fewer resources will be available during those times to support EM-1.

Cost Estimate Is Incomplete

The Orion program has reported development cost growth but is not measuring that growth using a complete cost estimate. In summer 2018, the Orion program reported development cost growth of $379 million, or 5.6 percent above its $6.768 billion development cost estimate. The program explained that the major drivers of this cost growth were the slip of the EM-1 launch date, which reflected delays in the delivery of the service module; Orion contractor underperformance; and a NASA-directed scope increase. However, during our review, Orion program officials stated that this cost estimate assumes an EM-2 launch date of September 2022, which is 7 months earlier than the program’s agency baseline commitment date of April 2023 that forms the basis for commitments between NASA, the Congress, and OMB.
As a result, NASA’s current cost estimate for the Orion program is not complete because it does not account for costs that NASA would incur between September 2022 and April 2023. Subsequently, program officials told us that the program’s cost projections fund one of those 7 months. See figure 7.

NASA officials originally told us that they do not have an Orion cost estimate through the EM-2 agency baseline commitment launch date of April 2023 because they plan to launch by September 2022, if not earlier. According to scheduling best practices, performance is measured against the program’s baseline even if a program is working to an earlier date. By not estimating costs through its baseline launch date, the Orion program is limiting the NASA Associate Administrator’s insight into how the program is performing against the baseline.

According to federal law, the Administrator must be immediately notified any time that a designated official has reasonable cause to believe that either the program’s development cost is likely to exceed the estimate in the agency baseline commitment by 15 percent or more, or a program milestone will slip 6 months or more beyond its agency baseline commitment date. If the Administrator confirms the cost growth or schedule delay exceeds the given threshold, the Administrator must submit a report to the Committee on Science and Technology of the House of Representatives and the Committee on Commerce, Science, and Transportation of the Senate. Given that the program is already reporting cost growth to a date earlier than its baseline schedule, updating the cost estimate relative to the EM-2 baseline schedule would provide NASA management and Congress with more complete cost data and increased awareness of whether additional oversight is merited.
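To make the reporting thresholds concrete, Orion’s reported figures can be compared against them directly. This is an illustrative sketch using the numbers quoted above; the 15 percent notification threshold and the 30 percent rebaseline threshold are the statutory triggers described in this report, and the variable names are assumptions for the example.

```python
# Orion's reported development cost figures, in billions of dollars.
baseline_estimate = 6.768   # development cost estimate (baseline)
reported_growth = 0.379     # reported cost growth, summer 2018

growth_pct = reported_growth / baseline_estimate * 100
print(f"Reported growth: {growth_pct:.1f}%")  # 5.6%, as reported

NOTIFY_THRESHOLD_PCT = 15.0      # triggers notification of the Administrator
REBASELINE_THRESHOLD_PCT = 30.0  # triggers rebaseline and reauthorization

# At the reported level, neither statutory trigger is reached -- but the
# estimate excludes costs through the April 2023 baseline launch date, so
# the true growth against the baseline would be higher.
print(growth_pct >= NOTIFY_THRESHOLD_PCT)      # False
print(growth_pct >= REBASELINE_THRESHOLD_PCT)  # False
```

The comparison illustrates the report’s point: an incomplete estimate can keep reported growth comfortably below the statutory triggers even when costs through the baseline date would tell a different story.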
EGS: Delays and Development Challenges Have Eroded the Schedule, but Program Remains within Replanned Schedule and Costs

Since the December 2017 replan, the EGS program has had to address several technical challenges that consumed schedule reserves. Nevertheless, officials expect to have EGS facilities and software ready by June 2020, the planned launch date. The program has completed many of its projects, including the renovation of the Vehicle Assembly Building and the launch pad. Since the replan, however, the program has had to address technical challenges with the Mobile Launcher. Figure 8 below compares the EGS schedule—including time frames for the Mobile Launcher and software completion—shortly after NASA established the program’s baseline in September 2014, the December 2017 replan, and as of November 2018. It also shows the potential launch window reflecting the 6 to 12 months of risk NASA is tracking that could push EM-1 to June 2021.

Mobile Launcher. The Mobile Launcher schedule has deteriorated since the December 2017 replan due to problems with finalizing construction work prior to moving the launcher to the Vehicle Assembly Building. Moving the Mobile Launcher into the Vehicle Assembly Building was intended to allow the program to begin multi-element verification and validation, a process that checks that the various launch and processing systems at Kennedy Space Center meet requirements and specifications and can operate together to fulfill their intended purpose. Challenges the program experienced with the Mobile Launcher included having to add structural supports after determining that the design was not adequate to carry the load of the SLS vehicle and fuel. In addition, program officials stated that construction work overall did not progress to the point desired to move the Mobile Launcher to the Vehicle Assembly Building.
As a result, the program did not move the Mobile Launcher into the Vehicle Assembly Building until September 2018, 5 months later than in the schedule established after the December 2017 replan. Moving forward, the program has to complete the multi-element verification and validation process for the Mobile Launcher and Vehicle Assembly Building.

We have reported on a number of issues related to the EGS program’s management of the Mobile Launcher, as well as the now-completed Vehicle Assembly Building project. For example, in 2016, we found that the program did not mature requirements and designs for the Mobile Launcher before beginning construction. In addition, the EGS program completed all major structural changes to the Mobile Launcher prior to completing the design and installation of the ground support equipment and the nine umbilicals that connect the Mobile Launcher directly to the SLS and Orion. There have also been ground support equipment and umbilical design changes both during and after the Mobile Launcher’s design phase because of vehicle requirement changes from SLS and Orion. Officials indicated this approach was problematic because the concurrency increased program risk.

Further, according to officials, the decision to have separate contracts for design and construction exacerbated these challenges. Officials indicated that this contracting strategy meant that design changes required multiple levels of review and approval from NASA and each of the program’s contractors, which in turn led to numerous contract modifications. According to EGS officials, the program plans to incorporate lessons learned from developing the first Mobile Launcher into the acquisition approach for a second Mobile Launcher that NASA is building to allow for future configurations of the SLS vehicle.
Specific lessons officials plan to carry forward to the second Mobile Launcher include:

- implementing an integrated design process, including establishing a process to better handle requirement changes during design and construction;
- developing and maintaining a three-dimensional (3D) model to facilitate integrated design; and
- enabling builder involvement during the design process to avoid pitfalls during construction.

However, these lessons learned do not address metrics to assess design stability before starting construction. Our work on acquisition best practices shows that good processes that mature designs early in development and ensure that the design meets requirements can position a program for future success and lead to more predictable cost and schedule outcomes. Traditionally, we have used the number of releasable engineering drawings as a metric to assess design stability. Specifically, our work has found that achieving design stability at the product critical design review, usually held midway through product development, is a best practice. Completion of at least 90 percent of engineering drawings at this point provides tangible evidence that the product’s design is stable.

We have also found that the U.S. Navy and the commercial shipbuilding industry use 3D product models as tools to document design stability. We found that there are aspects of shipbuilding that are analogous to building a Mobile Launcher in that both involve designing and building a large metal structure and installing multiple complex integrated systems to support complex functions such as launching spacecraft, or in the case of the Navy, launching aircraft and/or missile systems. NASA officials agreed that developing a Mobile Launcher is analogous to shipbuilding. Best practices for commercial shipbuilding indicate that 3D product models documenting 100 percent of the system’s basic and functional designs should be complete before construction begins.
Basic design includes fixing the ship steel structure; routing all major distributive systems, including electricity, water, and other utilities; and ensuring the ship will meet the performance specifications. Functional design includes providing further iteration of the basic design, providing information on the exact position of piping and other outfitting in each block, and completing a 3D product model. The combined basic and functional designs, in conjunction with the 3D product model, provide the shipbuilder a clear understanding of the ship structure as well as how every system is set up and routed throughout the ship. This detailed knowledge allows commercial shipbuilders to design, build, and deliver complex ships such as floating production storage and offloading vessels, which are able to collect, process, and store oil from undersea oil fields, within schedule estimates.

The improved design processes the EGS program is pursuing in the development of the second Mobile Launcher, including the development of a 3D model to facilitate integrated design, have the potential to improve program outcomes. Further, achieving design stability before beginning construction would also improve this potential.

Software. The program’s two software development efforts represent the EGS critical path, and program officials stated that recent changes have begun to address previous challenges with the software development. For example, officials explained that the program has implemented iterative integration testing and has identified lead engineers for each software development area. The iterative integration testing involves conducting tests on smaller segments of software throughout the development process instead of waiting to conduct testing when a software release is fully complete. According to officials, these efforts allow the program to identify and correct errors prior to completing a full software drop.
These changes have also resulted in lower numbers of issues found in some software releases. Further, the 6-month delay to the SLS and Orion programs has provided additional flexibility to EGS’s software development schedule. Finally, with respect to EGS’s performance against its cost baseline, EGS updated its cost estimate as part of the December 2017 replan. The EGS program continues to operate within the costs established for the June 2020 launch date, $3.2 billion, but any delays beyond June 2020 will result in additional cost growth. Contractors Received Majority of Award Fees but NASA Experienced Poor Program Outcomes NASA’s award fee plans for the SLS stages and Orion crew spacecraft contracts provide for hundreds of millions of dollars to incentivize contractor performance, but the programs continue to fall behind schedule and incur cost overruns. Our past work shows that when incentive contracts are properly structured, the contractor has a profit motive to keep costs low, deliver a product on time, and make decisions that help ensure the quality of the product. Our prior work also shows, however, that incentives are not always effective tools for achieving desired acquisition outcomes. We have found that, in some cases, there are significant disconnects between fees paid and performance: contractors were awarded the majority of the award fees available without achieving desired program results. Additionally, we have found that some agencies did not have methods to evaluate the effectiveness of award fees. The incentive strategies for both the SLS stages and the Orion crew spacecraft contracts include multiple incentives—milestone fees, performance incentive fees, cost incentive fees, and award fees—aimed at incentivizing different aspects of contractor performance. 
These contracts’ milestone fees, performance incentive fees, and cost incentive fees are generally determined against objective criteria, such as meeting a date or applying a predetermined formula. For example, NASA will pay a milestone fee to Boeing under the SLS contract when it meets a specific program milestone, such as transferring the core stage to the government for the green run test. Under this contract, Boeing receives an additional milestone fee when it beats a milestone date and a reduced fee when it misses a milestone date. Likewise, predetermined formula-type incentives—such as these contracts’ performance incentive fees and cost incentive fees—are typically determined based on objective criteria, such as meeting technical metrics or predetermined cost targets. Award fees on these types of contracts are generally determined through periodic evaluations, conducted every 6 to 12 months, of the contractor’s performance against criteria outlined in the award fee plan. For example, according to officials, NASA may evaluate the contractor against technical performance and other criteria, such as the ability to predict and avoid cost overruns, manage risk, or accomplish small business goals. Upon the completion of a formal review, performance evaluation board officials make recommendations to the fee determination official on the amount of fee to be paid. Figures 9 and 10 provide overviews of the total incentive fee available on the current SLS stages and Orion crew spacecraft contracts, by type and percentage. Under the terms of the current contracts, Boeing has earned about $271 million in award fee and Lockheed Martin has earned about $294 million in award fee. Since each program held its confirmation review, the point in time when a program established its cost and schedule baselines, NASA has paid the majority of available award fee to both contractors. 
Specifically, NASA has paid Boeing about 81 percent of available award fee—or about $146 million—and Lockheed Martin about 93 percent—or about $88 million—since their respective program confirmation reviews. During the annual award fee periods, the descriptive ratings both contractors received ranged from good to excellent. In the subjective appraisals supporting these ratings, NASA identified both strengths that indicate areas of good contractor performance and weaknesses that indicate areas of poor contractor performance. Table 5 includes the results of award fee determinations since the respective program confirmations. The numerical score for each evaluation period represents the percentage of fee paid to the contractor from the available fee pool. Examples of strengths and weaknesses NASA identified in the award fee letters include the following: For the Boeing award fee period ending February 2015, NASA identified several strengths, including effective and timely communication, but stated that its subcontractor management for the vertical assembly center was inadequate. In particular, the program discovered during this time that the as-built design of the vertical assembly center tool was not capable of serving its purpose, which is to build core stage hardware. The design issue resulted in several months of schedule delays. NASA also raised concerns about Boeing’s ability to manage to the baseline schedule in a subsequent award period. For the Lockheed Martin award fee period ending April 2017, NASA identified several strengths, including addressing top program development risks such as establishing a robust mitigation plan to address risks related to the heatshield block architecture. At the same time, NASA noted that Lockheed Martin was not able to maintain its schedule for the crew service module and that the contractor’s schedule performance had decreased significantly over the previous year. 
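The percentage figures above follow directly from dividing fee paid by the available fee pool. As a rough consistency check, the approximate pool sizes can be back-calculated from the report's rounded dollar amounts; this is an illustrative sketch, and the implied pool sizes are inferred, not figures stated in the report:

```python
# Back-calculate the approximate available award-fee pools from the
# report's rounded figures: pool = fee paid / fraction of pool paid.
# Dollar amounts are in millions; results are approximations only.

def implied_pool(paid_millions: float, percent_paid: float) -> float:
    return paid_millions / (percent_paid / 100)

print(round(implied_pool(146, 81)))  # Boeing: roughly $180 million available
print(round(implied_pool(88, 93)))   # Lockheed Martin: roughly $95 million available
```

Back-calculating the pools this way also makes clear that the two contractors' award-fee pools differ substantially in size, so similar percentages do not imply similar dollar amounts.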
While both the SLS and Orion contractors have received the majority of available award fee in each award fee period, the programs have not always achieved overall desired outcomes. For example, in its December 2018 award fee letter to Boeing—representing the good assessment for the September 2017 through October 2018 period of performance—the fee determination official noted that the significant schedule delays on this contract have caused NASA to restructure the flight manifest for SLS. As previously discussed, within 1 year of announcing a delay for the first mission, senior NASA officials acknowledged that the SLS and Orion programs will not meet the new EM-1 schedule of December 2019, and the 6 months of schedule reserve available to extend the date to at least June 2020 has been consumed. In addition, the officials identified 6 to 12 months of risk to that date, which could increase the delay to up to 31 months. These 6 to 12 months of schedule risk do not include the effects, if any, of the federal government shutdown that occurred in December 2018 and January 2019 due to a lapse in appropriations for fiscal year 2019. Both the contractors and the government bear responsibility for these delays. We have previously found that NASA has made programmatic decisions—including establishing low cost and schedule reserves, managing to aggressive schedules, and not following best practices for earned value management—that have compounded technical challenges that are expected for inherently complex and difficult large-scale acquisitions. Further, we previously reported that NASA did not follow best practices for establishing cost and schedule baselines for these programs nor update cost and schedule analyses to reflect new risks. As a result, NASA overpromised what it could deliver from a cost and schedule perspective. At the same time, both contractors have had challenges that contributed to past delays. 
For example, in 2015, Boeing was unable to manufacture an intertank panel—which resides between the liquid oxygen and liquid hydrogen tanks—without significant cracking. At the time, NASA estimated that resolving this issue could result in a 6-month slip to the production schedule. Further, as previously discussed, NASA discovered during installation that fuel lines used in the engine section were contaminated with residue and other debris. According to a program official, Boeing had not verified the processes that its vendors were using to clean the fuel lines, resulting in about 2 months’ delay to resolve residue and debris issues. SLS officials indicated that the engine section has a very complex design with many parts in a relatively small, cramped area, so any time problems are found with parts that have already been installed, removing, repairing, or replacing them often requires that other parts be removed. Furthermore, as some of the tubing sections had already been installed, resolving this issue, including inspecting, shipping, and cleaning the tubing, affected the overall program schedule. In addition, NASA determined in 2017 that Lockheed Martin would not meet the delivery date for the crew module—even if the European Service Module were on schedule—when numerous problems, including design issues, damage during testing, and manufacturing process changes, resulted in major schedule impacts to the program. Lockheed Martin also had a number of issues with subcontractor-supplied avionics system components failing during testing that have required time to address. NASA has highlighted concerns over Lockheed Martin’s ability to manage subcontractors in award fee evaluation periods from 2016 to 2018, and the resulting significant cost, schedule, and technical risk impacts to the program. 
In an attempt to resolve these issues and to improve subcontractor oversight moving forward, Lockheed Martin officials told us that they have placed staff in the subcontractor facilities. Because of these cost increases and delays, the agency plans to renegotiate the Boeing contract for SLS. NASA officials stated that Boeing expects its costs to exceed the cost-reimbursement contract’s not-to-exceed estimated total cost, which will lead to contract renegotiation. Consequently, the contractor has been executing work under an undefinitized contract action since September 2018. Contract actions such as these authorize contractors to begin work before reaching a final agreement with the government on contract terms and conditions. Orion program officials stated that NASA is modifying the cost and period of performance aspects of its contract with Lockheed Martin for Orion development and negotiating a new contract with Lockheed Martin for Orion operations and production. Officials told us the following: NASA is modifying the Orion development contract with Lockheed Martin because the contractor will exceed the cost-reimbursement contract’s not-to-exceed estimated total cost. Orion program officials indicated that poor performance on the part of the contractor resulted in the contractor exceeding the costs allowed under the contract without completing the full scope of work. Consequently, NASA is modifying the contract to allow increased costs. Orion officials indicated that since the cost growth is contractor caused, the contractor will not have the ability to earn any fees on this increased cost. NASA is also modifying the Orion development contract to extend the contract period of performance. The current contract’s period of performance ends in December 2020, which is earlier than NASA’s planned EM-2 launch date of June 2022. Orion program officials stated that this extension is largely driven by delays in receipt of the European Service Module. 
According to officials, NASA is negotiating the terms of the Orion production and operations contract with Lockheed Martin. This contract is expected to support future production of the Orion spacecraft from Exploration Mission-3 potentially through 2029. In addition to production, this effort will include sustaining engineering and flight operations support, with limited development to allow mission kits to be built to specifications as mission objectives are defined. Orion program officials indicated that NASA plans to eventually transition the contract to a fixed-price type contract for production, but that the development of mission kits will remain under a cost-reimbursement type contract with some type of incentive fee. In November 2018, senior leaders within the ESD organization told us that it was not clear whether NASA would renegotiate how incentive fees are distributed among milestone incentive fees, cost incentive fees, and award fees as part of the upcoming Boeing contract renegotiations. NASA, however, has made these types of changes in the past. For instance, the Orion program redistributed fees in 2014 to include an incentive fee component when the contract transitioned from the Constellation program to the Orion program. The Federal Acquisition Regulation and NASA contracting guidance indicate that award fee is appropriate when the work to be performed is such that it is neither feasible nor effective to devise predetermined objective incentive targets applicable to cost, schedule, and technical performance. However, now that the SLS and Orion programs are further into the acquisition life cycle, the programs are at the point in development wherein it may be possible to determine more objective targets for cost, schedule, and technical performance, especially for the first mission. Further, a principle of federal internal controls is that management should design control activities to achieve objectives and respond to risks. 
This includes management conducting reviews to compare actual performance to planned or expected results, and taking corrective actions to achieve objectives. Without reevaluating its strategy for incentivizing contractors, NASA will miss an opportunity to consider whether changes to the incentive structure could better achieve expected results, such as motivating the contractor to meet upcoming milestone events within cost and schedule targets. Conclusions NASA’s SLS, Orion, and EGS programs are a multi-billion dollar effort to transport humans beyond low-Earth orbit, but the agency has been unable to achieve agreed-to cost and schedule performance. NASA acknowledges that future delays to the June 2020 launch date are likely, but the agency’s approach to estimating cost growth for the SLS and Orion programs is misleading and does not provide decision makers, including the Administrator, with complete cost data with which to assess whether Congress needs to be notified of a cost increase, pursuant to law. By not using a similar set of assumptions regarding what costs are included in the SLS baseline and updated SLS cost estimates, NASA is underreporting the magnitude of the program’s cost growth. Similarly, NASA is underreporting the Orion program’s cost performance by measuring cost growth to an earlier-than-agreed-to schedule date. As a result, Congress and the public continue to accept further delays to the launch of the first mission without a clear understanding of the costs associated with those delays. Further, NASA is now turning its attention to new projects to support future missions, including building a second Mobile Launcher. Ensuring design stability before construction start would better position NASA to improve its acquisition outcomes for this next Mobile Launcher. Finally, contractor performance to date has not produced desirable program cost and schedule outcomes. 
Ongoing and planned contract negotiations present an opportunity to restructure the government’s approach to incentives. Such steps may position the agency to obtain better outcomes going forward. Recommendations for Executive Action We are making the following four recommendations to NASA:

We recommend the NASA Administrator ensure that the NASA Associate Administrator for Human Exploration and Operations direct the SLS program to calculate its development cost growth using a baseline that is appropriately adjusted for scope and costs NASA has determined are not associated with the first flight, and determine if the development cost growth has increased by 30 percent or more. (Recommendation 1)

We recommend the NASA Administrator ensure that the NASA Associate Administrator for Human Exploration and Operations direct the Orion program to update its cost estimate to reflect its committed EM-2 baseline date of April 2023. (Recommendation 2)

We recommend the NASA Administrator ensure that the NASA Associate Administrator for Human Exploration and Operations direct the EGS program to demonstrate design maturity by completing 3D product modeling of the basic and functional design of the second Mobile Launcher prior to construction start. (Recommendation 3)

We recommend the NASA Administrator ensure that the NASA Associate Administrator for Human Exploration and Operations direct the SLS and Orion programs to reevaluate their strategies for incentivizing contractors and determine whether they could more effectively incentivize contractors to achieve the outcomes intended as part of ongoing and planned contract negotiations. (Recommendation 4)

Agency Comments and Our Evaluation NASA provided written comments on a draft of this report. These comments, and our assessment of them, are included in appendix II. NASA also provided technical comments, which were incorporated as appropriate. 
In responding to a draft of this report, NASA concurred with three recommendations and partially concurred with a fourth, and identified actions that it plans to take. NASA partially concurred with our recommendation to direct the Orion program to update its cost estimate to reflect its committed EM-2 baseline date of April 2023. In its response, NASA stated that providing the estimate to the forecasted launch date—September 2022—rather than to the committed baseline date of April 2023 is the most appropriate approach. Further, NASA stated that any additional slips to the program involve considerable uncertainty associated with “unknown-unknowns” which are, by their very definition, impossible to predict or forecast, and that attempting to forecast these at this point is neither practical nor useful to help manage the program. If the schedule projections go beyond September 2022, NASA stated that the Orion program will follow standard Agency processes and update its cost estimate to reflect the updated schedule projections. NASA established Orion’s EM-2 launch date of April 2023 as part of the agency’s program confirmation process in 2015. According to federal law, NASA is required to track and report progress relative to the cost and schedule baselines established at the program’s confirmation review. While programs often pursue goals trying to beat these dates and/or cost estimates, the primary purpose of a cost and schedule baseline is to provide a consistent basis for measuring program progress over time. By developing cost estimates only to the program’s goals and not relative to the established baseline, the Orion program is not providing the Agency or the Congress the means of measuring progress relative to the baseline. We agree that it is difficult to forecast the potential impacts of unexpected problems. 
NASA guidance, however, provides instructions to programs on the percentage/relative level of cost reserves that should be maintained to deal with potential unknown-unknowns that are likely to come up late in development. We continue to believe that NASA should fully implement this recommendation. We are sending copies of this report to the NASA Administrator and interested congressional committees. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or chaplainc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. Appendix I: Objectives, Scope, and Methodology To assess the performance of the human space exploration programs, including any technical challenges, relative to their cost and schedule commitments, we obtained and analyzed cost and schedule estimates for the Space Launch System (SLS), Orion Multi-Purpose Crew Vehicle (Orion), and Exploration Ground Systems (EGS) programs through November 2018. We then compared these estimates against program baselines to determine cost growth and schedule delays. We also interviewed SLS program officials and reviewed cost data to determine how the program phases costs for future flights outside the current baseline. We then analyzed the SLS program’s current cost estimate to determine how the scope of the current estimate had changed relative to the scope of the SLS baseline cost estimate. Moreover, we obtained and reviewed quarterly reports and the programs’ risk registers, which list the top program risks and their potential cost and schedule impacts, including mitigation efforts to date. We then discussed risks with program officials. 
We also compared program schedules across three points in time—schedules from when NASA first established baselines for each program, schedules established for each program following the replan in December 2017, and schedules as of November 2018—to assess whether program components and software were progressing as expected. Furthermore, for the EGS program, we reviewed program-level lessons learned regarding the acquisition of the Mobile Launcher against acquisition best practices to determine the extent to which the program plans to incorporate these best practices as part of its acquisition planning for the second Mobile Launcher. To determine the extent to which NASA’s use of contract award fees is achieving desired outcomes, we analyzed contract modifications, award fee plans, and fee determination records for the Orion crew spacecraft and SLS stages—or stages—contracts. We selected these contracts because they represent the largest development efforts for each program. We analyzed contract documentation to determine the amount of award fee available on these contracts compared to other incentives, such as milestone incentives, and calculated fees paid to date. Specifically, for award fee on both contracts, we reviewed fee determination records for evaluation periods after the SLS program’s confirmation review in 2014 and the Orion program’s confirmation review in 2015 to determine fees paid, numeric and descriptive ratings awarded for each period, and contractor strengths and weaknesses identified by the program. Moreover, we reviewed award fee documentation to identify broader program challenges and compared fee determination results to overall program outcomes since program confirmation. For the Orion contract, the scope of our incentive fee analysis included the full scope of incentive fees available for developing and manufacturing the Orion spacecraft from the beginning of the contract. 
For the SLS contract the scope of our incentive fee analysis included the incentive fees available for 1) contract line item number 9 of the contract which includes the full scope of stages work supporting SLS’s EM-1 effort, and 2) contract line item number 12 indefinite-delivery, indefinite-quantity support task activities for contract line item number 9. We performed our work at Johnson Space Center in Houston, Texas; the Boeing Company in Huntsville, Alabama; Marshall Space Flight Center in Huntsville, Alabama; Kennedy Space Center in Kennedy Space Center, Florida; Lockheed Martin Space Systems Company in Houston, Texas; and NASA headquarters in Washington, DC. We based our assessment on data collected prior to the federal government shutdown that occurred in December 2018 and January 2019 due to a lapse in appropriations for fiscal year 2019. This assessment does not reflect the effect, if any, of the shutdown on the programs’ costs and schedules or a March 2019 announcement that NASA is studying how to accelerate the SLS schedule. We assessed the reliability of program data we used to support this engagement using GAO reliability standards as appropriate, including reviewing related documentation, interviewing knowledgeable agency officials, and performing selected testing of data. We determined the data was sufficiently reliable for the purposes of this engagement. We conducted this performance audit from March 2018 to June 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Comments from the National Aeronautics and Space Administration GAO Comments 1. 
This report acknowledges the complexity of NASA’s deep space exploration systems. The introduction section of this report acknowledges that NASA is developing systems planned to transport humans beyond low-Earth orbit, including to the Moon and eventually Mars, and that each of these programs represents a large, complex technical and programmatic endeavor. The introduction also notes that these programs are in the integration and test phase of development, which our prior work has shown often reveals unforeseen challenges leading to cost growth and schedule delays. 2. Senior NASA officials told us that the revised EM-1 launch date of December 2019 is unachievable and the June 2020 launch date (which takes into account schedule reserves) is unlikely. These officials then estimated that there are 6 to 12 months of schedule risk associated with the June 2020 date. It would be misleading for us to continue to report the June 2020 launch date when we were told there was substantive risk to that date. Without a new approved schedule, Figure 3, Figure 5, and Figure 8 all present a notional launch window including the acknowledged schedule risks. We then used the information NASA provided us to report that the first launch may occur as late as June 2021, if all risks are realized. Further, this substantial delay to the first mission was acknowledged by senior officials less than one year after NASA announced up to a 19-month delay. We maintain that continued underperformance contributed to these additional schedule delays and associated cost increases. For example, for SLS, NASA discovered during installation that fuel lines used in the engine section were contaminated with residue and other debris. According to a program official, Boeing had not verified the processes that its vendors were using to clean the fuel lines, resulting in about 2 months’ delay to resolve residue and debris issues. 
For the Orion program, NASA determined in 2017 that Lockheed Martin would not meet the delivery date for the crew module—even if the European Service Module were on schedule—when numerous problems, including design issues, damage during testing, and manufacturing process changes, resulted in major schedule impacts to the program. As a result, we also maintain that these delays and cost growth reinforce concerns over the management of the programs. In addition to the underperformance, NASA’s management decisions on how to report cost growth are not fully transparent and, in particular, obscure the difficulties the SLS program has faced controlling costs. 3. We agree that these are long-term, “multi-decadal” programs and that content is subject to change. As a result, we maintain that arbitrarily focusing on a single mission and not looking at long-term costs may have negative impacts on this human spaceflight system. We previously reported in May 2014 that NASA does not have a cost and schedule baseline for SLS beyond the first flight. As a result, NASA cannot monitor or track costs shifted beyond EM-1 against a baseline. We recommended that NASA establish cost and schedule baselines that address the life cycle of each SLS increment, as well as for any evolved Orion or ground systems capability. NASA partially concurred with the recommendation, but has not taken any action to date. Until action is taken to do so, as noted above, NASA’s decision to shift some SLS costs to future missions while not adjusting the baseline downward not only underestimates cost growth for the first mission, but also results in there being no mechanism to track these costs that NASA shifted to future missions. 4. Through the course of this review, NASA was transparent in its discussions with us of how it calculated costs for each of the programs. 
The findings of this report are not meant to convey that NASA is withholding information, but rather, that decisions NASA has made about how to calculate costs do not provide sufficient transparency into cost growth or cost estimates. Further, we have previously reported that without transparency into costs for future flights, NASA does not have the data to assess long-term affordability and Congress cannot make informed budgetary decisions. Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Cristina T. Chaplain, (202) 512-4841 or chaplainc@gao.gov. Staff Acknowledgments In addition to the contact named above, Molly Traci, Assistant Director; Andrea Bivens; Sylvia Schatz; Ryan Stott; Tanya Waller; John Warren; Alyssa Weir; and Robin Wilson made significant contributions to this report.
Why GAO Did This Study NASA is undertaking a trio of closely related programs to continue human space exploration beyond low-Earth orbit. All three programs (SLS, Orion, and supporting ground systems) are working toward a launch readiness date of June 2020 for the first mission. The House Committee on Appropriations included a provision in its 2017 report for GAO to continue to review NASA's human space exploration programs. This is the latest in a series of reports addressing the mandate. This report assesses (1) how NASA's human space exploration programs are performing relative to cost and schedule commitments, and (2) the extent to which NASA's use of contract award fees is achieving desired program outcomes. To do this work, GAO examined program cost and schedule reports and contractor data, and interviewed officials. This report does not assess the effect, if any, of the government shutdown that ended in January 2019. What GAO Found Due to continued production and testing challenges, the National Aeronautics and Space Administration's (NASA) three related human spaceflight programs have encountered additional launch delays and cost growth. In November 2018, within one year of announcing an up to 19-month delay for the three programs—the Space Launch System (SLS) vehicle, the Orion spacecraft, and supporting ground systems—NASA senior leaders acknowledged that the revised date of June 2020 is unlikely. Any issues uncovered during planned integration and testing may push the launch date as late as June 2021. Moreover, while NASA acknowledges about $1 billion in cost growth for the SLS program, that figure is understated. This is because NASA shifted some planned SLS scope to future missions but did not reduce the program's cost baseline accordingly. When GAO reduced the baseline to account for the reduced scope, the cost growth is about $1.8 billion. In addition, NASA's updated cost estimate for the Orion program reflects 5.6 percent cost growth. 
The estimate is not complete, however, as it assumes a launch date that is 7 months earlier than Orion's baseline launch date. If the program does not meet the earlier launch date, costs will increase further. Updating baselines to reflect current mission scope and providing complete cost estimates would provide NASA management and Congress with a more transparent assessment of where NASA is having difficulty controlling costs. NASA paid over $200 million in award fees from 2014-2018 related to contractor performance on the SLS stages and Orion spacecraft contracts. But the programs continue to fall behind schedule and overrun costs. Ongoing contract renegotiations with Boeing for the SLS and Lockheed Martin for the Orion program provide NASA an opportunity to reevaluate its strategy to incentivize contractors to obtain better outcomes. What GAO Recommends GAO is making four recommendations to NASA, including that the SLS program should calculate cost growth based on costs that are currently included in the first mission and the Orion program should update its cost estimate to reflect the schedule agreed to in its baseline. In addition, the SLS and Orion programs should reevaluate their strategy for incentivizing contractors. NASA concurred with three recommendations, and partially concurred with the recommendation related to the Orion program's cost estimate. GAO believes the recommendation remains valid, as discussed in the report.
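The SLS baseline adjustment GAO describes reduces to simple arithmetic: moving scope out of the baseline without reducing the baseline by the same amount hides growth equal to the shifted scope. A minimal sketch of that adjustment, in which the $0.8 billion shifted-scope figure is inferred from the difference between the $1.8 billion and $1 billion growth numbers rather than stated in the report:

```python
# Sketch of the baseline adjustment GAO applied, in billions of dollars.
# The shifted-scope value is inferred (1.8 - 1.0), not an official figure.

reported_growth = 1.0  # growth NASA acknowledges against the unadjusted baseline
shifted_scope = 0.8    # planned scope moved to future missions (inferred)

# Since growth = current estimate - baseline, reducing the baseline by the
# shifted scope raises measured growth by exactly the shifted amount.
adjusted_growth = reported_growth + shifted_scope
print(f"{adjusted_growth:.1f}")  # 1.8
```

The point of the adjustment is simply that the comparison baseline must cover the same scope as the current estimate; otherwise the difference between them understates growth.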
gao_GAO-20-35
Background
Veterans with SUDs
In fiscal year 2018, VHA data show that 518,570 veterans received any treatment (specialty or non-specialty services) from VHA's health care systems for a diagnosed SUD, a 9.5 percent increase from fiscal year 2016 (see figure 1). Because these data include non-specialty services, they do not indicate the extent to which a veteran received SUD services. For example, a provider briefly discussing the SUD of a veteran in long-term recovery during a primary care visit would be included in SUD treatment data. VHA data show that the majority of veterans who received any treatment from VHA's health care systems for a diagnosed SUD had an alcohol use disorder. Veterans received any treatment from VHA for a diagnosed SUD at a higher rate than the general population. Data from the 2017 National Survey on Drug Use and Health indicate that 1.5 percent of individuals aged 18 or older nationwide received any SUD treatment in the past year. In comparison, 8 percent of veterans getting health care provided or purchased by VHA received any treatment for a diagnosed SUD in fiscal year 2017, including individuals who received specialty SUD services as well as individuals who received non-specialty services in, for example, primary care or general mental health clinics.

Specialty SUD Services
VHA's health care systems provide specialty SUD services in three settings of increasing intensity (see figure 2):

Outpatient services. Individual and group therapy, either in person or via telehealth, among other services. VHA also offers intensive outpatient programs, which provide services for 3 or more hours per day, at least 3 days a week.

Residential rehabilitation treatment programs. Medically monitored, high-intensity care in a 24-hour supervised environment specifically dedicated to treating SUDs. These programs may also provide social services for community reintegration and treatment for other medical conditions during a veteran's stay.
Inpatient services. Acute in-hospital care, which may include detoxification services.

Medication-Assisted Treatment for Opioid Use Disorder
For veterans with opioid use disorder—a subset of SUDs—VHA's health care systems provide medication-assisted treatment, which combines behavioral therapy and the use of certain medications, including methadone and buprenorphine. Medication-assisted treatment has proven to be clinically effective in reducing the need for inpatient detoxification services for individuals with opioid use disorder, according to SAMHSA.

Methadone. This medication suppresses withdrawal symptoms during detoxification. It also controls the craving for opioids in maintenance therapy, which is ongoing therapy meant to prevent relapse and increase treatment retention. Methadone is a controlled substance and, when used to treat opioid use disorder, may generally be administered or dispensed only within a certified opioid treatment program to help prevent diversion.

Buprenorphine. This medication eliminates opioid withdrawal symptoms, including drug cravings, and may do so without producing the euphoria or dangerous side effects of other opioids. It can be used for both detoxification and maintenance therapy. Buprenorphine is also a controlled substance and, when used to treat opioid use disorder, may be administered or dispensed within an opioid treatment program, or prescribed or dispensed by a qualifying provider who has received a waiver to do so. Providers who receive this waiver are limited in the number of patients they may treat for opioid use disorder.

In addition to medication-assisted treatment, VHA has initiatives aimed at preventing opioid-related overdose deaths. For example, VHA's Opioid Overdose Education and Naloxone Distribution program includes education and training on opioid overdose prevention as well as naloxone distribution. Naloxone is a medication that can reverse opioid overdoses.
Care in the Community
Veterans may receive services from community providers via local contracts or community care. For local contracts, individual VA medical centers establish contracts with local community providers. For example, a VA medical center may develop a contract with a community residential rehabilitation treatment program provider to set aside a number of beds specifically for veterans. For community care, veterans may be eligible if, for example, VHA does not offer the care or service the veteran requires or cannot provide it consistent with its access standards. In general, community care services must be authorized before veterans access the care. Prior to June 6, 2019, eligible veterans could receive community care via one of multiple VHA community care programs. In 2018, the VA MISSION Act required VA to implement a permanent community care program consolidating several existing programs. On June 6, 2019, the consolidated program, the Veterans Community Care Program, went into effect.

Number of Veterans Receiving, and Expenditures for, VHA Specialty SUD Services Have Remained Unchanged in Recent Years; Community Care SUD Services Have Increased

Number of Veterans Receiving Specialty SUD Services in VHA's Health Care Systems and Related Expenditures Were Relatively Unchanged Between Fiscal Years 2014 and 2018
Among the 518,570 veterans who received SUD services in fiscal year 2018, VHA provided specialty SUD services to 152,482. This number has increased slightly since fiscal year 2014 but remained relatively unchanged, as shown in table 1 below. These veterans received care in VHA's health care systems—that is, in VA medical centers or in one of the medical centers' affiliated outpatient clinics and other medical facilities.
During the same time period, VHA expenditures for these specialty SUD services increased from $552 million in fiscal year 2014 to $601 million in fiscal year 2018. Total specialty SUD expenditures per capita increased from $3,691 to $3,941 over fiscal years 2014 through 2018. Adjusted for inflation, however, per capita expenditures remained relatively unchanged.

Most Veterans Received Specialty SUD Services in Outpatient Settings; Medication-Assisted Treatment Has Increased for Opioid Use Disorders in Recent Years
Our analysis of VHA data shows that veterans received specialty SUD services from VHA's health care systems in multiple settings from fiscal years 2014 through 2018, with most veterans receiving these services in outpatient settings. Veterans may receive specialty SUD services across multiple settings within a year. Below, we provide information on utilization and expenditures for specialty SUD services in outpatient and residential treatment programs and for medication-assisted treatment for veterans with opioid use disorder.

Specialty Outpatient Settings
In fiscal year 2018, nearly all veterans who received specialty SUD services from VHA's health care systems received this care in outpatient settings at some point during the year. Of those veterans, 17 percent received intensive outpatient specialty SUD services, with little change from previous years. Expenditures for outpatient specialty SUD services increased from fiscal years 2014 through 2018, as shown in table 2 below. During this period, outpatient specialty SUD expenditures per capita increased from $2,176 to $2,348; adjusted for inflation, per capita expenditures grew 1.5 percent. In addition, we found little change in the number of full-time employee equivalents actively providing outpatient specialty SUD services from fiscal years 2015 through 2018.
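The per capita and inflation-adjusted figures in this section follow from simple arithmetic: total expenditures divided by the number of veterans served, with earlier years rescaled into fiscal year 2018 dollars using a price index. A minimal sketch of that calculation; the price-index values below are illustrative placeholders, not figures from this report:

```python
def per_capita(total_expenditures, veterans_served):
    """Expenditures per veteran served in a fiscal year."""
    return total_expenditures / veterans_served

def to_current_dollars(amount, index_then, index_now):
    """Rescale a past dollar amount into current-year dollars via a price index."""
    return amount * (index_now / index_then)

# Figures from the report: $601 million spent on specialty SUD services for
# 152,482 veterans in fiscal year 2018 yields roughly $3,941 per capita.
fy2018_per_capita = per_capita(601_000_000, 152_482)

# With an illustrative ~6.8 percent cumulative price increase (assumed, not
# from the report), the FY2014 per capita figure of $3,691 lands near the
# FY2018 figure, which is why per capita spending reads as "relatively
# unchanged" after adjusting for inflation.
fy2014_in_2018_dollars = to_current_dollars(3_691, 100.0, 106.8)
```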
VHA did not provide specialty outpatient wait-time data because, according to VHA officials, the data do not reliably capture veterans' wait times to receive SUD services in outpatient settings. VHA officials explained that veterans may receive non-specialty SUD services in various outpatient settings, including primary care and general mental health clinics. Therefore, a wait-time measure for specialty SUD services would not accurately capture whether veterans are waiting for SUD services not previously provided or for services continuing ongoing treatment begun in a primary care or general mental health clinic. As a result, we did not analyze outpatient wait-time data. In prior work, we have made recommendations to VHA on ways it can improve its outpatient wait-time data (see sidebar).

Specialty Residential Rehabilitation Treatment Programs
As of fiscal year 2018, VHA had residential rehabilitation treatment programs available for veterans with complex and long-term mental health needs at 113 facilities, and 67 of these programs were dedicated to SUD treatment. Both the number of residential rehabilitation treatment programs dedicated to SUD treatment and the number of beds available increased from fiscal years 2014 through 2018. Figure 3 shows the location of all 67 residential rehabilitation treatment programs specifically dedicated to SUDs with the corresponding number of beds in fiscal year 2018. See appendix III for more information on residential rehabilitation treatment programs dedicated to SUD treatment. The number of veterans participating in VHA's specialty SUD residential rehabilitation treatment programs (that is, those dedicated to SUD treatment) remained relatively stable from fiscal years 2014 through 2018, as shown in table 3.
Of the veterans who received specialty SUD services in fiscal year 2018, approximately 10 percent participated in one of VHA's 67 residential rehabilitation treatment programs dedicated to SUD treatment, similar to previous years. Meanwhile, expenditures for VHA's residential rehabilitation treatment programs dedicated to SUD decreased from fiscal years 2014 through 2016 but increased in fiscal years 2017 and 2018. Similarly, specialty SUD residential expenditures per capita decreased from $15,386 in fiscal year 2014 to $12,526 in fiscal year 2016 and then increased to $16,031 in fiscal year 2018. After adjusting for inflation, specialty SUD residential expenditures per capita in 2018 were about 2 percent less than in 2014. From fiscal years 2014 to 2018, veterans' average length of stay in VHA's specialty residential rehabilitation treatment programs specifically dedicated to SUD generally decreased, from nearly 40 days to nearly 36 days, while wait times varied across programs. VHA officials said that average length of stay may have decreased as a result of multiple factors, such as programs with longer lengths of stay adjusting their treatment approaches. The median wait times to enter residential rehabilitation treatment programs dedicated to SUD treatment varied considerably, ranging from 0 days to 56 days across the programs in fiscal year 2018, although not all residential rehabilitation treatment programs had sufficient—and therefore reliable—data on wait times. Specifically, out of the 67 residential rehabilitation treatment programs dedicated to SUD, VHA officials identified 12 that did not have sufficient wait-time data, which we excluded from our analysis.
VHA officials noted that some specialty residential rehabilitation treatment programs do not have sufficient wait-time data because the facilities do not consistently code whether a patient's visit included a screening for admission to the program. As a result, VHA cannot tell when patients were initially screened for admission. In fiscal year 2019, officials implemented changes to address the lack of reliable data from some facilities; however, it is too early to tell whether these changes will resolve the data reliability issues in wait-time data for residential rehabilitation treatment programs.

Medication-Assisted Treatment for Opioid Use Disorder
VHA health care systems offer veterans medication-assisted treatment for opioid use disorder in a variety of settings, including outpatient specialty SUD settings and residential rehabilitation treatment programs dedicated to SUD treatment, as well as in non-specialty settings, such as primary care and general mental health clinics. Our analysis of VHA data shows the number and proportion of veterans with an opioid use disorder who received medication-assisted treatment from VHA's health care systems have risen in recent years, as shown in table 4. In fiscal year 2018, 23,798 veterans received medication-assisted treatment, which was 33.6 percent of veterans diagnosed with an opioid use disorder. Veterans with an opioid use disorder may receive medication-assisted treatment through VHA at a lower rate than individuals who receive care through private insurance: according to a study by the Department of Health and Human Services, 50.6 percent of individuals diagnosed with an opioid use disorder and enrolled in private insurance received medication-assisted treatment in 2014 to 2015. Some veterans may also have private insurance and may have received their medication-assisted treatment through that insurance.
In fiscal year 2018, 9,132 (38 percent) of the veterans who received medication-assisted treatment received their care at one of VHA's 33 opioid treatment programs, the only setting where methadone can be administered to treat opioid use disorder. Expenditures for these opioid treatment programs increased from $35.9 million in fiscal year 2014 to $39.1 million in fiscal year 2018. In fiscal year 2018, VHA had 2,036 providers with a waiver to prescribe buprenorphine, a 17.6 percent increase from fiscal year 2017. According to VHA officials, VHA has encouraged its providers—including those who are not specialists in treating SUDs, such as primary care providers—to obtain the waiver required to prescribe buprenorphine to treat opioid use disorder. In fiscal year 2018, there were about 29 VHA providers with a waiver to prescribe buprenorphine for every 1,000 veterans with opioid use disorder, a 14 percent increase from fiscal year 2017.

Naloxone Distribution
VHA's naloxone kit distribution increased dramatically, from 646 kits in fiscal year 2014 to 97,531 kits in fiscal year 2018; a total of 204,557 naloxone kits had been distributed through fiscal year 2018. VHA health care systems distributed naloxone kits to VA staff, including VA first responders and VA police officers, and to veterans with opioid use disorder. Factors contributing to the increase may include the following:

In 2014, VHA implemented the Opioid Overdose Education and Naloxone Distribution initiative to decrease opioid-related overdose deaths among veterans, with one of its key components focused on encouraging naloxone kit distribution. Since the program's implementation, all VHA health care systems dispense naloxone kits.

The Comprehensive Addiction and Recovery Act of 2016 directed VHA to maximize the availability of naloxone to veterans and to ensure that veterans considered at risk for opioid overdose have access to naloxone and training on its proper administration.
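The waivered-provider rate cited above, about 29 providers per 1,000 veterans with opioid use disorder, is a ratio of two counts. A brief sketch of that arithmetic; note the denominator (the diagnosed population) is inferred here from the report's own 33.6 percent treatment share, not taken directly from VHA data:

```python
def rate_per_thousand(count, population):
    """Rate per 1,000 members of a population, e.g., waivered providers."""
    return 1_000 * count / population

# From the report: 23,798 veterans received medication-assisted treatment in
# fiscal year 2018, which was 33.6 percent of veterans diagnosed with opioid
# use disorder. The diagnosed population can therefore be inferred:
oud_population = round(23_798 / 0.336)  # roughly 70,800 veterans (inferred)

# With 2,036 waivered buprenorphine prescribers, the rate works out to
# about 29 providers per 1,000 veterans with opioid use disorder.
waiver_rate = rate_per_thousand(2_036, oud_population)
```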
More Veterans Have Received SUD Services through Community Care in Recent Years; VHA Seeks to Collect Reliable Data on Usage by Community Care Settings

Veterans Health Administration (VHA) Community Care Wait Times
GAO has a body of work highlighting challenges VHA has with the reliability of its wait-time data. See below for recent reports about this issue. We have highlighted the importance of reliable community care wait-time data in a testimony regarding VHA's efforts to address our previous recommendations on these issues. See GAO, Veterans Health Care: Opportunities Remain to Improve Appointment Scheduling within VA and through Community Care, GAO-19-687T (Washington, D.C.: July 24, 2019). We have designated our past recommendations related to community care wait-time data as priorities for the agency. See GAO, Priority Open Recommendations: Department of Veterans Affairs, GAO-19-358SP (Washington, D.C.: Mar. 28, 2019). We have previously made recommendations to VHA to capture the necessary information and improve the reliability of wait-time data for community care. These recommendations remain outstanding as of October 2019. See GAO, Veterans Choice Program: Improvements Needed to Address Access-Related Challenges as VA Plans Consolidation of its Community Care Programs, GAO-18-281 (Washington, D.C.: June 4, 2018).

Through its community care programs, VHA purchased SUD services (specialty and non-specialty) for 20,873 veterans in fiscal year 2018, a significant increase since fiscal year 2014 (see table 5). VHA officials noted that veterans can receive community care in addition to, or instead of, care at a VHA facility; therefore, the number of veterans served through community care cannot be combined with the number who received services within VHA to produce an overall number of veterans receiving care.
Expenditures for these SUD services purchased by VHA also increased over time, from nearly $6 million in fiscal year 2014 to over $80 million in fiscal year 2018. Between fiscal years 2014 and 2018, per capita expenditures for SUD services purchased by VHA increased from $3,021 to $3,852; per capita expenditures adjusted for inflation also increased during this period. These increases coincided with the establishment of the Veterans Choice Program in early fiscal year 2015, which expanded eligibility for community care. Wait-time data for SUD services purchased through community care were not available because of data reliability issues, VHA officials told us. See sidebar for more information on our previous recommendations to VHA regarding community care wait-time data. While VHA is able to report on the overall number of veterans receiving SUD services through community care, data limitations prevent VHA officials from reliably determining whether veterans received this care in residential or outpatient settings. These issues are as follows:

Residential rehabilitation treatment programs. VHA uses billing codes on paid claims to track the settings in which veterans receive community care; however, according to agency officials, there is no specific billing code for a residential setting. VHA officials told us that community residential rehabilitation treatment programs may record treatment provided using inpatient or outpatient billing codes—or a combination of the two—in submitting claims to VHA. As a result, VHA is unable to use claims data to reliably identify veterans who received residential rehabilitation treatment through community care.

Outpatient settings. Because some residential care is coded using outpatient billing codes, outpatient data may contain residential services counted as outpatient services. As a result, VHA is unable to reliably identify veterans who received SUD services in community care outpatient settings.
Currently, VHA is taking steps to address these coding issues. VHA officials told us they are developing a payment code that will bundle together common residential program services, which will allow VHA to identify veterans receiving residential rehabilitation treatment for SUDs through community care. Officials explained that using this code for residential SUD services will allow VHA to better distinguish between residential and outpatient community care because residential care will no longer need to be identified using outpatient codes. In contrast to its community care programs, VHA does not centrally track SUD services provided via local contracts. Rather, the individual medical centers that established the contracts with local community providers are responsible for tracking and documenting SUD services provided to veterans. In fiscal year 2019, VHA began conducting market assessments as part of a broader agency initiative to better understand the supply of and demand for all services at all VA medical centers, including what is available within VHA as well as in the local communities. We reviewed one of the data collection instruments the agency is using as part of this work and found that it should allow VHA to identify, among other things, the number of community residential rehabilitation treatment beds contracted by individual medical centers to serve veterans with SUDs, as well as the number of veterans who received SUD services through local contracts or community care. Agency officials said they expect the market assessments to be completed in 2020.

Veterans' Usage Differed Between Urban and Rural Areas for Some Specialty SUD Services; VHA Is Taking Steps to Address Access Issues in Rural Areas
Although overall use of SUD services was similar among veterans in rural and urban areas, VHA data show the utilization rates of some specialty SUD services differed.
The literature and agency documents we reviewed, as well as VHA officials, consistently cited several issues, such as recruiting SUD providers and accessing necessary prescriptions, that affect the use of services by veterans with SUDs in rural areas. According to agency documents and officials, VHA is taking steps to address these issues.

Overall Use of SUD Services Was Similar for Veterans in Rural Areas Compared to Urban Areas, but Use of Some Specialty SUD Services Differed in Fiscal Years 2014 through 2018
Overall, veterans' use of SUD services was similar in rural areas compared to urban areas, but use of some specialty services differed. Our analysis of VHA data shows that across VHA's 140 health care systems, there was relatively little difference in the overall utilization of SUD services (specialty and non-specialty) in rural and urban areas from fiscal years 2016 through 2018. In fiscal year 2018, for example, 7.5 percent of veterans in rural areas received any SUD services compared with 8.8 percent of veterans in urban areas. However, VHA data also show there were some types of specialty services, such as intensive outpatient specialty services, residential rehabilitation treatment programs, and medication-assisted treatment for opioid use disorder, that rural veterans with SUDs tended to use more or less of than their urban counterparts.

Intensive Outpatient Specialty SUD Services
Among veterans receiving specialty SUD services across all 140 VHA health care systems, veterans in rural locations used intensive outpatient specialty SUD services at a slightly higher rate (19 percent) than veterans in urban locations (17 percent) in fiscal year 2018. While veterans' utilization of these specialty SUD services has decreased in both rural and urban locations in recent years, the decreases have been larger in rural areas.
In rural locations, the percentage of veterans using intensive outpatient specialty SUD services decreased from 25 percent in fiscal year 2015 to 19 percent in fiscal year 2018; in urban areas, the percentage decreased from 18 percent to 17 percent over the same period. Officials from VHA health care systems in three urban locations and two rural locations we spoke with indicated that they offered intensive outpatient specialty SUD services in conjunction with either residential or outpatient services. According to officials from the rural VHA health care system that did not offer this service, the location did not have sufficient staff to provide the additional hours of intensive outpatient specialty SUD treatment each week.

Specialty Residential Rehabilitation Treatment Programs
Veterans in rural locations using specialty SUD services participated in residential rehabilitation treatment programs dedicated to SUD treatment at a higher rate (17 percent) than veterans using these services in urban locations (10 percent) across all 140 VHA health care systems in fiscal year 2018. From fiscal years 2014 through 2018, there was a slight increase in the percentage of rural veterans using specialty SUD services who participated in residential rehabilitation treatment programs dedicated to SUD treatment, from 13 percent to 17 percent. VHA officials told us rural communities often face transportation difficulties that may make residential programs more feasible than intensive outpatient specialty SUD services, which require visits at least 3 days per week at VHA health care systems. All six of the VHA health care systems we interviewed offered residential rehabilitation treatment programs. VHA reported it is currently conducting market assessments that, once complete, may help determine gaps in services for veterans with SUDs, including residential rehabilitation treatment.
Medication-Assisted Treatment for Opioid Use Disorder
Across all 140 VHA health care systems, veterans with an opioid use disorder received medication-assisted treatment (in specialty and non-specialty settings) at a higher rate in urban locations (34 percent) than in rural locations (27 percent) in fiscal year 2018. We also found differences in the availability of medication-assisted treatment services between rural and urban areas:

Methadone. The only setting in which methadone may be used to treat an opioid use disorder is an opioid treatment program, and all of VHA's opioid treatment programs are located in urban areas. Only one of the six selected VHA health care systems in our review had an opioid treatment program. Officials from the other five VHA health care systems we spoke with told us they typically referred veterans to community providers if a veteran needed methadone. Regional VHA officials indicated that some locations, especially rural ones, may not have the number of veterans with opioid use disorder needed to justify the resources required to run an opioid treatment program.

Buprenorphine. The number of waivered providers per 1,000 veterans with opioid use disorder was slightly higher in rural areas (29.9 providers) than in urban areas (28.7 providers) in fiscal year 2018. Non-specialist rural providers, such as primary care providers, may feel a greater responsibility to obtain a waiver because there are fewer specialists to whom they can refer their patients, according to VHA health care system officials. Despite the similar rates of waivered providers in rural and urban areas, rural veterans with opioid use disorder use medication-assisted treatment at a lower rate, as previously mentioned.

VHA Taking Steps to Address Provider Shortage and Access Issues in Rural Areas for Veterans with SUDs
VHA requires that all rural and urban health care systems offer the same range of SUD services (specialty or non-specialty).
However, rural areas have historically faced difficulties delivering all types of health care, including SUD services, according to literature, agency documents, and VHA health care system officials we spoke with. VHA is taking steps to address several issues that affect the delivery of health care services generally, and SUD services in particular, in rural areas.

Shortage of Qualified Providers
Officials from three of the six VHA health care systems we interviewed noted a shortage of SUD specialists in their area, including addiction therapists and providers with a waiver to prescribe buprenorphine. According to one study and agency documents we reviewed, veterans may reside in mental health professional shortage areas at a higher rate than the general population and therefore may have less access to providers qualified to offer medication-assisted treatment or other mental health treatment. One study found that efforts to improve access for veterans in rural areas by purchasing care from community providers may have limited effect, because these areas are relatively underserved generally. Officials from two of the three VHA health care systems in rural areas we selected described difficulty hiring and retaining providers to deliver SUD services. According to the literature we reviewed and officials from half of the VHA health care systems we interviewed, rural communities struggle with recruiting and retaining providers, including SUD providers, and some rural areas report provider shortages with ongoing, long-term vacancies. To respond to these shortages and hiring and retention challenges, VHA has implemented new initiatives and practices to increase the supply of rural health professionals.
A VHA official noted that these efforts include rural health training and education initiatives to provide rural health experience to health professions trainees, including those who provide SUD services. The agency also plans to use expanded recruitment tools, such as greater access to an education debt reduction program; improved flexibility for recruitment, relocation, and retention bonuses; and a pilot scholarship program authorized under the VA MISSION Act of 2018 to hire mental health professionals. However, recruiting health professionals in rural areas, including mental health providers and social workers, remains an issue for VHA and the community at large, and VHA officials noted that data are not yet available to understand the long-term effect of the newly trained providers on the availability of SUD services.

Availability and Use of Telehealth Services for the Delivery of SUD Services
Officials from two VHA health care systems we interviewed noted that providing services such as medication-assisted treatment through telehealth technology is difficult, especially when the SUD service requires monitoring for medication compliance. However, a VHA official told us the use of telehealth services overall has grown rapidly at VHA's health care systems and goes beyond traditional video conference capabilities to include advanced technology, such as an exam camera attached to a computer or videoconference equipment, that allows for an interactive examination. The official added that when the service is provided at a VHA location, the provision of SUD services via telehealth can be supported by medical personnel located at the closest VA facility to complete necessary tests, such as urine screening. VHA officials from one health care system we spoke with, as well as the literature, noted that providing medication-assisted treatment via telehealth technologies requires a cultural change within the profession.
Officials from one VHA health care system we spoke with told us that delivering medication-assisted treatment using technology is risky; for example, buprenorphine is a controlled substance with a risk of misuse. These officials added that many providers may not be open to the idea of delivering this level of treatment using telehealth. One study we reviewed confirmed that acceptance within the profession appears to be the main barrier to the successful implementation of telehealth services. However, VHA's budget and strategic plan show continued support for the use of telehealth for SUD treatment. Studies have shown that telephone services, a type of telehealth service, can potentially achieve the same outcomes as in-person services. Officials from all six VHA health care systems we selected mentioned they had mental health telehealth services available to facilitate the delivery of SUD care to veterans in both urban and rural areas. To ensure adequate access to care, VHA has multiple telehealth initiatives underway. For example, between fiscal years 2017 and 2019, VHA allocated $28.5 million for mental health telehealth hubs at 11 sites. In another instance, VHA allocated more than $750,000 for rural facilities in fiscal years 2018 and 2019 toward a nationwide initiative to improve participation in a program that establishes video connections in the homes of rural veterans so they can receive mental health treatment, including for SUDs, with psychotherapy and psychopharmacology. While VHA has initiatives underway, the success of these efforts is contingent on rural areas having broadband and internet connectivity, which remains a challenge, according to agency documents and officials.

Access to Necessary Prescriptions
VHA's Clinical Practice Guidelines for SUDs recommend methadone and buprenorphine, among other drugs, to treat opioid use disorder.
However, accessing these drugs in rural areas can be challenging, according to literature we reviewed and VHA officials we spoke with. For example, one national study found that opioid treatment programs providing methadone are generally absent from the treatment options in rural areas. Within VHA, all of the opioid treatment programs are in urban areas. In addition, only a small percentage of providers nationwide, particularly in rural areas, have received waivers to prescribe buprenorphine. VHA officials told us they are steadily expanding the availability of medication-assisted treatment for veterans with opioid use disorder. VHA had an interdisciplinary team of VA staff from a single facility within each region receive training on implementing medication-assisted treatment for opioid use disorder. These teams were responsible for spreading information to other facilities. Thus far, VHA reported it has trained over 300 providers using this model. In a separate initiative, a VHA official reported that its Office of Rural Health provided over $300,000 in fiscal year 2019 for a pilot program that trains primary care and mental health providers in the Iowa City VHA health care system on how to provide medication-assisted treatment for opioid use disorder.

Transportation

The availability of transportation is vital for veterans receiving medication-assisted treatment because of the need for frequent travel to VHA health care systems for treatment. When methadone is used to treat opioid use disorder, the medication generally must be administered daily through an opioid treatment program at a specific location. In addition, during the initial stages of buprenorphine treatment, patients must also come into a facility frequently. Veterans living in rural areas who need this level of care may have to travel long distances every day to receive this medication.
Distance and lack of transportation impede access to care, including SUD services, for rural veterans. Specifically, the literature we reviewed noted distance, time, and access to transportation as barriers to care. Veterans may lack access to transportation or may no longer be able to drive because of age, health status, or driving restrictions. Some rely on family, friends, or vans available through community service organizations; however, they may face other difficulties, such as reaching pick-up locations or the organization not having wheelchair-equipped vans. Officials from all six VHA health care systems we selected noted the lack of transportation as a barrier to accessing SUD services. Officials from two rural locations of the six selected VHA health care systems mentioned that volunteers, including a local veteran service organization, assist with getting veterans from their homes to their appointments; however, they added that these services operate on an abbreviated schedule and veterans are sometimes subjected to riding in the vehicle for long periods of time (2 hours each way). A VHA official told us that over the last 10 years the agency has allocated between $10 million and $12.9 million for its Veterans Transportation Service for new vehicles, drivers, and mobility managers to assist with rural transportation needs.

Additional VHA Plans to Address Rural Health Issues for SUD Services

The VA MISSION Act of 2018 includes provisions that specifically address the need to improve veterans’ access to health care in areas with shortages of health care providers, including those providing SUD and mental health services. Based on this legislation, in June 2019, VHA published a plan organized in three areas: increasing personnel, using technology to connect veterans to care through public and private partnerships, and expanding VHA’s infrastructure through building or acquiring space to address the problem of underserved facilities.
For example, VHA has a pilot program with 11 Walmart sites and 15-20 additional sites planned with Philips Healthcare, the Veterans of Foreign Wars, and the American Legion to enable veterans who lack the necessary technology in their home and live far from a VHA facility to receive remote health care at a convenient location. VHA’s plan indicates that while all VHA health care systems can use any of the strategies covered under this legislation, they will provide specific additional technical assistance for underserved facilities, monitor the effectiveness of these strategies, and share the findings of this work throughout the broader VHA system.

Agency Comments

We provided a draft of this report to VA for review and comment. VA provided written comments, which are reprinted in appendix IV, and technical comments, which we incorporated as appropriate. VA’s comments note that the agency generally reports obligations and that the agency is unable to confirm some of our financial data. However, the data provided by VA during the course of this engagement were regarding expenditures, and thus we report them as such. VA’s comments also provide information on additional efforts to expand mental health telehealth and ways the agency recruits providers in rural areas. We are sending copies of this report to the appropriate congressional committees, the Secretary of Veterans Affairs, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or at deniganmacauleym@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V.
Appendix I: Map of 140 Veterans Health Administration Health Care Systems, Fiscal Year 2018

Figure 4, an interactive graphic, shows the location and rurality of the Veterans Health Administration’s health care systems, as well as information on veterans treated by these health care systems. For an accessible version of the data used in this map, see https://www.gao.gov/products/GAO-20-35.

Appendix II: Site Selection Methodology and Selected Health Care System Characteristics

To describe any differences between veterans’ use of substance use disorder (SUD) services in rural and urban areas and the issues affecting access to those services in rural areas, we selected six Veterans Health Administration (VHA) health care systems and interviewed officials regarding their SUD services and issues serving veterans with SUDs. Because opioid use disorders may pose a greater risk to veterans than the general population, we selected the six VHA health care systems from among those with the highest percentages of veterans with an opioid use disorder diagnosis in fiscal year 2018. We also selected these six health care systems to achieve variation in representation among VHA’s five geographic regions and to include both urban and rural locations. See table 6.

Appendix III: Veterans Health Administration Substance Use Disorder Residential Rehabilitation Treatment Programs

The Veterans Health Administration had 67 residential rehabilitation treatment programs dedicated to substance use disorder treatment in fiscal year 2018. See table 7.

Appendix IV: Comments from the Department of Veterans Affairs

Appendix V: GAO Contact and Staff Acknowledgments

GAO Contact

Mary Denigan-Macauley, (202) 512-7114 or deniganmacauleym@gao.gov.

Staff Acknowledgments

In addition to the contact named above, Lori Achman, Assistant Director; Hannah Marston Minter and Carolina Morgan, Analysts-in-Charge; Sam Amrhein; Amy Andresen; Shaunessye D. Curry; and John Tamariz made key contributions to this report. Also contributing were Giselle Hicks, Diona Martyn, Ethiene Salgado-Rodriguez, and Emily Wilson Schwark.

Related GAO Products

Veterans Health Care: Opportunities Remain to Improve Appointment Scheduling within VA and through Community Care, GAO-19-687T. Washington, D.C.: July 24, 2019.

VA Health Care: Estimating Resources Needed to Provide Community Care, GAO-19-478. Washington, D.C.: June 12, 2019.

Drug Policy: Assessing Treatment Expansion Efforts and Drug Control Strategies and Programs, GAO-19-535T. Washington, D.C.: May 9, 2019.

Priority Open Recommendations: Department of Veterans Affairs, GAO-19-358SP. Washington, D.C.: March 28, 2019.

Behavioral Health: Research on Health Care Costs of Untreated Conditions is Limited, GAO-19-274. Washington, D.C.: February 28, 2019.

Veterans Choice Program: Improvements Needed to Address Access-Related Challenges as VA Plans Consolidation of its Community Care Programs, GAO-18-281. Washington, D.C.: June 4, 2018.

VA Health Care: Progress Made Towards Improving Opioid Safety, but Further Efforts to Assess Progress and Reduce Risk Are Needed, GAO-18-380. Washington, D.C.: May 29, 2018.

Opioid Use Disorders: HHS Needs Measures to Assess the Effectiveness of Efforts to Expand Access to Medication-Assisted Treatment, GAO-18-44. Washington, D.C.: October 31, 2017.

Opioid Addiction: Laws, Regulations, and Other Factors Can Affect Medication-Assisted Treatment Access, GAO-16-833. Washington, D.C.: September 27, 2016.

VA Health Care: Reliability of Reported Outpatient Medical Appointment Wait Times and Scheduling Oversight Need Improvement, GAO-13-130. Washington, D.C.: December 21, 2012.
Why GAO Did This Study

Substance use and illicit drug use are a growing problem in the United States. SUDs occur when the recurrent use of alcohol or drugs causes significant impairment, such as health problems. The veteran population has been particularly at risk. Veterans are 1.5 times more likely to die from opioid overdose than the general population, according to VA and Centers for Disease Control and Prevention data. Furthermore, veterans live in rural areas at a higher rate than the general population, which may affect their ability to access SUD services. VA is the largest integrated health care system in the United States, providing care to about 6.2 million veterans. VA provides SUD services through outpatient, inpatient, and residential care settings and offers various treatment options, including individual and group therapy, medication-assisted treatment, and naloxone kits to reverse overdoses. Senate Report 115-130 included a provision for GAO to study VA's capabilities to treat veterans with SUDs. This report describes (1) trends in the number of and expenditures for veterans receiving SUD services, including specialty SUD services; and (2) any differences between veterans' use of SUD services in rural and urban areas, and the issues affecting access to those services in rural areas. GAO reviewed VA policies and data from fiscal years 2014 through 2018. GAO also interviewed officials from six VA health care systems, selected for their high percentage of veterans with an opioid use disorder and to achieve variation in geography and locations VA has designated as urban and rural. VA provided technical comments, which GAO incorporated as appropriate.

What GAO Found

The Department of Veterans Affairs (VA) treated 518,570 veterans diagnosed with a substance use disorder (SUD) in fiscal year 2018, a 9.5 percent increase since fiscal year 2016.
Of these, 152,482 veterans received specialty SUD services in fiscal year 2018, a number that has remained relatively unchanged since fiscal year 2014. Specialty SUD services are those provided through a clinic or program dedicated to SUD treatment. Expenditures for VA's specialty SUD services increased from about $552 million in fiscal year 2014 to more than $600 million in fiscal year 2018. In the same year, VA expended about $80 million to purchase SUD services from non-VA community providers for more than 20,000 veterans, an increase since fiscal year 2014. The number receiving this care from non-VA providers may include veterans who also received services in VA facilities.

Note: Specialty SUD services are those provided through a clinic or program dedicated to substance use disorder treatment. SUD services include services provided by any type of provider.

VA data show that overall there was little difference in the percentage of veterans using SUD services, including specialty services, in rural and urban areas in fiscal year 2018. However, there were differences for some specific services. For example, in rural areas, 27 percent of veterans with an opioid use disorder received medication-assisted treatment—an approach that combines behavioral therapy and the use of medications—compared to 34 percent in urban areas. In providing SUD services in rural areas, VA faces issues similar to those faced by the general population, including lack of transportation. The agency is taking steps to address these issues, such as using local service organizations to transport veterans for treatment.
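As a rough illustration of the figures above, the implied fiscal year 2016 count and the expenditure growth can be derived from the rounded numbers in the text. This is a back-of-the-envelope sketch, not official VA data; the inputs are rounded, so the outputs are approximations:

```python
# Back-of-the-envelope checks derived from the rounded figures cited in the text.
treated_fy2018 = 518_570       # veterans with a SUD diagnosis treated in FY2018
increase_since_fy2016 = 0.095  # "a 9.5 percent increase since fiscal year 2016"

# Implied FY2016 count, assuming the 9.5 percent figure is exact.
implied_fy2016 = treated_fy2018 / (1 + increase_since_fy2016)
print(f"implied FY2016 count: {implied_fy2016:,.0f}")  # ~473,580

# Specialty SUD expenditures, $ millions ("about $552 million" to "more than $600 million").
spec_fy2014, spec_fy2018 = 552, 600
growth = spec_fy2018 / spec_fy2014 - 1
print(f"specialty expenditure growth, FY2014-FY2018: at least {growth:.0%}")  # ~9%
```

Because both inputs are rounded in the source, the derived values should be read as magnitudes rather than exact counts.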
GAO-20-638T
Longstanding Problems in VA Acquisition Management and Medical Supply Management Posed Additional Challenges in VA’s COVID-19 Response

The issues VA experienced during the height of the COVID-19 pandemic were a result of global supply chain challenges, but longstanding problems that our work has previously identified posed additional challenges to VA’s response. In November 2017, we reported weaknesses in VA’s implementation of its MSPV-NG program—VA’s primary means for purchasing medical supplies. These included the lack of an effective medical supply procurement strategy, clinician involvement, and reliable data systems. We also found that several of VA’s medical supply management practices were not in line with those employed by private sector leading hospital networks. We recommended, among other things, that VA develop, document, and communicate to stakeholders an overarching strategy for the program. This strategy, originally planned for completion by December 2017, was delayed to March 2019, and then further delayed due to VA’s implementation of its new MSPV 2.0 program, which is also delayed. We also found that VA’s initial formulary consisted of around 6,000 items at launch, and, according to senior VA contracting officials, many items on the formulary were not those needed by medical centers. These factors resulted in an initial formulary that did not meet the needs of VA’s medical centers (VAMC). The MSPV-NG program office subsequently took steps to expand the formulary, growing it to over 22,000 items, and is developing the next iteration of the program, called MSPV 2.0. MSPV 2.0 is intended to address some of the shortfalls we previously identified in MSPV-NG, including more than doubling the number of items on the formulary, to a planned 49,000. VA’s MSPV 2.0 prime vendor procurement has been subject to multiple bid protests.
After three protests challenged the terms of the solicitation, VA responded by voluntarily taking corrective action and revising the solicitation. The terms of the revised solicitation were challenged in a subsequent protest that was sustained, resulting in VA further revising the solicitation to address the matter. Because of these events, agency officials told us that VA has altered its MSPV 2.0 procurement plans several times and that program implementation has been significantly delayed, from the originally planned March 2020 date to as late as February 2021. Based on preliminary observations of our ongoing work, some of the current MSPV-NG challenges persist and may not be remedied by MSPV 2.0. Specifically, medical center staff we interviewed from May 2019 through October 2019 cited continued problems with consistently receiving the supplies they order through MSPV-NG, such as backorders on frequently ordered items. For example, preceding the COVID-19 pandemic, supply chain problems with one of VA’s prime vendors created supply shortages for infection control gowns, and staff at one VAMC we visited in June 2019 had to obtain gowns from the medical center’s emergency cache as a temporary measure. Further, VA’s plans for MSPV 2.0 give no indication that VA will update its practice of manually maintaining the formulary using spreadsheets, which, based on our discussions with several VAMC logistics officers, can lead to errors such as the inadvertent omission of items from the formulary. We plan to issue a report on our review of the MSPV 2.0 program in fall 2020.
VA’s Antiquated Inventory Management System Limited VA Management’s Ability to Oversee Real-Time Supply Data at Its 170 Medical Centers

According to senior VA procurement and logistics officials interviewed during our ongoing review of VA’s COVID-19 procurement for critical medical supplies, VA experienced difficulty obtaining several types of supplies needed to protect its front-line workforce during the COVID-19 response, ranging from N95 masks to isolation gowns. According to senior VA acquisition and logistics officials, beginning in late February to early March 2020, VA requested that medical centers provide daily updates via spreadsheets to obtain the most current information possible on the levels of PPE on hand, usage, and gaps. These spreadsheets, reported manually each day by each VAMC, were the primary means by which Veterans Health Administration (VHA) leadership obtained detailed, real-time information on the stock of critical supplies at its VAMCs. Prior to the COVID-19 pandemic, VHA leadership did not have this insight in any ongoing or systematic way. In April 2020, VA developed an automated tool to manage this reporting process, but, according to officials, the information must still be gathered and manually reported by each of the 170 VAMCs on a daily basis. In May 2019, the VA Inspector General found that proper inventory monitoring and management was lacking at many VAMCs, noting that inventory management practices ranged from inaccurate to nonexistent. In 2013, we also reported on weaknesses in VA’s inventory management systems and made recommendations to VA to evaluate its efforts to improve in this area. However, our preliminary observations from our ongoing review of VA’s MSPV program indicate that VA will likely rely on its antiquated system for the foreseeable future.
Specifically, VA plans to transition to the Defense Logistics Agency’s (DLA) inventory management system, called Defense Medical Logistics Standard Support (DMLSS). DMLSS serves as DLA’s primary MSPV ordering system and supports DLA’s inventory management, among other things. According to DLA officials, DMLSS produces data that VAMCs could use to analyze their order history and find recommendations for future purchases. VA’s implementation schedule shows that it will take seven years to roll out DMLSS and its successor at all VAMCs. In the near term, VA had planned to implement DMLSS at three medical centers in mid-to-late 2019. However, due to technology integration issues between VA’s financial system and the DMLSS system, implementation at these three VAMCs is delayed. According to the Chief Supply Chain Officer at one of these VAMCs, the DMLSS implementation date has changed several times from the initial start date of August 2019 and may be delayed to at least October 2020. VA uses a “just in time” inventory supply model—a practice employed by many hospital networks where only limited stock is maintained on-site. However, for this model to succeed, VA needs both visibility into current stock and consistent deliveries from the MSPV-NG program. Based on our preliminary observations, VA faces challenges with both visibility and delivery. VA acquisition leadership has recognized the shortcomings in its medical supply chain management and has identified supply chain modernization as a priority. As part of our ongoing review of VA’s MSPV program, we reviewed VHA’s Modernization Campaign Plan, dated March 2019, and VHA’s Modernization Plan briefing slides, dated February 2020, which describe several modernization initiatives including MSPV 2.0 and DMLSS. VHA’s February 2020 update on its modernization effort identified both its DMLSS deployment and its MSPV 2.0 program as at critical risk of not meeting system modernization milestones.
VA’s COVID-19 Emergency Procurement Included Various VA Contracting Organizations and Mechanisms

Based on our preliminary observations from our ongoing review of VA’s procurement of critical medical supplies, in response to COVID-19, VA is using various existing and new contracting organizations and mechanisms to try to meet its PPE needs. These include using national and regional contracting offices to procure supplies and services, and using existing contract vehicles and new sources. In response to the pandemic, VA’s Office of Acquisition and Logistics also issued a memorandum on March 15, 2020, to implement emergency flexibilities available under the Federal Acquisition Regulation, such as increasing the micro-purchase threshold to $20,000. Our analysis of contracting activity in the Federal Procurement Data System-Next Generation (FPDS-NG) indicates that VHA’s Network Contracting Offices—which support the various regions of VA’s hospital network—increased their supply purchases, mostly by entering into new contracts. Department-wide contracting organizations that would normally not make individual supply purchases—such as VHA’s Program Contracting Activity Central and VA’s Strategic Acquisition Center—also played a substantial role. In addition, logistics staff at VAMCs continued to use the MSPV-NG program to order supplies. VA had existing clauses in MSPV-NG contracts that established terms for the suppliers to maintain support to VA in the event of a catastrophe. But, according to senior VA acquisition officials, because those suppliers faced the same shortages in the broader market, they were not able to provide enough supplies to meet VA’s surging demand. Figure 1 shows the COVID-19-related contract obligations, from March 13, 2020 through June 3, 2020, made by the various VA contracting offices. These obligations include both supplies, such as PPE, and services, such as information technology systems to support telemedicine.
Our analysis of preliminary data on orders placed directly by VAMC staff for COVID-19-related items found that, in April 2020, the value of VA’s reported COVID-19-related purchases through the MSPV-NG program began to decrease relative to the values reported in prior months. According to senior VA acquisition and logistics officials, in part because MSPV-NG and other existing VA supply contracts and agreements did not meet VA’s needs, its acquisition workforce had to make purchases through other contracting mechanisms, such as micro-purchases using government purchase cards, to fill the gap. Between March 13, 2020, and June 3, 2020, VA obligated more than 51 percent ($687 million) of the $1.3 billion it spent on products and services for the COVID-19 response through purchases made outside the MSPV-NG program and other established VA contracting mechanisms. About 27 percent of this $1.3 billion ($364 million) was for veteran-owned small business set-aside purchases under VA’s Veterans First program.

VA Collaborated with the Federal Emergency Management Agency (FEMA) in Response to COVID-19

On April 17, 2020, VA placed its first supply requests through the Federal Emergency Management Agency’s (FEMA) Strategic National Stockpile program, according to VA senior acquisition and logistics officials. As of June 5, 2020, according to information provided by VA, it had received shipments of several different types of supplies through FEMA from these requests, as shown in Table 1. According to VA senior procurement and logistics officials, VA’s Emergency Management Center has an existing relationship with FEMA. However, these senior procurement and logistics officials noted that VA support services officials—who had primary responsibility for requesting medical items through FEMA—did not have an existing relationship with FEMA or a process in place prior to the COVID-19 pandemic for placing medical supply requests through FEMA.
Officials said that this led to a brief, initial delay in processing VA’s first request. In summary, VA experienced many of the same challenges obtaining medical supplies as most private sector hospitals and other entities in responding to this devastating pandemic. This situation put stress on an already overburdened acquisition and logistics workforce—resulting in staff initially scrambling to address supply chain shortfalls while simultaneously working with VA’s antiquated inventory system, through manual, daily reports on PPE levels to VA leadership. While VA has made progress in addressing some of the issues that have led us to identify VA acquisition management as high risk, it will take many years for VA to put in place a modern supply chain management system that would position it to provide the most efficient and effective service to our nation’s veterans. Chairman Moran, Ranking Member Tester, and Members of the Committee, this concludes my prepared statement. I would be pleased to respond to any questions that you may have at this time.

GAO Contacts and Staff Acknowledgments

If you or your staff have any questions about this testimony, please contact Shelby S. Oakley at 202-512-4841 or OakleyS@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Lisa Gardner, Assistant Director; Teague Lyons, Assistant Director; Daniel Singleton, Analyst-in-Charge; Jeff Hartnett; Nicolaus Heun; Kelsey M. Carpenter; Sara Younes; Matthew T. Crosby; Suellen Foth; Lorraine Ettaro; Rose Brister; Susan Ditto; Roxanna Sun; Carrie Rogers; and Helena Johnson. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO.
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study

VA spends hundreds of millions of dollars annually to meet the health care needs of about 9 million veterans. In March 2019, GAO added VA Acquisition Management to its High Risk list due to longstanding problems such as ineffective purchasing of medical supplies and lack of reliable data systems. This statement summarizes findings from GAO's 2017 MSPV-NG report and 2019 High Risk report and preliminary observations from two ongoing GAO performance audits to discuss VA's progress in building a more resilient supply chain. For the ongoing work, GAO reviewed VA documentation and interviewed VA officials and VA medical center staff. Finally, GAO met with senior VA officials on June 5, 2020, to obtain agency views on the new observations GAO discusses in this statement.

What GAO Found

The Department of Veterans Affairs (VA) has taken some steps in recent years to modernize its processes to acquire hundreds of millions of dollars’ worth of medical supplies annually. However, implementation delays for key initiatives, including a new, enterprise-wide inventory management system, limit VA's ability to have an agile, responsive supply chain. Prior to the Coronavirus Disease 2019 (COVID-19) pandemic, in November 2017 and in GAO's High-Risk report in March 2019, GAO reported on weaknesses in VA's acquisition management. For example, GAO reported that VA's implementation of its Medical-Surgical Prime Vendor-Next Generation (MSPV-NG) program—VA's primary means for purchasing medical supplies—lacked an effective medical supply procurement strategy, clinician involvement, and reliable data systems. GAO also found that several of VA's medical supply management practices were not in line with those employed by private sector leading hospital networks.
VA is developing another iteration of its MSPV program, called MSPV 2.0, which GAO's preliminary observations show is intended to address some of the shortfalls GAO has identified in its past and ongoing program reviews. In November 2017, GAO recommended that VA develop, document and communicate an overarching MSPV-NG strategy—to include how the program office will prioritize categories of supplies and increase clinician involvement in this process. Preliminary observations from GAO's ongoing work indicate that VA has taken some steps, as it implements MSPV 2.0, to address this priority recommendation. However, GAO's preliminary observations also indicate that the MSPV 2.0 program implementation is delayed and some of these existing program challenges may not be remedied. Based on preliminary observations from GAO's ongoing work, VA's implementation of a new supply and inventory management system is delayed. As a result, VA had to rely on an antiquated inventory management system, and initial, manual spreadsheets to oversee the stock of critical medical supplies at its medical centers. This limited the ability of VA management to have real-time information on its pandemic response supplies, ranging from N95 face masks to isolation gowns, to make key decisions. As of April 2020, VA has an automated tool to manage its reporting process, but the information must be gathered and manually reported by each of VA's 170 medical centers on a daily basis. GAO's preliminary observations also show that in response to COVID-19, VA is using various contracting organizations and mechanisms to meet its critical medical supply needs. These include using national and regional contracting offices to obtain supplies from existing contract vehicles, new contracts and agreements, and the Federal Emergency Management Administration's Strategic National Stockpile to respond to the pandemic. 
What GAO Recommends

GAO has made 40 recommendations since 2015 to improve acquisition management at the VA. VA agreed with those recommendations and has implemented 22 of them. Further actions are needed to implement the remaining recommendations, such as GAO's recommendation that VA implement an overarching MSPV strategy, and demonstrate progress toward removing this area from GAO's High-Risk list.
GAO-20-474
Background

Designating Federal Programs as High Risk

Since the early 1990s, our high-risk program has focused attention on government operations with greater vulnerabilities to fraud, waste, abuse, and mismanagement, or that are in need of transformation to address economy, efficiency, or effectiveness challenges. To determine which federal government programs and functions should be designated high risk, we use our guidance document, Determining Performance and Accountability Challenges and High Risks. We consider qualitative factors, such as whether the risk (1) involves public health or safety, service delivery, national security, national defense, economic growth, or privacy or citizens’ rights; or (2) could result in significantly impaired service, program failure, injury or loss of life, or significantly reduced economy, efficiency, or effectiveness. We also consider the exposure to loss in monetary or other quantitative terms. At a minimum, $1 billion must be at risk, in areas such as the value of major assets being impaired; revenue sources not being realized; major agency assets being lost, stolen, damaged, wasted, or underutilized; potential for, or evidence of improper payments; and presence of contingencies or potential liabilities. Before making a high-risk designation, we also consider corrective measures that are planned or under way to resolve a material control weakness and the status and effectiveness of these actions. We release a High-Risk Series report every two years at the start of each new Congress. Our biennial reports detail progress made on previously designated high-risk issues. We designate any new issue areas we identify as high risk, based on the above criteria, in these reports or in separate products outside of the two-year cycle.
We make out-of-cycle designations—as has been the case for seven other high-risk designations we have made—to highlight urgent issues, help ensure focused attention, and maximize the opportunity for the federal government to take action.

National Drug Control Program Agencies

The Office of National Drug Control Policy (ONDCP) was established by the Anti-Drug Abuse Act of 1988 as a component of the Executive Office of the President, and its Director is to assist the President in the establishment of the policies, goals, objectives, and priorities for the National Drug Control Program. In October 2018, the SUPPORT Act, among other things, reauthorized ONDCP and amended its authorities. ONDCP is responsible for (1) leading the national drug control effort, (2) coordinating and overseeing the implementation of national drug control policy, (3) assessing and certifying the adequacy of National Drug Control Programs and the budget for those programs, and (4) evaluating the effectiveness of national drug control policy efforts. As part of these efforts, ONDCP is to coordinate with more than a dozen federal agencies—known as National Drug Control Program agencies—that have responsibilities for activities including education and prevention, treatment, and law enforcement and drug interdiction (see fig. 1). Within these agencies, there may be components or offices that handle specific aspects of drug control. Some examples include SAMHSA and CDC within HHS, and the Drug Enforcement Administration (DEA) within the Department of Justice.

Rates of Drug Misuse and Drug Overdose Deaths Have Generally Increased in the United States

Rates of drug misuse and drug overdose deaths have generally increased in the United States. Nationally representative data show that this increase in the estimated rate of drug misuse has occurred across several demographic categories such as sex and education levels.
Nationally, the rate of drug overdose deaths decreased in 2018 after increasing almost every year since 2002. Drug overdose death rates vary by region and by different types of drugs.

Drug Misuse Has Increased in the United States and Affected People across a Range of Demographics

Drug misuse—the use of illicit drugs and the misuse of prescription drugs—has generally increased in the United States since 2002. According to SAMHSA, estimates of self-reported drug misuse among people aged 12 or older increased from 14.9 percent in 2002 to 16.7 percent in 2014, and then further increased from 17.8 percent in 2015 to 19.4 percent in 2018 (see fig. 2). The increase in estimated drug misuse from 2015 to 2018 by people aged 12 or older is evident across a broad range of demographic groups, including sex, race or ethnicity, military veterans, income and education levels, employment status, and geographic categories, with few exceptions (see figures 3 through 5). Additionally, while the estimated percentage of drug misuse within certain demographic groups increased in some years and decreased in others, in every year more than 10 percent of the people in every demographic group reported misusing drugs.

The National Rate of Drug Overdose Deaths Increased between 2002 and 2018

The rate of drug overdose deaths in the United States increased between 2002 and 2018 (see fig. 6). For context, in 2002, there were 23,518 drug overdose deaths, and in 2018, there were 67,367 drug overdose deaths, according to CDC data. Furthermore, the rate of drug overdose deaths increased more rapidly in recent years: the rate increased on average by 2 percent per year from 2006 through 2013 and by 14 percent per year from 2013 through 2016, before decreasing by 4.6 percent between 2017 and 2018.
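The period-over-period figures above are average annual (compound) rates of change. As a sketch of that calculation—applied here, for illustration, to the CDC death counts cited in the text, whereas the 2, 14, and 4.6 percent figures in the report apply the same idea to the death rate over sub-periods:

```python
def avg_annual_change(start_value, end_value, years):
    """Average annual (compound) rate of change between two endpoint values."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

# National drug overdose deaths, per the CDC data cited in the text.
deaths_2002 = 23_518
deaths_2018 = 67_367

growth = avg_annual_change(deaths_2002, deaths_2018, years=2018 - 2002)
print(f"Average annual change in deaths, 2002-2018: {growth:.1%}")  # about 6.8%
```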
Regional Rates of Drug Overdose Deaths Varied Across the Nation

Rates of drug overdose deaths varied in counties across the nation in 2003 and 2017, the most recent year that county-level data were available (see fig. 7). In 2017, there were some areas of the country with high rates of drug overdose deaths. For example, in 2017, 1,354 counties (43.2 percent of counties) had estimates of more than 20 drug overdose deaths per 100,000 people, including 448 counties with rates that were significantly higher than this amount.

Rates of Overdose Deaths Increased for Multiple Drug Types between 2002 and 2018

The rate of overdose deaths for different types of drugs increased between 2002 and 2018. Rates of drug overdose deaths involving synthetic opioids, natural and semi-synthetic opioids, methadone, heroin, cocaine, benzodiazepines, psychostimulants, and antidepressants generally increased between 2002 and 2018 (see fig. 8). It is important to note that drug overdose deaths may involve more than one drug, and the drugs most frequently involved in overdose deaths were often found in combination with each other. The most common drugs involved in overdose deaths vary in different parts of the United States, according to data for each of the 10 HHS public health regions (see fig. 9). Generally, in eastern regions, fentanyl was the most common drug involved in overdose deaths in 2017, the most recent year that data were available, whereas methamphetamine was the most common drug involved in overdose deaths in western regions. As previously discussed, many drug overdose deaths involve more than one drug.

Negative Effects of Drug Misuse Are Widespread and Cost Billions

Past GAO work, as well as other selected government and academic studies, has found that drug misuse results in high costs for society and the economy. Such costs vary and include health care costs, criminal justice costs, workplace productivity costs, education costs, human services costs, and mortality costs.
Figure 10 below includes examples of costs and other effects of drug misuse in these areas. These costs are borne by federal, state, and local governments; private businesses and nonprofit organizations; employers; families; and individuals who misuse drugs. While selected studies we reviewed provided estimates for some of the costs of drug misuse, one study also indicated it is difficult to precisely quantify these costs. For example, concepts such as the quality of life or the pain and suffering of family members are difficult to fully capture or quantify.

Challenges Impede National Efforts to Prevent, Respond to, and Recover from the Drug Crisis

Our recent work on the topic of drug misuse and its effects has highlighted challenges the federal government faces that impede national efforts to address the drug crisis. We categorized these challenges as related to sustained leadership and strengthened coordination; capacity to address the crisis; and measurement, evaluation, and demonstration of progress. In the course of our work on the topic of drug misuse, we have identified many actions that, if taken, could help to address challenges in each of these areas, and have made specific recommendations to federal agencies about these actions. While over 25 of these recommendations have been implemented by National Drug Control Program agencies since fiscal year 2015, over 60 of our recommendations to at least 10 federal agencies—including recommendations that have received our highest priority designation—have not yet been implemented as of February 2020. The information below describes our findings and how agencies’ inaction on our recommendations has contributed to the federal government’s lack of progress in addressing the drug crisis.

Sustained leadership and strengthened coordination.
Making progress in high-risk areas requires demonstrated, strong, and sustained commitment and coordination, which we have found to be a challenge facing the federal government’s drug control efforts. Our work has identified the need for ONDCP to improve its efforts to lead and coordinate the national effort to address drug misuse and for agency leaders to engage in more effective coordination across the government and with stakeholders. ONDCP has a responsibility to coordinate and oversee the implementation of the national drug control policy across the federal government, and the National Drug Control Program agencies also have important roles and responsibilities that involve reducing drug misuse and mitigating its effects. ONDCP’s responsibility to develop the National Drug Control Strategy offers the office an important opportunity to help prioritize, coordinate, and measure key efforts to address the drug crisis. Our work has shown that ONDCP can improve its efforts to develop a National Drug Control Strategy that meets statutory requirements and effectively coordinates national efforts to address drug misuse. In 2017 and 2018, ONDCP lacked a statutorily required National Drug Control Strategy, and we recently reported that the 2019 National Drug Control Strategy did not fully comply with the law. In December 2019, we recommended that ONDCP develop and document key planning elements to help ONDCP structure its ongoing efforts and to better position the agency to meet these requirements for future iterations of the National Drug Control Strategy. ONDCP subsequently issued the 2020 National Drug Control Strategy on February 3, 2020. We reviewed this Strategy and found that it made progress in addressing several statutory requirements. 
For example: The 2020 National Drug Control Strategy includes 17 annual quantifiable and measurable objectives and specific targets, such as reducing overdose deaths by 15 percent by 2022, whereas we found that the 2019 National Drug Control Strategy did not contain such annual targets. The 2020 Strategy also includes a description of how each of the Strategy’s long-range goals was determined, including required consultations and data used to inform the determination, and a list of anticipated challenges to achieving the Strategy’s goals, such as limitations in existing data systems that provide little insight into emerging patterns of drug misuse, and planned actions to address them. However, the 2020 Strategy fell short in meeting other requirements. For example, the 2020 Strategy does not include a list of each National Drug Control Program agency’s activities and the role of each activity in achieving the Strategy’s long-range goals, as required by law. The federal government invests billions of dollars each year in programs spanning over a dozen agencies, and therefore the development and implementation of a comprehensive Strategy is critical to guiding and ensuring the effectiveness of federal activities to address drug misuse. In December 2019, we recommended that ONDCP routinely implement an approach to meet the requirements for future Strategy iterations, and ONDCP agreed. ONDCP is uniquely situated to promote coordination across federal agencies. For example, the National Drug Control Strategy is required to include a description of how each of the Strategy’s long-range goals will be achieved, including a list of each existing or new coordinating mechanism to achieve each goal and a description of ONDCP’s role in facilitating achievement of each goal. The 2020 Strategy partially addressed these required elements.
By including these descriptions in future iterations of the Strategy and effectively implementing them, ONDCP has the potential to strengthen coordination and provide sustained leadership. ONDCP has previously used its unique position to help implement some of our recommendations aimed at improving coordination across federal agencies in their efforts to prevent and respond to drug misuse. For example, ONDCP implemented our recommendation to assess the extent of overlap and potential for duplication across federal programs engaged in drug abuse prevention and treatment activities and to identify opportunities for increased coordination, and developed performance metrics and reporting data regarding field-based coordination to prevent drug trafficking. We have also reported on the lack of available treatment programs for pregnant women and newborns with neonatal abstinence syndrome as well as gaps in research related to the treatment of prenatal opioid use. As of February 2020, ONDCP implemented our recommendation to document the process the agency uses to identify gaps and action items to track federal activities related to prenatal opioid use and neonatal abstinence syndrome. Sustaining and building on these coordination efforts will help maximize opportunities, leverage resources, and better position ONDCP to identify opportunities for increased efficiencies in preventing and treating drug misuse. National Drug Control Program agencies also have a responsibility to coordinate their efforts, and we have reported that gaps in agency coordination have hindered national drug control efforts. For example, the Department of Homeland Security (DHS), the U.S. Postal Service (USPS), and U.S. Customs and Border Protection (CBP) each have important roles in carrying out certain data-sharing and enforcement requirements of the Synthetics Trafficking and Overdose Prevention Act of 2018 (STOP Act).
The STOP Act required DHS, by October 24, 2019, to promulgate regulations detailing additional USPS responsibilities—beyond those included in the Act—related to sharing advance electronic data with CBP that can be used to identify shipments at high risk of transporting illegal drugs. However, as of November 2019, DHS had not drafted these regulations, and therefore USPS’s and CBP’s responsibilities for sharing advance electronic data—a key tool that could help stop the flow of illicit drugs into the United States—remain unclear. As we reported in December 2019, DHS does not have a plan for drafting these regulations, and therefore we recommended that DHS develop a timeline to do so; DHS agreed with this recommendation. It is also important for the federal government to coordinate among different levels of government and across issue areas, including with state, local, and tribal agencies, as well as with community groups and organizations in the private sector working to address the drug crisis. Our prior work has also found ways in which coordination between federal efforts to address drug misuse and those of local governments and other stakeholders could be more effective. In January 2018, we reported that states cited the need for additional guidance, training, and technical assistance from HHS to address the needs of infants born with prenatal drug exposure. HHS disagreed with our recommendation to provide such guidance regarding the safe care for substance-affected infants, and has not implemented the recommendation. HHS stated that it had already clarified guidance in this area and believed that states needed flexibility to meet the program requirements in the context of each state’s program. We found that states continued to report issues with the guidance, and that the clarifications did not address an ongoing challenge regarding the program requirements. We continue to believe our recommendation is warranted.
As of February 2020, HHS continues to disagree with us and with the states. Without adequate supports and services to ensure their safety, these vulnerable infants may be at risk for child abuse and neglect. In January 2020, we also recommended that DEA, in consultation with industry stakeholders—such as drug distributors—identify solutions to address the limitations of the ARCOS Enhanced Lookup Buyer Statistic Tool, to ensure industry stakeholders have the most useful information possible to assist them in identifying and reporting suspicious opioid orders to DEA. DEA agreed with our recommendation and is starting to assess how to address it. These limitations, including a lack of appropriately detailed data, may limit the usefulness of the tool in assisting distributors in determining whether an order is suspicious. In addition, we have previously reported in 2019 that coordination across private health plans, health-care prescribers, pharmacists, and at-risk beneficiaries could contribute to the success of Medicare drug monitoring programs, which are designed to identify beneficiaries at risk of opioid misuse. We also have ongoing work on how federal departments and agencies coordinate their drug prevention efforts in schools as well as on how effectively federal agencies coordinated their counter-drug activities with Mexico.

Capacity to address the crisis.

We have identified ongoing challenges related to the nation’s capacity to address the drug crisis. Sufficient capacity and efficient use of that capacity are key components for making progress in high-risk areas; they are necessary for federal, state, and local agencies to achieve strategic goals in addressing drug misuse, such as implementing the National Drug Control Strategy. In our work designating high-risk government programs and functions, we define capacity as having the people and resources sufficient to address the risk.
Our prior work has found that the nation faces insufficient capacity to successfully address persistent, troubling trends in drug misuse, including the lack of treatment options. In addition, the nation’s existing capacity may be plagued by inefficiencies and gaps in information about what resources are most effective in addressing drug misuse. These capacity challenges permeate every level of government and affect the nation’s key social services and health care programs. As a result, effectively addressing the drug crisis requires harnessing capacity across agencies within the federal government as well as coordinating with state and local governments and community-based nongovernment organizations. The availability of treatment for substance use disorders has not kept pace with needs, and the federal government has faced barriers to increasing treatment capacity. For example, we have reported on barriers to increasing access to evidence-based treatment for opioid use disorder, and federal efforts to address these barriers. Such barriers to treatment include a lack of Medicaid coverage for treatment medications in some states, delays that can be caused by the need for prior authorizations for some treatment medications, and the unwillingness of some health care providers to obtain the federal waiver required to prescribe some treatment medications. We have also reported that, according to officials at the Veterans Health Administration (VHA), many veterans lack access to residential substance use treatment programs because of high demand relative to capacity. Developing and maintaining sufficient capacity to address the drug crisis also requires that federal agencies use existing resources—such as data—effectively. For example, we have recently reported in January 2020 that DEA should be more proactive in using the data it already collects from DEA registrants to identify problematic drug transaction patterns. 
According to DEA officials, one analysis that they conduct on a quarterly basis uses a computer algorithm to compare large volumes of drugs purchased in a given geographic area to the area’s population data. However, DEA did not report conducting active, recurring algorithmic monitoring—in real time or near real time—to detect and flag transactions that indicate potential diversion, or more routine monitoring to identify questionable patterns in the data or unusual patterns of drug distribution. Such analyses could be used to proactively support or generate leads for investigations of potential drug diversion. Registrants already report data on controlled-substance transactions to the DEA. DEA could use these data to identify trends in distribution or purchases of drugs in a given geographic area. DEA could also look for and compare unusual patterns in drug order activity in different locations to identify potential issues that warrant further investigation. Further, DEA has not established a way to manage all of the data it collects and maintains. DEA agreed with three of our four recommendations to better manage and use the data it collects. DEA neither agreed nor disagreed with the fourth recommendation. However, DEA has not yet implemented any of the recommendations. By implementing these recommendations, DEA could ensure that important data assets are formally managed and fully utilized to inform investigations and prevent prescription opioids from being diverted and sold illegally. Overall, federal efforts to address the drug crisis could make better use of available data to assist in identifying emerging patterns of misuse, allowing the government to respond more quickly to evolving trends.
Beyond specific capacity challenges that we have identified, in December 2019 we reported on challenges federal agencies face in assessing the resources they will need to achieve the goals of the National Drug Control Strategy. ONDCP is required to issue drug control funding guidance to the heads of departments and agencies with responsibilities under the National Drug Control Program by July 1 of each year, and such funding guidance must address funding priorities developed in the National Drug Control Strategy. Because ONDCP did not issue a Strategy in 2017 or 2018, it could neither provide funding guidance to National Drug Control Program agencies based on the Strategy nor review and certify budget requests of these agencies to determine if they were adequate to meet the goals of the Strategy, as required by law. Without a National Drug Control Strategy in 2017 or 2018, ONDCP used other sources—such as policy priorities identified in the President’s Budget from fiscal year 2018—to identify drug policy priorities and develop funding guidance. ONDCP issued a National Drug Control Strategy in 2019 and 2020, but neither Strategy included a 5-year projection for program and budget priorities, as required by law. In December 2019, we recommended that ONDCP develop and document key planning elements to help ONDCP structure its ongoing efforts and to better position the agency to meet these requirements for future iterations of the National Drug Control Strategy. We also found that the 2020 National Drug Control Strategy does not include estimates of federal funding or other resources needed to achieve each of ONDCP’s long-range goals. The 2020 Strategy includes a plan to expand treatment of substance use disorders that identifies unmet needs for substance use disorder treatment and a strategy for closing the gap between available and needed treatment.
The plan also describes the roles and responsibilities of relevant National Drug Control Program agencies for implementing the plan. However, the plan does not identify resources required to enable National Drug Control Program agencies to implement the plan or resources required to eliminate the treatment gap, as required by law. The National Drug Control Strategy is important for assessing the nation’s capacity to address drug misuse through both the development of federal funding estimates and the certification of agency budget requests that aim to meet the goals of the Strategy. Additionally, we have ongoing work on the federal government’s capacity to address the drug crisis. For example, we are studying gaps in the capacity of the health care system to treat substance use disorders, and examining how grantees use funding from selected SAMHSA grant programs to increase access to substance use disorder treatment. We are also studying school-based drug prevention programs and the effects of drug misuse on the workforce. This work will examine challenges that states and local educational entities face in serving the needs of communities affected by the drug crisis. We also have planned work examining the effectiveness of federal funding to combat the ongoing opioid crisis.

Measurement, evaluation, and demonstration of progress.

The federal government faces challenges related to measuring, evaluating, and demonstrating progress towards addressing the crisis. We have reported that key data needed to measure and evaluate progress towards strategic goals are not reliable or are not collected and reported. We have also found that some agencies lack plans or metrics to measure the effectiveness of specific programs to address the drug crisis and to demonstrate that these programs are making progress towards stated national goals, including reducing drug overdose deaths and expanding access to addiction treatment.
Successfully addressing drug misuse requires ongoing measurement and evaluation of efforts towards stated goals and the ability to share and use performance information to make midcourse changes and corrections where needed. Regarding challenges related to data, we have identified gaps in the availability and reliability of data for measuring progress. ONDCP and other federal, state, and local government officials have identified challenges with the timeliness, accuracy, and accessibility of data from law enforcement and public health sources related to both fatal and non-fatal overdose cases. In March 2018, we recommended that ONDCP lead a review on ways to improve overdose data; ONDCP did not indicate whether it agreed with our recommendation. Additionally, in December 2019, we found that ONDCP’s Drug Control Data Dashboard did not include all of the data required by the SUPPORT Act, such as data sufficient to show the extent of the unmet need for substance use disorder treatment. We recommended that ONDCP establish the planning elements to ensure that these data were included in the Data Dashboard, and ONDCP disagreed with our recommendation. Having accessible and reliable data, including data on drug overdoses, will help ONDCP and other agencies better measure the scope and nature of the drug crisis. We also found in 2019 that the State Department cannot ensure the reliability of its program monitoring data for its Caribbean Basin Security Initiative, which seeks to reduce illicit drug trafficking. The State Department agreed with the recommendation to ensure the development and implementation of a data management system for centrally collecting reliable program monitoring data for all Caribbean Basin Security Initiative activities, but has not yet implemented it.
Without this action, there may be discrepancies in how Caribbean Basin Security Initiative program performance data are defined and collected, and the State Department cannot report comprehensively or accurately on the Initiative’s activities to reduce illicit drug trafficking or track data trends across countries. While ONDCP is responsible for evaluating the effectiveness of national drug control policy efforts across the government, we found that ONDCP has not developed performance evaluation plans for the goals in the 2020 National Drug Control Strategy. Some of the long-range goals listed in the 2020 Strategy include expanding access to evidence-based treatment, reducing the availability of illicit drugs in the United States, and decreasing the over-prescribing of opioid medications. However, the 2020 National Drug Control Strategy does not include performance evaluation plans to measure progress against each of the Strategy’s long-range goals, as required by law. These performance evaluation plans are required by statute to include (1) specific performance measures for each National Drug Control Program agency, (2) annual and—to the extent practicable—quarterly objectives and targets for each measure, and (3) an estimate of federal funding and other resources necessary to achieve each performance objective and target. Without effective long-term plans that clearly articulate goals and objectives and without specific measures to track performance, federal agencies cannot fully assess whether taxpayer dollars are invested in ways that will achieve desired outcomes such as reducing access to illicit drugs and expanding treatment for substance use disorders. Additionally, National Drug Control Program agencies are responsible for evaluating their progress toward achieving the goals of the National Drug Control Strategy, and in some cases have improved how they measure this progress.
For example, although the federal government continues to face barriers to increasing access to treatment for substance use disorders, HHS has recently implemented our recommendation to establish performance measures with targets to expand access to medication-assisted treatment (MAT) for opioid use disorders. As of March 2020, HHS has established such performance measures with targets to increase the number of prescriptions for MAT medications and to increase treatment capacity, as measured by the number of providers authorized to treat patients using MAT. Monitoring progress against these targets will help HHS determine whether its efforts to expand treatment are successful or whether new approaches are needed. We have also identified challenges regarding how federal agencies demonstrate the progress of specific programs toward addressing the drug crisis. We reported in 2018 on DEA’s 360 Strategy—which aims to coordinate DEA enforcement, diversion control, and demand reduction efforts—as well as on ONDCP’s Heroin Response Strategy under its High Intensity Drug Trafficking Areas program. We found that neither DEA’s 360 Strategy nor ONDCP’s Heroin Response Strategy included outcome-oriented performance measures for its activities and goals, respectively. DEA disagreed with and has not yet implemented our recommendation to establish these types of performance measures for its activities. ONDCP neither agreed nor disagreed with our recommendation to establish outcome-oriented performance measures for the goals of the Heroin Response Strategy, and has not yet implemented the recommendation. Without these measures, it is unclear the extent to which DEA or ONDCP can accurately and fully gauge their efforts and their overall effectiveness in combatting heroin and opioid use and reducing overdose deaths.
Additionally, we have found that DEA does not have outcome-oriented goals and performance targets for its use of data in opioid diversion activities, meaning DEA likely cannot adequately assess whether its investments and efforts are helping to limit the availability of, and better respond to, the opioid prescription diversion threat. DEA neither agreed nor disagreed with our recommendation to establish these outcome-oriented goals and related performance targets for its opioid diversion activities, and has not implemented this recommendation. We have also reported that the Department of State has not established performance indicators for its Caribbean Basin Security Initiative to facilitate performance evaluation across agencies, countries, and activities, inhibiting the assessment of the program’s progress to reduce illicit drug trafficking. The State Department agreed with our recommendation to develop and implement a data management system for centrally collecting reliable program monitoring data. The State Department has not yet implemented this recommendation. Without robust assessments of how specific programs help to achieve the goals of the National Drug Control Strategy, federal agencies may be unable to demonstrate progress in addressing the drug crisis, and may be unable to make any needed adjustments to their strategies.

Concluding Observations

Illicit drug use and the misuse of prescription drugs constitute a long-standing national problem that will continue to evolve. The terrible effects of drug misuse on families and communities have persisted over decades, despite ongoing federal, state, and local efforts. Federal agencies and Congress can and must work to ensure that available resources are coordinated effectively to mitigate and respond to the drug misuse crisis.
Maintaining sustained attention on preventing, responding to, and recovering from drug misuse will be challenging in the coming months as many of the federal agencies responsible for addressing drug misuse are currently focused on addressing the COVID-19 pandemic. However, the severe public health and economic effects of the pandemic could fuel some of the contributing factors of drug misuse, such as unemployment—highlighting the need to sustain drug misuse prevention, response, and recovery efforts. Addressing these challenges will require sustained leadership and strengthened coordination; the necessary capacity to address the crisis; and the systems to measure, evaluate, and demonstrate progress. The more than 60 related GAO recommendations that have yet to be implemented indicate where federal agencies may begin addressing these challenges. For example:

ONDCP should ensure future iterations of the National Drug Control Strategy include all statutorily required elements. Examples of statutorily required elements include a 5-year projection for the National Drug Control Program and budget priorities; a description of how each of the Strategy’s long-range goals will be achieved, including a list of each National Drug Control Program agency’s activities and the role of each activity in achieving these goals, and estimates of federal funding or other resources needed to achieve these goals; performance evaluation plans for each year covered by the Strategy for each long-range goal for each National Drug Control Program agency; and resources required to enable National Drug Control Program agencies to implement the plan to expand treatment of substance use disorders and eliminate the treatment gap.

ONDCP should take steps to ensure effective, sustained implementation of the 2020 National Drug Control Strategy and future strategies.

HHS should provide guidance to states for the safe care of infants born with prenatal drug exposure, who may be at risk for child abuse and neglect.

DEA should take steps to better analyze and use drug transaction data to identify suspicious opioid orders and prevent prescription opioids from being diverted and sold illegally.

The State Department should develop and implement a data management system for centrally collecting reliable program monitoring data for all Caribbean Basin Security Initiative activities, so that it can report on efforts to reduce illicit drug trafficking and track data trends across countries.

Through our ongoing and planned work, we will continue to review the effects of drug misuse, the federal response, and opportunities for improvement.

Agency Comments and Our Evaluation

We provided draft report excerpts regarding our analysis of the 2020 National Drug Control Strategy to the Office of National Drug Control Policy for review and comment. ONDCP officials stated that they plan to address the statutory requirements that we identified as missing in additional documents, including the Fiscal Year 2021 Budget and Performance Summary. We will review and assess any additional materials that ONDCP publishes in response to the requirements for the 2020 National Drug Control Strategy. Findings regarding other programs and activities are drawn from past GAO work and our follow-up work on our recommendations; the related content was previously provided to the respective agencies for review as part of the original work. We are sending copies of this report to the appropriate congressional committees, the Director of the Office of National Drug Control Policy, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Triana McNeil at (202) 512-8777 or McNeilT@gao.gov, Mary Denigan-Macauley at (202) 512-7114 or DeniganMacauleyM@gao.gov, or Jacqueline M. Nowicki at (617) 788-0580 or NowickiJ@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report.
GAO staff who made key contributions to this report are listed in appendix II.

Appendix I: List of Selected Related GAO Reports

Fiscal Year 2020 Reports

International Mail: Stakeholders' Views on Possible Changes to Inbound Mail Regarding Customs Fees and Opioid Detection Efforts. GAO-20-340R. Washington, D.C.: February 27, 2020.
Medicaid: States' Changes to Payment Rates for Substance Use Disorder Services. GAO-20-260. Washington, D.C.: January 30, 2020.
Drug Control: Actions Needed to Ensure Usefulness of Data on Suspicious Opioid Orders. GAO-20-118. Washington, D.C.: January 29, 2020.
Opioid Use Disorder: Barriers to Medicaid Beneficiaries' Access to Treatment Medications. GAO-20-233. Washington, D.C.: January 24, 2020.
Social Security Disability: Action Needed to Help Agency Staff Understand and Follow Policies Related to Prescription Opioid Misuse. GAO-20-120. Washington, D.C.: January 9, 2020.
Countering Illicit Finance and Trade: U.S. Efforts to Combat Trade-Based Money Laundering. GAO-20-314R. Washington, D.C.: December 30, 2019.
Drug Control: The Office of National Drug Control Policy Should Develop Key Planning Elements to Meet Statutory Requirements. GAO-20-124. Washington, D.C.: December 18, 2019.
International Mail: Progress Made in Using Electronic Data to Detect Illegal Opioid Shipments, but Additional Steps Remain. GAO-20-229R. Washington, D.C.: December 18, 2019.
Counternarcotics: Treasury Reports Some Results from Designating Drug Kingpins, but Should Improve Information on Agencies' Expenditures. GAO-20-112. Washington, D.C.: December 16, 2019.
Mental Health and Substance Use: State and Federal Oversight of Compliance with Parity Requirements Varies. GAO-20-150. Washington, D.C.: December 13, 2019.
Veterans Health Care: Services for Substance Use Disorders, and Efforts to Address Access Issues in Rural Areas. GAO-20-35. Washington, D.C.: December 2, 2019.
Coast Guard: Assessing Deployable Specialized Forces' Workforce Needs Could Improve Efficiency and Reduce Potential Overlap or Gaps in Capabilities. GAO-20-33. Washington, D.C.: November 21, 2019.
Substance Use Disorder: Prevalence of Recovery Homes, and Selected States' Investigations and Oversight. GAO-20-214T. Washington, D.C.: October 24, 2019.
Medicaid: Opioid Use Disorder Services for Pregnant and Postpartum Women, and Children. GAO-20-40. Washington, D.C.: October 24, 2019.

Fiscal Year 2019 Reports

U.S. Assistance to Central America: Department of State Should Establish a Comprehensive Plan to Assess Progress Toward Prosperity, Governance, and Security. GAO-19-590. Washington, D.C.: September 26, 2019.
Science & Tech Spotlight: Opioid Vaccines. GAO-19-706SP. Washington, D.C.: September 16, 2019.
U.S. Assistance to Mexico: State and USAID Allocated over $700 Million to Support Criminal Justice, Border Security, and Related Efforts from Fiscal Year 2014 through 2018. GAO-19-647. Washington, D.C.: September 10, 2019.
Prescription Opioids: Patient Options for Safe and Effective Disposal of Unused Opioids. GAO-19-650. Washington, D.C.: September 3, 2019.
Land Ports of Entry: CBP Should Update Policies and Enhance Analysis of Inspections. GAO-19-658. Washington, D.C.: August 6, 2019.
Drug Control: Certain DOD and DHS Joint Task Forces Should Enhance Their Performance Measures to Better Assess Counterdrug Activities. GAO-19-441. Washington, D.C.: July 9, 2019.
VA Mental Health: VHA Improved Certain Prescribing Practices, but Needs to Strengthen Treatment Plan Oversight. GAO-19-465. Washington, D.C.: June 17, 2019.
Health Centers: Trends in Revenue and Grants Supported by the Community Health Center Fund. GAO-19-496. Washington, D.C.: May 30, 2019.
Prescription Opioids: Voluntary Medicare Drug Management Programs to Control Misuse. GAO-19-446. Washington, D.C.: May 20, 2019.
Drug Policy: Assessing Treatment Expansion Efforts and Drug Control Strategies and Programs. GAO-19-535T. Washington, D.C.: May 9, 2019.
Drug Policy: Preliminary Observations on the 2019 National Drug Control Strategy. GAO-19-370T. Washington, D.C.: March 7, 2019.
Behavioral Health: Research on Health Care Costs of Untreated Conditions is Limited. GAO-19-274. Washington, D.C.: February 28, 2019.
Security Assistance: U.S. Agencies Should Establish a Mechanism to Assess Caribbean Basin Security Initiative Progress. GAO-19-201. Washington, D.C.: February 27, 2019.
Drug Control: DOD Should Improve Its Oversight of the National Guard Counterdrug Program. GAO-19-27. Washington, D.C.: January 17, 2019.
Colombia: U.S. Counternarcotics Assistance Achieved Some Positive Results but State Needs to Review the Overall U.S. Approach. GAO-19-106. Washington, D.C.: December 12, 2018.
Illegal Marijuana: Opportunities Exist to Improve Oversight of State and Local Eradication Efforts. GAO-19-9. Washington, D.C.: November 14, 2018.

Fiscal Year 2018 Reports

Opioid Crisis: Status of Public Health Emergency Authorities. GAO-18-685R. Washington, D.C.: September 26, 2018.
Adolescent and Young Adult Substance Use: Federal Grants for Prevention, Treatment, and Recovery Services and for Research. GAO-18-606. Washington, D.C.: September 4, 2018.
Foster Care: Additional Actions Could Help HHS Better Support States' Use of Private Providers to Recruit and Retain Foster Families. GAO-18-376. Washington, D.C.: May 30, 2018.
VA Health Care: Progress Made Towards Improving Opioid Safety, but Further Efforts to Assess Progress and Reduce Risk Are Needed. GAO-18-380. Washington, D.C.: May 29, 2018.
Prescription Opioids: Medicare Needs Better Information to Reduce the Risk of Harm to Beneficiaries. GAO-18-585T. Washington, D.C.: May 29, 2018.
Illicit Opioids: Office of National Drug Control Policy and Other Agencies Need to Better Assess Strategic Efforts. GAO-18-569T. Washington, D.C.: May 17, 2018.
Substance Use Disorder: Information on Recovery Housing Prevalence, Selected States' Oversight, and Funding. GAO-18-315. Washington, D.C.: March 22, 2018.
Illicit Opioids: While Greater Attention Given to Combating Synthetic Opioids, Agencies Need to Better Assess their Efforts. GAO-18-205. Washington, D.C.: March 29, 2018.
Substance-Affected Infants: Additional Guidance Would Help States Better Implement Protections for Children. GAO-18-196. Washington, D.C.: January 19, 2018.
Prescription Opioids: Medicare Should Expand Oversight Efforts to Reduce the Risk of Harm. GAO-18-336T. Washington, D.C.: January 17, 2018.
Preventing Drug Abuse: Low Participation by Pharmacies and Other Entities as Voluntary Collectors of Unused Prescription Drugs. GAO-18-25. Washington, D.C.: October 12, 2017.
Border Patrol: Issues Related to Agent Deployment Strategy and Immigration Checkpoints. GAO-18-50. Washington, D.C.: November 8, 2017.
Prescription Opioids: Medicare Needs to Expand Oversight Efforts to Reduce the Risk of Harm. GAO-18-15. Washington, D.C.: October 6, 2017.
Opioid Use Disorders: HHS Needs Measures to Assess the Effectiveness of Efforts to Expand Access to Medication-Assisted Treatment. GAO-18-44. Washington, D.C.: October 31, 2017.
Counternarcotics: Overview of U.S. Efforts in the Western Hemisphere. GAO-18-10. Washington, D.C.: October 13, 2017.
Newborn Health: Federal Action Needed to Address Neonatal Abstinence Syndrome. GAO-18-32. Washington, D.C.: October 4, 2017.

Fiscal Year 2017 Reports

Anti-Money Laundering: U.S. Efforts to Combat Narcotics-Related Money Laundering in the Western Hemisphere. GAO-17-684. Washington, D.C.: August 22, 2017.
Nonviolent Drug Convictions: Stakeholders' Views on Potential Actions to Address Collateral Consequences. GAO-17-691. Washington, D.C.: September 7, 2017.
Medicaid: States Fund Services for Adults in Institutions for Mental Disease Using a Variety of Strategies. GAO-17-652. Washington, D.C.: August 9, 2017.
International Mail Security: Costs and Benefits of Using Electronic Data to Screen Mail Need to Be Assessed. GAO-17-606. Washington, D.C.: August 2, 2017.
Drug Control Policy: Information on Status of Federal Efforts and Key Issues for Preventing Illicit Drug Use. GAO-17-766T. Washington, D.C.: July 26, 2017.
Medicaid Expansion: Behavioral Health Treatment Use in Selected States in 2014. GAO-17-529. Washington, D.C.: June 22, 2017.
Border Security: Additional Actions Could Strengthen DHS Efforts to Address Subterranean, Aerial, and Maritime Smuggling. GAO-17-474. Washington, D.C.: May 1, 2017.
VA Health Care: Actions Needed to Ensure Medical Facilities' Controlled Substance Programs Meet Requirements. GAO-17-442T. Washington, D.C.: February 27, 2017.
VA Health Care: Actions Needed to Ensure Medical Facility Controlled Substance Inspection Programs Meet Agency Requirements. GAO-17-242. Washington, D.C.: February 15, 2017.
Drug Free Communities Support Program: Agencies Have Strengthened Collaboration but Could Enhance Grantee Compliance and Performance Monitoring. GAO-17-120. Washington, D.C.: February 7, 2017.
Highlights of a Forum: Preventing Illicit Drug Use. GAO-17-146SP. Washington, D.C.: November 14, 2016.

Fiscal Year 2016 Reports

Opioid Addiction: Laws, Regulations, and Other Factors Can Affect Medication-Assisted Treatment Access. GAO-16-833. Washington, D.C.: September 27, 2016.
Drug Enforcement Administration: Additional Actions Needed to Address Prior GAO Recommendations. GAO-16-737T. Washington, D.C.: June 22, 2016.
Controlled Substances: DEA Should Take Additional Actions to Reduce Risks in Monitoring the Continued Eligibility of Its Registrants. GAO-16-310. Washington, D.C.: May 26, 2016.
Office of National Drug Control Policy: Progress toward Some National Drug Control Strategy Goals, but None Have Been Fully Achieved. GAO-16-660T. Washington, D.C.: May 17, 2016.
Veterans Justice Outreach Program: VA Could Improve Management by Establishing Performance Measures and Fully Assessing Risks. GAO-16-393. Washington, D.C.: April 28, 2016.
State Marijuana Legalization: DOJ Should Document Its Approach to Monitoring the Effects of Legalization. GAO-16-419T. Washington, D.C.: April 5, 2016.
DOD and VA Health Care: Actions Needed to Help Ensure Appropriate Medication Continuation and Prescribing Practices. GAO-16-158. Washington, D.C.: January 5, 2016.
State Marijuana Legalization: DOJ Should Document Its Approach to Monitoring the Effects of Legalization. GAO-16-1. Washington, D.C.: December 30, 2015.
Office of National Drug Control Policy: Lack of Progress on Achieving National Strategy Goals. GAO-16-257T. Washington, D.C.: December 2, 2015.
Drug Control: Additional Performance Information Is Needed to Oversee the National Guard's State Counterdrug Program. GAO-16-133. Washington, D.C.: October 21, 2015.

Fiscal Year 2015 Reports

Medicaid: Additional Reporting May Help CMS Oversee Prescription-Drug Fraud Controls. GAO-15-390. Washington, D.C.: July 8, 2015.
Prescription Drugs: More DEA Information about Registrants' Controlled Substances Roles Could Improve Their Understanding and Help Ensure Access. GAO-15-471. Washington, D.C.: June 25, 2015.
Behavioral Health: Options for Low-Income Adults to Receive Treatment in Selected States. GAO-15-449. Washington, D.C.: June 19, 2015.
Drug-Impaired Driving: Additional Support Needed for Public Awareness Initiatives. GAO-15-293. Washington, D.C.: February 24, 2015.
Prenatal Drug Use and Newborn Health: Federal Efforts Need Better Planning and Coordination. GAO-15-203. Washington, D.C.: February 10, 2015.
Medicare Program Integrity: CMS Pursues Many Practices to Address Prescription Drug Fraud, Waste, and Abuse. GAO-15-66. Washington, D.C.: October 24, 2014.
Reports from Fiscal Years 1972-2014

Office of National Drug Control Policy: Office Could Better Identify Opportunities to Increase Program Coordination. GAO-13-333. Washington, D.C.: March 26, 2013.
Drug Control: Initial Review of the National Strategy and Drug Abuse Prevention and Treatment Programs. GAO-12-744R. Washington, D.C.: July 6, 2012.
Prescription Pain Reliever Abuse: Agencies Have Begun Coordinating Education Efforts, but Need to Assess Effectiveness. GAO-12-115. Washington, D.C.: December 22, 2011.
Adult Drug Courts: Studies Show Courts Reduce Recidivism, but DOJ Could Enhance Future Performance Measure Revision Efforts. GAO-12-53. Washington, D.C.: December 9, 2011.
Drug Control: U.S. Assistance Has Helped Mexican Counternarcotics Efforts, but Tons of Illicit Drugs Continue to Flow into the United States. GAO-07-1018. Washington, D.C.: August 17, 2007.
Adult Drug Courts: Evidence Indicates Recidivism Reductions and Mixed Results for Other Outcomes. GAO-05-219. Washington, D.C.: February 28, 2005.
Prescription Drugs: OxyContin Abuse and Diversion and Efforts to Address the Problem. GAO-04-110. Washington, D.C.: December 19, 2003.
Drug Courts: Better DOJ Data Collection and Evaluation Efforts Needed to Measure Impact of Drug Court Programs. GAO-02-434. Washington, D.C.: April 18, 2002.
Drug Abuse: Efforts under Way to Determine Treatment Outcomes. T-HEHS-00-60. Washington, D.C.: February 17, 2000.
Emerging Drug Problems: Despite Changes in Detection and Response Capability, Concerns Remain. HEHS-98-130. Washington, D.C.: July 20, 1998.
Drug Courts: Overview of Growth, Characteristics, and Results. GGD-97-106. Washington, D.C.: July 31, 1997.
Drug Control: Reauthorization of the Office of National Drug Control Policy. T-GGD-97-97. Washington, D.C.: May 1, 1997.
Confronting the Drug Problem: Debate Persists on Enforcement and Alternative Approaches. GGD-93-82. Washington, D.C.: July 1, 1993.
War on Drugs: Federal Assistance to State and Local Drug Enforcement. GGD-93-86. Washington, D.C.: April 29, 1993.
Drug Control: Coordination of Intelligence Activities. GGD-93-83BR. Washington, D.C.: April 2, 1993.
Drug Abuse Prevention: Federal Efforts to Identify Exemplary Programs Need Stronger Design. PEMD-91-15. Washington, D.C.: August 22, 1991.
VA Health Care: Inadequate Controls over Addictive Drugs. HRD-91-101. Washington, D.C.: June 6, 1991.
The War on Drugs: Arrests Burdening Local Criminal Justice Systems. GGD-91-40. Washington, D.C.: April 3, 1991.
Drug Treatment: Targeting Aid to States Using Urban Population as Indicator of Drug Use. HRD-91-17. Washington, D.C.: November 27, 1990.
Controlling Drug Abuse: A Status Report. GGD-88-39. Washington, D.C.: March 1, 1988.
Drug Abuse Prevention: Further Efforts Needed To Identify Programs That Work. HRD-88-26. Washington, D.C.: December 4, 1987.
Comprehensive Approach Needed To Help Control Prescription Drug Abuse. GGD-83-2. Washington, D.C.: October 29, 1982.
Action Needed To Improve Management and Effectiveness of Drug Abuse Treatment. HRD-80-32. Washington, D.C.: April 14, 1980.
Identifying and Eliminating Sources of Dangerous Drugs: Efforts Being Made, but Not Enough. B-175425. Washington, D.C.: June 7, 1974.
United States Efforts to Increase International Cooperation in Controlling Narcotics Trafficking. B-176625. Washington, D.C.: October 4, 1972.
Efforts to Prevent Dangerous Drugs from Illicitly Reaching the Public. B-175425. Washington, D.C.: April 17, 1972.

Appendix II: GAO Contacts and Staff Acknowledgments

GAO Contacts

Triana McNeil at (202) 512-8777 or McNeilT@gao.gov; Mary Denigan-Macauley at (202) 512-7114 or DeniganMacauleyM@gao.gov; or Jacqueline M. Nowicki at (617) 788-0580 or NowickiJ@gao.gov.
Staff Acknowledgments

In addition to the contacts named above, Alana Finley (Assistant Director), Bill Keller (Assistant Director), Will Simerl (Assistant Director), Meghan Squires (Analyst-in-Charge), James Bennett, Ben Bolitzer, Breanne Cave, Billy Commons, Holly Dye, Wendy Dye, Brian Egger, Kaitlin Farquharson, Sally Gilley, Sarah Gilliland, Mara McMillen, Amanda Miller, Sean Miskell, Jan Montgomery, Dae Park, Bill Reinsberg, Emily Wilson Schwark, Herbie Tinsley, and Sirin Yaemsiri made key contributions to this report. Key contributors to the prior work discussed in this report are listed in each respective product.
Why GAO Did This Study

Drug misuse—the use of illicit drugs and the misuse of prescription drugs—has been a persistent and long-standing public health issue in the United States. Ongoing drug control efforts seek to address drug misuse through education and prevention, addiction treatment, and law enforcement and drug interdiction, as well as programs that serve populations affected by drug misuse. These efforts involve federal, state, local, and tribal governments as well as community groups and the private sector. In recent years, the federal government has spent billions of dollars and has enlisted more than a dozen agencies to address drug misuse and its effects.

This report provides information on (1) trends in drug misuse, (2) costs and other effects of drug misuse on society and the economy, and (3) challenges the nation faces in addressing the drug crisis. GAO analyzed nationally representative federal data on drug misuse and deaths from overdoses for 2002–2018 (the most recent available); reviewed selected empirical studies published from 2014–2019; and compared GAO's High-Risk List criteria to findings and recommendations in over 75 GAO reports issued from fiscal year 2015 through March 2020.

What GAO Found

Nationally, since 2002, rates of drug misuse have increased, according to GAO's analysis of federal data. In 2018, the Substance Abuse and Mental Health Services Administration reported that an estimated 19 percent of the U.S. population (over 53 million people) misused or abused drugs, an increase from an estimated 14.7 percent in 2003. People across a broad range of demographic groups—including sex, race or ethnicity, education levels, employment status, and geographic categories—reported misusing drugs (see figure below). The rates of drug overdose deaths have also generally increased nationally since the early 2000s.
Over 716,000 people have died of a drug overdose since 2002, and in 2018 alone, over 67,000 people died as a result of a drug overdose, according to the Centers for Disease Control and Prevention. Although the number of drug overdose deaths in 2018 decreased compared to 2017, drug misuse in the United States continued to rise. Rates of drug overdose deaths varied in counties across the nation in 2003 and 2017, the most recent year that county-level data were available (see figure below). In 2017, 43.2 percent of counties had estimates of more than 20 drug overdose deaths per 100,000 people, including 448 counties with rates that were significantly higher than this amount.

Note: CDC's National Center for Health Statistics used a statistical model to estimate rates of drug overdose deaths to account for counties where data were sparse because of small population size.

GAO work and other government and academic studies have found that the negative health and societal effects of drug misuse are widespread and costly—for example, the increased need for health care, human services, and special education; increased crime; childhood trauma; reduced workforce productivity; and loss of life. The federal government is making progress in some areas, but a strategic, coordinated, and effective national response—with key sustained leadership from federal agencies—is needed. This report identifies opportunities to strengthen the federal government's efforts to address this persistent and increasing problem. These opportunities include addressing challenges in providing sustained leadership and strengthened coordination; the necessary capacity to address the crisis; and systems to measure, evaluate, and demonstrate progress. For example:

- The Office of National Drug Control Policy should ensure future iterations of the National Drug Control Strategy include all statutorily required elements. Examples of statutorily required elements include a 5-year projection for the National Drug Control Program and budget priorities; a description of how each of the Strategy's long-range goals will be achieved, including estimates of needed federal resources; and performance evaluation plans for these goals, among other requirements.
- The Office of National Drug Control Policy should ensure effective, sustained implementation of the 2020 Strategy and future strategies.
- The Department of Health and Human Services should provide guidance to states for the safe care of infants born with prenatal drug exposure, who may be at risk for child abuse and neglect.
- The Drug Enforcement Administration should take steps to better analyze and use drug transaction data to prevent diversion of prescription opioids to be sold illegally.
- The State Department should develop and implement a data management system for all Caribbean Basin Security Initiative activities to reduce illicit drug trafficking or track data trends across countries.

In GAO's March 2019 High-Risk report, GAO named drug misuse as an emerging issue requiring close attention. Based on 25 GAO products issued since that time and this update, GAO has determined that this issue is high risk. Moreover, the severe public health and economic effects of the Coronavirus Disease 2019 (COVID-19) pandemic could fuel some of the contributing factors of drug misuse, such as unemployment—highlighting the need to sustain and build upon ongoing efforts. However, maintaining sustained attention on preventing, responding to, and recovering from drug misuse will be challenging in the coming months, as many of the federal agencies responsible for addressing drug misuse are focused on addressing the pandemic. Therefore, GAO will include this issue in the 2021 High-Risk Series update and make the high-risk designation effective at that time.
What GAO Recommends

Since fiscal year 2015, GAO has made over 80 recommendations to multiple agencies responsible for addressing the drug crisis; over 60 of these recommendations have yet to be implemented.
Background

Noncitizens in the Military

In most cases, a noncitizen must be an LPR to enlist in the U.S. Armed Forces. Special provisions of the INA authorize the naturalization of current and recently discharged service members. Qualifying military service includes active or reserve service in the U.S. Army, Navy, Marine Corps, Air Force, or Coast Guard, or service in a National Guard unit. A person who has served honorably in the U.S. Armed Forces for 1 year during peacetime may be eligible to apply for naturalization. In addition, during designated periods of hostilities, such as World War I and World War II and the current global war on terrorism, members of the U.S. Armed Forces who serve honorably in an active duty status, or as members of the Selected Reserve of the Ready Reserve, are eligible to apply for naturalization without meeting any minimum required period of service. DOD determines if a service member meets the qualifying service requirement by certifying Form N-426, Request for Certification of Military or Naval Service, or by issuing Form DD-214, Certificate of Release or Discharge from Active Duty; Form NGB-22, National Guard Report of Separation and Record of Service; or an equivalent discharge document. The information provided in those forms determines whether the service member completed all requirements for honorable service, including whether the service member served honorably and, if he or she has separated from service, whether his or her separation was under honorable conditions. In order to naturalize, a member of the U.S. Armed Forces must also meet the requirements and statutory qualifications to become a citizen. Specifically, he or she must demonstrate good moral character and have sufficient knowledge of the English language, U.S. government, and history. Additionally, an applicant must show attachment to the principles of the Constitution and favorable disposition toward the good order and happiness of the United States.
However, qualified members of the U.S. Armed Forces are exempt from other naturalization requirements, including application fees and requirements for continuous residence and physical presence in the United States. DOD also has authority to expand military recruiting to certain nonimmigrants and other lawfully present aliens. Beginning in December 2008, the Military Accessions Vital to the National Interest (MAVNI) program allowed certain U.S. nonimmigrant visa holders, asylees, refugees, and individuals with Temporary Protected Status to enlist in the military if they possessed medical, language, and other types of skills deemed vital for military operations. DOD ended the MAVNI program in fiscal year 2016, citing counterintelligence concerns. Between 2008 and 2016, 10,400 individuals enlisted in the U.S. military through the MAVNI program, according to DOD data.

Immigration Enforcement

DHS is responsible for arresting, detaining, litigating charges of removability against, and removing foreign nationals who are suspected and determined to be in the United States in violation of U.S. immigration laws. Trial attorneys from ICE's OPLA represent the U.S. government as civil prosecutors in immigration court removal proceedings. ICE's ERO is responsible for arresting and detaining potentially removable foreign nationals pending the outcome of their immigration court cases and removing individuals subject to an immigration judge's final order of removal. ICE's HSI is responsible for investigating a range of domestic and international activities arising from the illegal movement of people and goods into, within, and out of the United States. Individuals may be subject to removal for a wide variety of reasons, including entering the United States illegally, staying longer than their authorized period of admission, being convicted of certain crimes, or engaging in terrorist activity. LPRs are foreign nationals under U.S.
immigration law and therefore may be subject to immigration enforcement and removal from the United States for reasons such as controlled substance violations or conviction of an aggravated felony. Both HSI agents and ERO officers may encounter potentially removable individuals and are to decide whether to issue them a charging document, known as an NTA, ordering the individual to appear before an immigration judge to respond to removal charges. If the judge finds that the respondent is removable and not otherwise eligible for relief, the judge will issue an order of removal, subjecting the respondent to removal by ERO once the order is administratively final.

VA Benefits and Services

The VA is responsible for administering benefits and services, such as health care and disability compensation, to veterans in the United States and abroad, including veterans who have been removed from the United States. VA pays monthly disability compensation to veterans for disabilities caused or aggravated by military service, known as service-connected disabilities. Veterans with service-connected disabilities may also be eligible for other VA benefits and services, such as job training. VA staff in regional offices process disability compensation claims. After a veteran submits a disability claim to VA, a VA Veterans Service Representative reviews the claim and assists the veteran with gathering relevant evidence, such as military service records, medical examinations, and treatment records from VA medical facilities and private providers. If necessary to substantiate the claim, VA will provide a medical examination, known as a Compensation and Pension (C&P) exam, to obtain evidence of the veteran's disabilities and their connection to military service. Within the United States, medical providers who work for the Veterans Health Administration often conduct these exams. VA also contracts with private firms to perform these exams.
Outside the United States, VA contracts with private firms to perform exams in 33 countries. In countries where VA contractors do not perform exams, VA coordinates with State staff at embassies and consulates to schedule exams with private providers. Once VA receives the claim evidence, a Rating Veterans Service Representative evaluates the claim, determines whether the veteran is eligible for benefits, and, if so, assigns a percentage rating. After a rating is assigned, VA provides VSO staff assisting a veteran with a claim up to 48 hours to review the claim decision prior to finalizing the decision. A Veterans Service Representative then determines the amount of the award, if any, and drafts a decision notice. A senior Veterans Service Representative then reviews and authorizes the award for release to the veteran. See figure 1 for details on the 5 phases of VA's disability compensation claims process. From fiscal years 2013 through 2018, VA received over 8.9 million disability compensation claims from over 3.9 million veterans and awarded over $20.2 billion in benefits, according to VA data.

ICE Does Not Consistently Adhere to Its Policies for Handling Cases of Potentially Removable Veterans and Does Not Consistently Identify and Track Such Veterans

ICE Has Developed Policies for Handling Cases of Potentially Removable Veterans, but Does Not Consistently Adhere to Those Policies

ICE has developed policies that govern the handling of cases involving potentially removable veterans. When HSI agents and ERO officers learn that they have encountered a veteran, these policies require that they conduct additional assessments, create additional documentation, and obtain management approval in order to proceed with the case. Specifically, in June 2004, ICE's Acting Director of Investigations issued a memo giving the HSI Special Agent in Charge (SAC) in each field office the authority to approve issuance of an NTA in cases involving current service members or veterans.
Similarly, in September 2004, ICE's Acting Director of Detention and Removal Operations issued a memo giving the ERO Field Office Director (FOD) in each field office the authority to approve issuance of an NTA in cases involving current service members or veterans. In order to issue an NTA to a veteran, the SAC and FOD must consider, at a minimum, the veteran's overall criminal history, evidence of rehabilitation, family and financial ties to the United States, employment history, health, and community service. The SAC and FOD must also consider factors related to the veteran's military service, such as duty status (active or reserve), assignment to a war zone, number of years in service, and decorations awarded. To authorize issuance of the NTA, the SAC and FOD are to complete a memo to include in the veteran's alien file and update ICE's EARM database with a brief overview of the facts considered. Additionally, in November 2015, ICE's Director issued a policy establishing ICE's procedures for investigating the potential U.S. citizenship of individuals encountered by ICE. The policy states that prior military service is one of several indicators that an individual could be a U.S. citizen. Therefore, before issuing an NTA to a veteran or anyone with an indicator of potential U.S. citizenship, the ICE component that first encounters the individual (either HSI or ERO) is to conduct a factual examination, legal analysis, and a check of all available DHS systems, such as USCIS's Person-Centric Query Service, to assess whether the individual is a U.S. citizen. ERO or HSI (whichever conducted the factual examination) and OPLA's Office of Chief Counsel must jointly prepare a memorandum that assesses the individual's citizenship status and recommends a course of action, then submit that memorandum for review and approval by ICE headquarters. The policy also requires that a copy of the memorandum be placed in the individual's alien file.
Our analysis of removed veterans’ alien files found that ICE does not consistently follow these policies. Specifically, ICE policies require agents and officers to document the decision to issue a NTA to a veteran, but do not require agents and officers to identify and document veteran status when interviewing potentially removable individuals. Our analysis found that ICE did not satisfy the 2004 requirement for FOD approval in 18 of 87 (21 percent) cases that OPLA’s check box indicated involved veterans who were placed into removal proceedings and ERO data indicated had been removed from fiscal years 2013 through 2018. Our analysis also found that ICE did not meet the requirements of the 2015 policy requiring elevation to headquarters in 26 of the 37 cases (70 percent) for which the policy applied. Further, in December 2018, HSI officials told us that HSI has not been adhering to either the 2004 or the 2015 policies because they were unaware of the policies prior to our review. HSI officials stated that they do not distinguish between veterans and nonveterans when conducting administrative or criminal investigations or when deciding whether to issue a NTA. ERO officials stated that the absence of documentation in the alien file does not necessarily indicate that officers did not adhere to the policies; however, as noted above, the policies specifically require ICE to add documentation to the alien file. Because ICE did not consistently follow these policies, some veterans who were removed may not have received the level of review and approval that ICE has determined is appropriate for cases involving veterans. Taking action to ensure consistent implementation of its policies for handling cases of potentially removable veterans, such as issuing guidance or providing training, would help ICE better ensure that potentially removable veterans receive appropriate levels of review and consideration prior to the initiation of removal proceedings. 
ICE Has Not Developed a Policy to Identify and Document All Military Veterans It Encounters ICE has not developed a policy to identify and document all military veterans it encounters. According to ERO officials, when ERO officers encounter an individual, they interview that individual and complete the Form I-213, “Record of Deportable/Inadmissible Alien,” which documents information on, among other things, the individual’s country of citizenship and most recent employer. Officials stated that ERO officers would generally learn about the individual’s veteran status during that interview. However, ICE does not have a policy requiring agents and officers to specifically ask about and document veteran status. According to ERO officials, ERO does not need such a policy because ERO’s training for new officers, the Basic Immigration Enforcement Training Program, instructs officers to ask about veteran status when interviewing potentially removable aliens. The Basic Immigration Enforcement Training Program includes one lesson plan and one practice exercise stating that the I-213 “Record of Deportable/Inadmissible Alien” should include information on military service, as applicable. The lesson plan also includes a list of mandatory questions that ERO officers must ask in every encounter with an alien; however, that list of mandatory questions does not include any questions about military service. Further, the I-213 “Record of Deportable/Inadmissible Alien” does not have a specific field to indicate veteran status, and ERO’s cover sheet that supervisors use to review the legal sufficiency of NTAs does not contain information about veteran status. For cases processed by HSI, HSI officials stated that agents would generally learn about the individual’s veteran status through the initial interview or through background checks or other information obtained in the course of an HSI investigation. 
However, during the course of our review, HSI officials stated that there was no policy requiring agents to ask about or document veteran status because, as discussed above, HSI does not handle veterans’ cases differently from other cases. Without mechanisms in place to identify and document veterans, ICE is not positioned to determine whether or not individuals it encounters are potentially veterans and for which individuals the 2004 and 2015 policies discussed above for handling cases of potentially removable veterans should be applied. Standards for Internal Control in the Federal Government state that management should design control activities—that is, the policies, procedures, techniques, and mechanisms that enforce management’s directives to achieve the entity’s objectives. ICE officials told us that the 2004 and 2015 policies are intended to provide guidance and direction to ICE agents and officers for handling cases of potentially removable veterans. ICE officials believe that these policies could be updated with additional guidance to agents and officers to ask about and document veteran status during interviews of potentially removable individuals. Without developing and implementing a new policy or revising its 2004 and 2015 policies to require agents and officers to ask about and document veteran status, ICE has no way of knowing whether it has identified all of the veterans it has encountered and, therefore, does not have reasonable assurance that it is consistently implementing its policies and procedures for handling veterans’ cases. ICE Does Not Maintain Complete Electronic Data on Veterans Who Have Been Placed in Removal Proceedings or Removed Because ICE has not developed a policy to identify and document all military veterans it encounters, ICE does not maintain complete electronic data on veterans who have been placed in removal proceedings or removed. 
In the instances in which ICE agents and officers learn that they have encountered a veteran, none of the three ICE components that encounter veterans—ERO, OPLA, and HSI—maintains complete electronic data on the veterans they identify. ERO does not have a specific field for tracking veterans in its database, EARM. According to ERO officials, ERO officers can note veteran status on the Form I-213, “Record of Deportable/Inadmissible Alien,” but ERO does not have the ability to electronically search those notes to identify all of the veterans it has encountered. ERO officials stated that they do not maintain data on veteran status because they do not specifically target veterans for enforcement operations. OPLA has a check box tracking veteran status in its database, PLAnet, but the field is not mandatory. PLAnet also includes a case notes section, where an OPLA attorney may choose to document veteran status information. OPLA officials stated that the reliability of the veteran status box and case notes depends on the diligence of the attorney inputting the case information into PLAnet. HSI officials stated that they do not track veteran status at all because, as discussed above, veteran status does not affect their handling of cases. Our analysis of removed veterans’ alien files identified limitations with the only electronic data on veteran status ICE maintains—OPLA’s check box in the PLAnet database. Specifically, though OPLA’s check box indicated that all 87 of the aliens whose files we reviewed were veterans, we found that 8 of the 87 individuals (9 percent) did not serve in the U.S. Armed Forces, according to the information in their alien files. After reviewing these cases, OPLA officials stated that the individuals were incorrectly designated as veterans due to human error. 
OPLA officials stated that OPLA does not require attorneys to systematically track veteran status information in PLAnet because the database is not intended to be a data repository, but rather serves as a case management system for OPLA attorneys. OPLA officials stated that the official record of the alien’s case is the paper alien file. Because ICE does not maintain complete electronic data on potentially removable veterans it encounters, ICE does not know exactly how many veterans have been placed in removal proceedings or removed, or if their cases have been handled according to ICE’s policies. Standards for Internal Control in the Federal Government state that management uses quality information to make informed decisions and evaluate the entity’s performance in achieving key objectives and addressing risks. Quality information is appropriate, current, complete, accurate, accessible, and provided on a timely basis. While tracking veteran status in the paper alien file may allow ICE to review whether a specific individual is a veteran, it does not provide the type of complete and accessible electronic data that would allow the agency to systematically evaluate its performance in adhering to its policies. Maintaining complete electronic data on veterans it encounters would assist ICE in determining the extent to which the agency has adhered to its policies for handling cases involving potentially removable veterans. For example, ICE could obtain quality information through a mandatory field, such as a check box to track veteran status. Available Data Indicate that Approximately 250 Veterans Were Placed in Removal Proceedings or Removed from the United States from Fiscal Years 2013 through 2018 Based on the limited information available in OPLA’s PLAnet database, approximately 250 veterans were placed in removal proceedings or removed from the United States from fiscal years 2013 through 2018. 
As noted above, ICE does not maintain complete electronic data on veterans it encounters. While OPLA’s PLAnet includes some data on veterans who have been placed in removal proceedings, because the entry of veteran status data in PLAnet is not mandatory, there could be additional veterans who were placed in removal proceedings or removed during the timeframe of our review who were not noted in PLAnet or included in our analysis, as discussed below. We reviewed the data that were included in PLAnet on veterans who were placed in removal proceedings from fiscal years 2013 through 2018 and identified approximately 250 military veterans. This includes those individuals for whom the check box indicating veteran status was checked in PLAnet but, as noted above, does not represent complete data on all possible veterans placed in removal proceedings during the time period we reviewed. Among the approximately 250 individuals who were noted in PLAnet as veterans in removal proceedings, the most common countries of nationality were Mexico (about 40), Jamaica (about 30), El Salvador (about 10), Trinidad and Tobago (about 10), Germany (about 10), and Guatemala (about 10). At the end of fiscal year 2018, about 115 had been ordered removed, about 25 had been granted relief or protection from removal by an immigration judge, and about 5 had their cases administratively closed. The remainder of the cases were still open as of November 2018. From fiscal year 2013 through 2018, ERO had removed 92 of the approximately 250 military veterans from the United States, of which 90 were foreign nationals with one or more criminal convictions, according to ERO data. Nine of the removed veterans had service-connected disabilities recognized by VA, including four removed veterans who had service-connected post-traumatic stress disorder. 
Based on our review of the alien files of 87 of the individuals that OPLA’s check box indicated were veterans and ERO indicated had been removed, we identified the following characteristics:

Discharge characterization: 26 veterans (30 percent) received an honorable discharge; 26 (30 percent) received a general discharge under honorable conditions; 13 (15 percent) received an other than honorable discharge; 8 (9 percent) received an uncharacterized discharge; 3 (3 percent) received a bad conduct discharge; 2 (2 percent) received a dishonorable discharge; 8 (9 percent) had no evidence of military service in their alien file; and 1 (1 percent) did not have a discharge characterization listed in the alien file.

Immigration status: 74 veterans (85 percent) were LPRs; 6 (7 percent) were citizens of the Marshall Islands, the Federated States of Micronesia, and Palau who enlisted under the Compact of Free Association; 6 (7 percent) did not have evidence of lawful status; and 1 (1 percent) was a recipient of Deferred Action for Childhood Arrivals.

Naturalization history: 26 veterans (30 percent) had previously applied for naturalization with USCIS, 3 of whom submitted multiple applications. Seventeen of those naturalization applications were denied by USCIS, 9 were administratively closed, and 2 were withdrawn.

Basis for removal: 68 veterans (78 percent) were ordered removed because of at least one aggravated felony conviction, while the remaining 19 (22 percent) were ordered removed for non-aggravated felony convictions. Of the convictions ICE cited on the 87 veterans’ NTAs: 32 veterans had drug-related convictions; 20 had convictions related to sexual abuse, of which 18 involved minors; 21 had convictions related to homicide, assault, or attempted homicides or assaults; 16 had theft-related convictions; and 9 had convictions related to firearms, explosives, or explosive material. 
USCIS and DOD Have Policies Facilitating the Naturalization of Noncitizen Service Members and Veterans; the Number of Service Members Applying for Naturalization Has Decreased USCIS and DOD Have Policies Facilitating the Naturalization of Noncitizen Service Members and Veterans USCIS and DOD have policies facilitating the naturalization of noncitizen service members and veterans, and both agencies provide informational resources to noncitizen service members seeking naturalization. USCIS facilitates the application and naturalization process for current and recently discharged members of the U.S. Armed Forces through a dedicated Military Naturalization Unit, which processes military naturalization applications and assists field officers with administrative naturalization tasks overseas, among other things. USCIS interviews and naturalizes active-duty service members abroad at certain U.S. embassies, consulates, and military installations. To provide information to noncitizen service members and veterans, USCIS maintains a toll-free “Military Help Line” and an e-mail box exclusively for members of the military and their families and publishes an “Immigration 101” presentation for relevant stakeholders, including DOD personnel on military bases. In addition, USCIS provides DOD with a checklist of required documents for military naturalization applications and communication guidelines for naturalization application inquiries, according to USCIS officials. DOD determines whether a service member meets the qualifying service requirement for naturalization by certifying whether the service member has served “honorably,” and if he or she has separated from service, whether their separation was under honorable conditions. Additionally, according to DOD officials, every military installation generally designates a naturalization advisor within its Legal Services Office. 
The advisor, among other things, assists service members with preparation of their naturalization application packets and serves as an intermediary with USCIS staff. For example, at many Army installations, the Army Community Services Office typically performs this function. The Number of Noncitizen Service Members Applying for Naturalization Declined by 72 Percent from Fiscal Years 2017 to 2018 Although USCIS approved military naturalization applications at a fairly consistent rate from fiscal years 2013 through 2018, the number of applications received declined sharply from fiscal years 2017 to 2018, resulting in a decrease in the number of service members approved for naturalization in fiscal year 2018. From fiscal years 2013 through 2018, USCIS received 54,617 military naturalization applications; USCIS approved 46,835 (86 percent) and denied 3,410 (6 percent). Applicants’ most common countries of nationality were the Philippines (6,267 or 11 percent), Mexico (5,760 or 11 percent), Jamaica (3,510 or 6 percent), China (3,213 or 6 percent), and the Republic of Korea (2,982 or 5 percent). While the number of military naturalization applications was relatively stable between fiscal years 2013 and 2017, applications declined by 72 percent from fiscal year 2017 to fiscal year 2018, from 11,812 in fiscal year 2017 to 3,291 in fiscal year 2018, as shown in figure 2. As a result of this decline in applications, the number of service members approved for naturalization also declined, from 7,303 in fiscal year 2017 to 4,309 in fiscal year 2018. USCIS and DOD officials attributed the decline in military naturalization applications to several DOD policy changes. First, DOD suspended the MAVNI program in September 2016, which reduced the number of noncitizens joining the military. According to DOD officials, due to counterintelligence concerns, DOD suspended the program at the end of fiscal year 2016 and decided not to renew the program in fiscal year 2017. 
Second, in October 2017, DOD issued policies expanding background check requirements for LPR and MAVNI recruits. The policies specify that LPRs must complete a background check and receive a favorable military service suitability determination prior to entering any component of the U.S. Armed Forces. According to DOD officials, due to backlogs in the background check process, these new recruits were delayed in beginning their service, and officials stated that it may take DOD up to a year to complete enhanced requirements for certain recruits. DOD officials stated that they believe background check backlogs will decrease by the end of fiscal year 2019 and, as a result, the number of noncitizen service members eligible to apply for naturalization will increase. Third, in October 2017, DOD increased the amount of time noncitizens must serve before DOD will certify their honorable service for naturalization purposes. Under the new policy, noncitizens must complete security screening, basic military training, and serve 180 days for a characterization of service determination. Previously, DOD granted that determination in as little as a few days of service. USCIS made several changes to its military naturalization processes in response to or in tandem with DOD’s policy changes. First, in July 2017, USCIS determined that the completion of DOD background checks was relevant to MAVNI recruits’ eligibility for naturalization. USCIS thus began requiring currently-serving MAVNI recruits seeking military naturalization to complete all required DOD background checks before USCIS interviewed them, approved their applications, or administered the Oath of Allegiance to naturalize them. Second, in January 2018, USCIS ended its initiative to naturalize new enlistees at basic training sites. 
This initiative, known as the “Naturalization at Basic Training Initiative”, began in August 2009 as an effort to conduct outreach to new enlistees at the Army’s five basic training sites and provide noncitizen enlistees an opportunity to naturalize prior to completion of basic training. Because of DOD’s October 2017 policy change increasing the amount of time noncitizens must serve before they are eligible for a characterization of service determination, noncitizen service members no longer meet the requirements for naturalization while they are completing basic training. As a result, USCIS closed naturalization offices in Fort Sill, Fort Benning, and Fort Jackson. USCIS’s processing time for military naturalizations also increased, from an average of 5.4 months in fiscal year 2017 to 12.5 months in fiscal year 2018, according to USCIS data. USCIS officials attributed this increase to the backlog in DOD background checks for MAVNI recruits, as well as an increased volume of naturalization applications from non-military applicants. Removal Alone Does Not Affect Eligibility for VA Benefits and Services, but Living Abroad Affects Eligibility and Access to Certain Benefits and Services Removal Alone Does Not Affect Eligibility for VA Benefits and Services; Veterans Living Abroad are Eligible for Fewer Benefits and Services than Those Living In the United States Citizenship status, including immigration enforcement or removal history, does not affect a veteran’s eligibility for VA benefits and services, according to VA officials. As a result, veterans who have been removed by ICE are entitled to the same VA benefits and services as any other veteran living abroad. Although being removed for violation of immigration law does not in and of itself affect eligibility for VA benefits and services, living abroad affects eligibility for certain benefits and services, as shown in table 1. 
These differences pertain to all veterans living abroad, including both veterans who have been removed by ICE and veterans who choose to reside abroad. Removed veterans may face additional obstacles in receiving certain benefits for which they are otherwise eligible because they may be barred from traveling to the United States. For example, a removed veteran may not be able to attend a hearing to appeal a VA disability rating decision because VA conducts those hearings exclusively in the United States. Additionally, a removed veteran may not be able to obtain certain Vocational Rehabilitation and Employment services if the veteran is unable to travel to the United States for medical referrals and case management. Veterans Living Abroad Face Challenges Accessing Certain Benefits and Services Veterans living abroad, including removed veterans, may experience challenges accessing certain benefits and services, including slower disability claim processing and Foreign Medical Program (FMP) claim reimbursement, difficulties related to the scheduling and quality of C&P exams, and difficulties communicating with VA. Claims and Reimbursement Processing Timeliness According to VA officials, VA’s processing time for disability compensation claims for veterans living abroad (foreign claims) has improved since fiscal year 2013. For example, in fiscal year 2013, VA processed foreign claims in an average of 521 days and in fiscal year 2018, VA’s processing time for foreign claims decreased to an average of 131 days. However, as of September 2018, VA was not meeting its timeliness goal of 125 days for processing foreign claims and VA took an average of 29 days longer to process foreign claims than domestic claims. VA officials attributed the longer processing times for foreign claims to unreliable foreign mail systems and issues with retrieving and translating foreign records, among other things. 
From fiscal years 2013 through 2018, VA received disability compensation claims from 26,858 veterans living abroad and awarded over $85 million in benefits, according to VA data. According to VA officials, VA’s processing time for health care claims reimbursements to veterans or their medical providers for treatment of service-connected conditions through FMP has also improved. For example, FMP processed 53.8 percent of claims within 40 days in October 2018, compared to 70 percent of claims within 40 days in March 2019. However, as of March 2019, VA was not meeting its timeliness goal of processing 90 percent of claims reimbursements through FMP within 40 days. FMP officials attributed these delays to the loss of four staff positions in April 2017, as well as FMP assuming responsibility for claims from the Philippines in October 2017. To improve FMP’s processing timeliness, FMP officials stated that VA funded three new full-time equivalent positions for fiscal year 2019. From fiscal years 2013 through 2018, VA reported receiving 373,916 claims for reimbursement from veterans and providers living abroad and awarding over $169 million in claims reimbursements. Scheduling and Quality of C&P Exams According to both VA and VSO officials, veterans living abroad, including removed veterans, face challenges related to the scheduling and quality of C&P exams. As previously noted, veterans living abroad do not receive C&P exams from VA medical providers, but may receive exams from either a VA contractor or, in countries where VA does not have contractors, from a private provider scheduled by the U.S. embassy or consulate. From fiscal years 2013 through 2018, VA completed over 27,000 exams abroad through contractors and 6,800 exams through U.S. embassies and consulates, according to VA data. For contract exams, as of March 2019, VA had contractors in 33 countries and U.S. territories. 
This included Mexico, Germany, Belize, Canada, the Dominican Republic, the Federated States of Micronesia, the United Kingdom, the Philippines, Thailand, Costa Rica, Korea, and Poland, which were among the most common countries of nationality for removed veterans in our analysis. VA officials stated that contract C&P exam locations are determined by historical and pending claims data. Moreover, VA contractors abroad are generally located near military installations or areas in which VA determined there is a large veteran population. For embassy-scheduled exams, both VA and VSO officials told us that the effectiveness of coordination between VA and the embassies varies by country. For example, VA staff told us that they have been unable to schedule exams through embassies in Iraq or Afghanistan. State officials told us that processes for scheduling C&P exams and communicating with VA vary depending on the location, activity, and size of the embassy or consulate. State officials also told us that access to specialized providers to conduct exams, including mental health or audio exams, depends on the location of the embassy or consulate. In addition, both VA and VSO officials told us that veterans who receive embassy-scheduled exams from private providers abroad may receive lower-quality exams than veterans who live in the United States. For example, providers abroad may misinterpret VA exam requirements due to language barriers or unfamiliarity with U.S. medical terminology. These providers also do not have access to veterans’ service records, and therefore cannot assess whether a particular condition is service-connected. For these reasons, VA officials told us that VA staff submit C&P exams completed by private providers abroad to the VA Medical Center in Pittsburgh, Pennsylvania for an additional medical opinion. 
According to VA officials, VA is improving the scheduling and quality of C&P exams by expanding the number of countries where veterans may receive exams from VA contractors. Veterans Living Abroad Face Challenges Communicating with VA According to VA and VSO officials, veterans living abroad experience challenges communicating with the VA. For example, staff from all four VSOs we interviewed stated that unreliable foreign mail systems and differences in time zones make it challenging for veterans to communicate with the VA, particularly because VA uses paper mail to communicate with veterans living abroad. In addition, VA and VSO officials also told us that veterans living abroad may face challenges applying for and managing their benefits through an online portal maintained by VA and DOD, eBenefits. VA requires veterans to register for a “premium account” in order to access all of the functions of eBenefits, such as applying for benefits online and checking the status of a claim, among other things. To be eligible for a “premium account,” veterans must first verify their identity with DOD. If the veteran provides valid government identification (e.g. driver’s license) and documentation of a financial account (e.g. checking account), DOD may be able to verify the veteran’s identity through an online registration process and VA may be able to verify the veteran’s identity by telephone. If a veteran is unable to verify their identity in this manner, the veteran must verify their identity in-person at a VA regional office in the United States. Therefore, removed veterans who cannot travel to the United States would not be able to obtain a “premium account” if they had not previously registered prior to their removal. VA officials stated that these processes are intended to ensure compliance with National Institute of Standards and Technology guidance for online credentialing. Conclusions Throughout U.S. 
history, noncitizens have contributed to the United States through service in the Armed Forces. Through its policies, ICE has established that these noncitizen veterans warrant special consideration in the event that they become subject to immigration enforcement and removal from the United States. However, because ICE did not consistently adhere to these policies, some veterans who were removed may not have received the level of review and approval that ICE has determined is appropriate for cases involving veterans. Moreover, without developing and implementing a new policy or revising its 2004 and 2015 policies to require ICE agents and officers to ask about and document veteran status while interviewing potentially removable individuals, ICE has no way of knowing whether it has identified all of the veterans it has encountered and, therefore, does not have reasonable assurance that it is consistently implementing its policies and procedures for handling veterans’ cases. Further, maintaining complete electronic data on veterans it encounters would also allow ICE to better assess whether ICE has adhered to its policies for handling cases involving potentially removable veterans. Recommendations for Executive Action We are making the following three recommendations to ICE: The Director of ICE should take action to ensure consistent implementation of ICE’s policies for handling cases of potentially removable veterans. (Recommendation 1) The Director of ICE should develop and implement a policy or revise its current policies to ensure that ICE agents and officers identify and document veteran status when interviewing potentially removable individuals. (Recommendation 2) The Director of ICE should collect and maintain complete and electronic data on veterans in removal proceedings or who have been removed. (Recommendation 3) Agency Comments and Our Evaluation We provided a copy of this report to DHS, VA, DOD, and State for review and comment. 
DHS provided written comments, which are reproduced in full in appendix I and discussed below. DHS, VA, and DOD also provided technical comments, which we incorporated as appropriate. State indicated that it did not have any comments on the draft report. In its comments, DHS concurred with our three recommendations and described actions planned to address them. With respect to our first recommendation that ICE should ensure consistent implementation of its policies for handling potentially removable veterans, DHS concurred stating that ICE plans, among other things, to update its guidance and training materials to include information about military service. With respect to our second recommendation that ICE should develop and implement a policy or revise its current policies to ensure agents and officers identify and document veteran status when interviewing potentially removable individuals, DHS concurred, stating that ICE plans to review and clarify existing guidance on the issuance of NTAs to veterans. DHS also concurred with our third recommendation that ICE collect and maintain complete and electronic data on veterans in removal proceedings or who have been removed. Specifically, DHS stated that ICE plans to add data elements for veteran status to its existing systems. The actions described above, if implemented effectively, should address the intent of our recommendations. We are sending copies of this report to the appropriate congressional committees, the Acting Secretary of Homeland Security, the Secretary of Veterans Affairs, the Acting Secretary of Defense, the Secretary of State, and other interested parties. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at 202-512-8777 or gamblerr@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of our report. 
GAO staff who made key contributions to this report are listed in appendix II.

Appendix I: Comments from the Department of Homeland Security

Appendix II: GAO Contacts and Staff Acknowledgments

GAO Contact
Staff Acknowledgments
In addition to the contact named above, Meg Ullengren (Assistant Director), Ashley Davis, Eric Hauswirth, Khaki LaRiviere, Sasan J. “Jon” Najmi, Claire Peachey, Mike Silver, Natalie Swabb, and James Whitcomb made key contributions to this report.
Why GAO Did This Study

Throughout U.S. history, noncitizens have served in the U.S. Armed Forces. Although the Immigration and Nationality Act allows noncitizen service members to acquire citizenship, some veterans may not apply or may not satisfy all eligibility criteria. If the Department of Homeland Security (DHS) determines that a noncitizen veteran is potentially removable, the veteran may be subject to administrative immigration enforcement and removal. ICE, among other things, is responsible for identifying and removing aliens who violate U.S. immigration law. GAO was asked to review issues related to the removal of noncitizen veterans. This report examines (1) the extent to which ICE has developed and implemented policies for handling and tracking cases of potentially removable veterans; (2) how federal agencies facilitate the naturalization of noncitizen service members and veterans, and what is known about the number who have applied for naturalization; and (3) how removal affects veterans' eligibility for and access to VA benefits and services. GAO reviewed documentation, met with agency officials, analyzed available data on veterans placed in removal proceedings, and conducted a review of removed veterans' alien files. GAO also analyzed data on military naturalization applications.

What GAO Found

U.S. Immigration and Customs Enforcement (ICE) has developed policies for handling cases of noncitizen veterans who may be subject to removal from the United States, but does not consistently adhere to those policies, and does not consistently identify and track such veterans. When ICE agents and officers learn they have encountered a potentially removable veteran, ICE policies require them to take additional steps to proceed with the case. GAO found that ICE did not consistently follow its policies involving veterans who were placed in removal proceedings from fiscal years 2013 through 2018.
Consistent implementation of its policies would help ICE better ensure that veterans receive appropriate levels of review before they are placed in removal proceedings. Additionally, ICE has not developed a policy to identify and document all military veterans it encounters during interviews, and in cases when agents and officers do learn they have encountered a veteran, ICE does not maintain complete electronic data. Therefore, ICE does not have reasonable assurance that it is consistently implementing its policies for handling veterans' cases. U.S. Citizenship and Immigration Services (USCIS) and the Department of Defense (DOD) have policies facilitating the naturalization of noncitizen service members and veterans, and provide informational resources to noncitizen service members seeking naturalization. The number of military naturalization applications received by USCIS declined sharply from fiscal years 2017 to 2018, resulting in a decreased number of applications approved in fiscal year 2018. USCIS and DOD officials attributed this decline to several DOD policy changes that reduced the number of noncitizens joining the military. Citizenship status, including removal history, does not affect a veteran's eligibility for Department of Veterans Affairs (VA) benefits and services. However, living abroad affects eligibility for certain VA benefits and services. Veterans living abroad may also experience challenges accessing certain benefits and services, such as slower disability claim processing.

What GAO Recommends

GAO recommends that ICE (1) ensure consistent implementation of its existing policies for handling veterans' cases; (2) develop a policy or revise its current policies to identify and document veterans; and (3) collect and maintain complete data on veterans in removal proceedings or who have been removed. DHS concurred with GAO's recommendations.
gao_GAO-20-394
Background

U.S. Export Controls

The U.S. government implements export controls to manage risks associated with exporting sensitive items while ensuring that legitimate trade can still occur, and to advance U.S. national security and foreign policy objectives. These export controls are governed by a complex set of laws, regulations, and processes that multiple federal agencies administer to ensure compliance. State and Commerce each play a significant role in the implementation of U.S. export controls. State controls the export of sensitive military items, known as defense articles and defense services, such as tanks, fighter aircraft, missiles, and military training, which it lists on the U.S. Munitions List (USML). Commerce controls the export of U.S.-origin items with both commercial and military applications (known as “dual-use” items), such as computers, sensors and lasers, and telecommunications equipment, as well as less sensitive military items, which it lists on the Commerce Control List (CCL). Items subject to State and Commerce jurisdiction are governed by separate laws and regulations. The Arms Export Control Act of 1976, as amended, provides the statutory authority to control the export of defense articles and services, which the President delegated to the Secretary of State. State’s International Traffic in Arms Regulations (ITAR) implements this authority and identifies the specific types of items subject to control in the USML. Within State, the Directorate of Defense Trade Controls (DDTC) is responsible for implementing controls on the export of these items. The Export Control Reform Act of 2018 provides the statutory authority for Commerce to control the export of less sensitive military items not on the USML, dual-use items, and basic commercial items. Commerce’s Export Administration Regulations (EAR), which contain the CCL, implement this authority. In general, items subject to the EAR include commodities, software, and technology.
Commerce’s Bureau of Industry and Security (BIS) is responsible for administering these export controls. DDTC and BIS control the export of items within their respective jurisdictions by requiring, in certain instances, a license or other authorization to export an item. Whether a license is required will generally depend on the intended destination, end-use and end-user, and the item’s classification. Generally, unless a license exemption or exception applies, exporters submit a license application to DDTC if their items are controlled on the USML, or to BIS if their items are controlled on the CCL. In addition to the shipment of tangible commodities or the tangible or intangible transfer of software or technology outside of the United States, export control regulations also consider the transfer or release of certain U.S. technology or source code to a foreign person in the United States to be an export. These transfers or releases are commonly referred to as “deemed exports” and can take the form of written, oral, or visual disclosure of technology or source code. Under the ITAR, technical data is controlled for all exports, including deemed exports. Under the EAR, technology and source code are controlled for the purpose of deemed exports.

Export Controls in the University Environment

Export-controlled items or source code used in U.S. universities’ research activities may be subject to export controls. Such activities could include shipping an export-controlled item—such as certain biological samples or research equipment—overseas. Additionally, the release of export-controlled items or source code in connection with research activities to a foreign student or scholar could qualify as a deemed export requiring a license. U.S.
universities may be exempt from or not subject to export controls if the information they are planning to release falls into one of three categories: published information or information in the public domain, certain academic information, or fundamental research.

Published information or information in the public domain: Under the ITAR, information that is published and generally available in the public domain through specific methods is not considered to be technical data, and is therefore not subject to ITAR export licensing requirements. Under the EAR, unclassified technology or software that has been made available to the public without restrictions upon its further dissemination is considered to be published and is therefore not subject to the EAR.

Certain academic information: Under the ITAR, information regarding general scientific, mathematical, or engineering principles commonly taught in schools is not included in the definition of technical data and is not subject to ITAR export controls. Similarly, information that is taught in catalog-listed courses or associated teaching laboratories of academic institutions is not subject to the EAR.

Fundamental research: Fundamental research is not subject to the ITAR or the EAR. The ITAR defines fundamental research as basic and applied research in science and engineering where the resulting information is ordinarily published and shared broadly within the scientific community, as distinguished from research the results of which are restricted for proprietary reasons or specific U.S. government access and dissemination controls. The EAR defines fundamental research as research in science, engineering, or mathematics, the results of which ordinarily are published and shared broadly within the research community, and for which the researchers have not accepted restrictions for proprietary or national security reasons.
Under the EAR, software and technology that arise during or result from fundamental research that is intended to be published is also not subject to the EAR. For example, a foreign person may be able to read research reports or view presentations that result from fundamental research and are intended to be published without the university obtaining a license. However, if that research involves software or technology that is subject to the ITAR or the EAR and is not intended to be published or produces an item that is subject to the ITAR or the EAR, the foreign person generally could not participate in the research without the university securing an export license.

Foreign Threats to Universities and Vulnerabilities in U.S. Export Controls

According to the FBI and DOD, as foreign adversaries use increasingly sophisticated and creative methodologies to exploit America’s free and open education environment, the United States faces an ever-greater challenge to strike a sustainable balance between unrestricted sharing and sufficient security within the U.S. university research environment. According to a 2019 FBI white paper, the inclusion of foreign students and scholars at U.S. universities entails both a substantial benefit and a notable risk. Specifically, the FBI reported that while many of these foreign students and scholars contribute to advanced research, the development of cutting-edge technology in an open research environment puts academia at risk for exploitation by foreign actors who do not follow U.S. laws and regulations. Additionally, a DOD report from September 2019 stated that research targeted by foreign talent programs includes topics relevant to U.S. national defense. According to the FBI, while the majority of foreign students and scholars do not pose a threat to their host institution, fellow classmates, or research fields, some foreign actors seek to illicitly or illegitimately acquire U.S.
academic research and information to advance their home countries’ scientific, economic, and military development goals. By doing so, they can save their home countries significant time, money, and resources while achieving generational advances in technology. The U.S. government, including GAO, has long identified vulnerabilities in U.S. agencies’ efforts to protect U.S. research from foreign entities who might seek to exploit the openness of the U.S. academic environment. In prior GAO reports, we identified weaknesses in the deemed export control system that could allow the unauthorized transfer or release of export-controlled items to foreign persons in the United States. Moreover, since 2007, we have identified the protection of technologies critical to U.S. national security interests—including through U.S. export controls—as a high-risk area. More recently, the Senate Homeland Security and Governmental Affairs Committee reported that federal agencies need to do more to mitigate the threat to American universities by foreign persons seeking to undermine the integrity of the American research enterprise and endanger our national security.

Foreign Students and Scholars at U.S. Universities

More than 1.2 million foreign students and 21,000 foreign scholars studied or worked at U.S. universities in 2018. Nearly a third of foreign students studying in the United States are from China, and a large proportion of Chinese students major in science, technology, engineering and mathematics (STEM) fields (see table 1). In addition, 10 countries accounted for about 70 percent of the more than 21,000 foreign scholars who worked at U.S. universities in 2018 (see table 2).

Federal Funding for University Research

The federal government obligated approximately $33 billion for U.S. universities for research and development in fiscal year 2017. The National Institutes of Health obligated approximately 54 percent of federal research and development funding provided to U.S.
universities that year. The Department of Energy, DOD, and the National Aeronautics and Space Administration also obligated significant funding for universities for research (see fig. 1).

State and Commerce Have Provided Guidance and Conducted Outreach, but Universities Expressed Concerns about Their Adequacy for Addressing University-Specific Issues

State’s DDTC and Commerce’s BIS have developed export compliance-related guidance and conducted outreach to support all exporters’ understanding of and compliance with the regulations. However, university and association officials raised concerns that DDTC and BIS guidance and outreach does not adequately address university-specific export compliance issues. In addition, DDTC’s export compliance guidelines do not explicitly promote risk assessments, identified by GAO as a key element for determining whether an entity’s processes address current threats.

State and Commerce Have Provided Export Control-Related Guidance and Conducted Outreach to Support Exporters’ Compliance Efforts

State’s DDTC and Commerce’s BIS have developed various forms of written guidance and conducted outreach to support all exporters’ understanding of export control regulations. The ITAR and the EAR regulations apply to all exporters, whether universities, private entities, non-profits, or government entities, and according to DDTC and BIS officials, the guidance and outreach materials they have developed are similarly applicable to all potential exporting entities, including universities.

Written Guidance

Both DDTC and BIS provide written guidance intended to (1) increase awareness of applicable export control regulations, (2) provide specific instructions or tools for complying with those regulations, and (3) dispense transaction or entity-specific information or guidance for all exporters.
For example, DDTC’s and BIS’s websites include general information about their respective export control regulations, including guidance on when an export license is needed and how such a license can be procured. DDTC highlights useful resources available on its website in a letter it sends to entities, including universities, when those entities register with DDTC as potential exporters of ITAR-controlled items. BIS’s website includes information about deemed exports, which one BIS official said is particularly relevant to universities. Both websites also include sets of frequently asked questions. DDTC and BIS have also developed guidance that provides specific instructions or tools for complying with the agencies’ regulations, including export compliance guidelines (see below for more information about these guidelines) and decision tools for classifying items subject to the ITAR and the EAR. For example, DDTC offers exporters an online tool to help them identify steps to follow in reviewing the USML and in classifying items subject to the ITAR. Similarly, BIS provides exporters with (1) online tools to help them classify items subject to the EAR and (2) guidelines for completing the license application for both deemed exports and tangible exports, such as chemical and biological items. Finally, both DDTC and BIS offer several mechanisms for obtaining transaction- or entity-specific information or guidance. For example, DDTC and BIS provide advisory opinions when an exporter requests a formal answer to an export control-related question, and both agencies operate a hotline to provide informal guidance to potential exporters. In addition, BIS reviews and provides feedback on export compliance manuals adopted by exporting entities, including universities, when requested. Exporters may also request a commodity jurisdiction classification from DDTC and BIS to determine whether a commodity is subject to the ITAR or the EAR. 
Training and Outreach

Both agencies also provide training, present at conferences, and conduct site visits to further educate exporters. For example, DDTC provides in-house seminars on export licensing basics approximately twice a year. BIS has developed and conducts various types of training related to export control compliance, including training videos that are publicly available on its website. BIS also hosts regional seminars and an annual conference in Washington, D.C., on export controls and export compliance. Both DDTC and BIS participate in various conferences. For example, DDTC and BIS participate in an annual conference affiliated with the Association of University Export Control Officers, where agency officials discuss topics such as regulatory updates, license statistics, and export compliance best practices. In fiscal year 2019, DDTC participated in 52 outreach events, two of which were university-specific. During that year, BIS conducted or participated in over 80 outreach events, six of which were university-specific. DDTC and BIS also conduct site visits to learn more about a given entity’s export compliance program and provide feedback, among other things. According to officials, DDTC conducted three university site visits from 2015 through 2019. Similarly, according to officials, BIS conducted two university site visits from 2013 through 2019. Further, officials at both agencies stated that they share information at outreach events about export compliance program strengths and weaknesses they identified during site visits.
Universities Expressed Concerns that Agency Guidance and Outreach Does Not Adequately Address University-Specific Export Compliance Issues

Officials from universities in our sample and university association officials told us that most DDTC and BIS export control-related guidance and outreach does not address those issues most relevant to the university export compliance environment and that additional guidance and outreach efforts would be useful. For example, according to association officials and officials at six of the nine universities we visited, it is sometimes difficult to understand how to implement in the university environment what they perceive to be industry-focused guidance developed by DDTC and BIS. Some of these officials further noted that the export compliance environment for industry typically differs from that for academia. Specifically, university and association officials noted that companies are typically focused on developing proprietary technologies, whereas universities are primarily focused on expanding knowledge through fundamental and collaborative research. In addition, officials from two universities stated that researchers typically do not see themselves as exporters, which makes it difficult to explain to them how export control regulations pertain to university research. For example, one official told us that it is difficult to explain the concept of a deemed export within an open, academic setting to university researchers. Officials at two universities also noted that the term “defense service,” a type of export subject to the ITAR, is a difficult concept to explain to university researchers who do not consider their work to be a “service.” Officials at four universities told us that they rely on university associations to develop a common understanding or interpretation of the regulations for the university context.
For example, officials from one university said that university associations are a resource for sharing information and best practices regarding export compliance in the university environment. An official from another university stated that although she reviews the DDTC and BIS websites periodically for regulatory updates, she relies on university associations to explain how any updates affect universities. Some university officials stated that some agency outreach efforts are useful, but others said that more outreach is needed. Specifically, five university officials mentioned specific agency training and outreach efforts as being useful. For example, the officials said they appreciate that BIS conducts regional seminars for all exporters, which they said are easier to get to than events in Washington, D.C. One of these officials further noted that these seminars discuss how to set up an effective compliance program. However, four university officials stated that additional outreach efforts by both DDTC and BIS were needed. For example, two of these officials suggested that agencies consider additional training for universities, such as webinars or videos providing examples of simple export scenarios for university audiences, to clarify the intent of the export control regulations and explain how regulatory requirements pertain to university research. In discussing additional guidance needs, university and association officials told us that a set of all-encompassing, university-specific guidance is not necessary, but that additional guidance addressing specific topics that are relevant to universities would be useful. For example, one university association told us that additional DDTC and BIS guidance could take the form of frequently asked questions regarding issues of interest to universities, such as deemed exports and fundamental research. 
Similarly, one university export control officer stated that additional sets of frequently asked questions focused on issues most relevant to university export compliance, examples of university export compliance best practices, and examples of export control violations committed by universities would be particularly helpful. This export control officer explained that such guidance would help her and her colleagues (1) explain why the export control regulations are relevant for university researchers and (2) better explain the need for additional compliance resources to university management. University and association officials further stated that it would be helpful if DDTC and BIS would work with university associations to develop guidance that would support universities’ efforts to interpret the regulations consistently. These officials said that a stronger partnership between the regulatory agencies and universities would support agencies’ understanding of the university environment and result in better guidance for universities. They noted, for example, that soliciting university input on existing guidance and suggestions for additional guidance could provide DDTC and BIS with helpful information about the challenges that universities face in complying with export control regulations in their distinct environment. DDTC officials acknowledged that additional guidance addressing university-specific issues could be helpful and agreed that it may be difficult for university export control officers to explain export control regulations to researchers. They told us that it could be useful for the department to draft white papers, sets of frequently asked questions, or tip sheets specifically addressing issues most relevant to universities. For example, officials suggested that DDTC could develop tips on what may constitute a defense service in the university context. 
DDTC officials explained that they had not drafted such guidance because of resource constraints and other priorities. When we asked BIS officials about the potential need for university-specific guidance, one official identified some currently available guidance that could be most useful to universities. For example, BIS maintains a set of frequently asked questions and a YouTube webinar concerning deemed exports, and has guidance related to fundamental research available on its website. According to BIS, it regularly updates guidance related to deemed exports and fundamental research, including in connection with regulatory changes that affected both areas in 2016. GAO’s Standards for Internal Control in the Federal Government state that management should communicate with, and obtain information from, external parties using established reporting lines. Although BIS has provided written guidance that is relevant to universities and both DDTC and BIS conduct university-specific outreach, officials at universities we visited and associations we interviewed raised concerns about the adequacy of this guidance and outreach for the university research environment. Without additional guidance and outreach from DDTC and BIS that addresses issues most relevant to universities, some universities may utilize guidance, training, or other resources developed by other entities that may not facilitate compliance with export control regulations in the way that DDTC and BIS intended. Hence, universities may be at risk of failing to comply with export control regulations and properly safeguard export-controlled items from foreign students and scholars who are not authorized under deemed export licenses to receive such items.
In addition, such university-focused guidance is consistent with the Export Control Reform Act of 2018, which requires the President to enforce export controls by providing guidance in a form that facilitates compliance by academic institutions and other entities.

State’s Written Guidance Does Not Explicitly Promote Risk Assessments

Although State’s DDTC and Commerce’s BIS officials identified their respective export compliance guidelines, available on the agencies’ websites, as key sources of written guidance for supporting exporters’ compliance with each agency’s export control regulations, DDTC’s compliance guidelines do not explicitly promote risk assessments. Both sets of export compliance guidelines include similar elements that the agencies consider critical for an effective export compliance program. For example, both sets of guidelines include elements related to management commitment, recordkeeping, and training. However, DDTC’s guidelines do not advise entities on how to assess risk, which GAO has identified as a key element for determining whether an entity’s processes address current threats.

BIS Guidelines. BIS’s export compliance guidelines identify eight elements of an effective export compliance program. BIS officials stated that the agency’s guidelines provide a useful compliance framework for all exporters, including universities. These guidelines include information about recordkeeping, conducting internal audits, performing risk assessments, and training, among other elements. BIS’s guidelines also provide templates, checklists, specific examples, and other tools exporters may use to develop an export compliance program or enhance an existing program. For example, the guidelines include a summary of potential risks involved in each phase of the exporting process with a list of tools to mitigate such risks.
The guidelines also include an audit module tool to help exporters review and revise their current compliance program with a set of checklists for each of the eight elements.

DDTC Guidelines. DDTC’s export compliance guidelines include nine elements that it has identified as important aspects of an effective export compliance program. According to DDTC, its guidelines are also applicable to all exporters, including universities, and the agency references them in a confirmation letter when entities register as exporters. The guidelines include information about organizational structure, corporate commitment and policy, internal monitoring, and training, among other elements. The guidelines also provide examples of questions a compliance program should address for some elements. However, DDTC’s export compliance guidelines lack a risk assessment element. Risk assessments provide entities with an opportunity to review their processes to determine whether the processes in place address current threats. According to DDTC, the agency has not added guidance related to risk assessments to the export compliance guidelines because it assumes that exporters conduct a risk assessment for each compliance element as a matter of course. GAO’s Standards for Internal Control in the Federal Government state that management should communicate quality information externally so that external parties can help the entity achieve its objectives and address related risks. Further, according to an Office of Management and Budget bulletin, agencies increasingly have relied on guidance documents to inform the public and to provide direction to their staffs as the scope and complexity of regulatory programs have grown. Exporters, including universities, may not conduct periodic risk assessments if DDTC’s guidance does not encourage them to do so. As such, they may be unaware of potential threats and may not take appropriate measures to protect export-controlled items.
Universities Identified Challenges Working with and Obtaining Guidance from Other Agencies

University and association officials we interviewed identified challenges working with and obtaining guidance from federal agencies that fund research and monitor threats to the United States, including threats to research security. Specifically, university and association officials identified the following three challenges working with and obtaining guidance from these agencies: (1) federal agencies are developing different requirements for reporting financial conflicts of interest to address foreign influence issues, (2) some agencies provide briefings and other forms of guidance related to export controls and foreign threats that do not sufficiently address universities’ needs, and (3) DOD officials inconsistently interpret export control regulations and misunderstand what constitutes fundamental research. Agencies are taking steps to address some of these challenges. For example, an interagency working group established by the White House Office of Science and Technology Policy and individual federal agencies are undertaking efforts to address university concerns regarding inconsistent financial conflict of interest reporting requirements and the lack of relevant, university-specific resources to address threats identified by some agencies. However, the actions that DOD plans to take to address agency officials’ inconsistent interpretation of the regulations and their misunderstanding of the term fundamental research may not fully address the challenge identified by university and association officials.
Universities Identified Inconsistent Reporting Requirements as a Challenge

University and association officials expressed concerns that federal agencies are developing different requirements for reporting financial or other conflicts of interest, such as foreign funding, but some of these differences in reporting requirements may be necessary to address varying agency-specific legal requirements. For example, recent reporting guidance from the National Institutes of Health reminds researchers to report all sources of support, including support for laboratory personnel and the provision of materials that are not freely available, whereas the most recent guidance from DOD does not include such clarification of what constitutes “support.” Although each agency has a separate mission and separate legal authorities, which may require agencies to have different financial or other conflict of interest reporting requirements, officials at several universities and associations discussed the challenges they face in complying with these varied reporting requirements. Representatives from one university association explained that these new requirements are especially challenging for universities because they typically accept funding from multiple agencies. In addition, officials from one university stated that the variation across the agencies’ reporting requirements makes it difficult to develop one process to support researchers’ efforts to comply with them. According to university and association officials, universities will need to spend more time and resources to understand and comply with each set of requirements. Moreover, one association official told us there is more room for universities to make mistakes when each agency develops different requirements to deal with the same issue. 
An interagency working group established by the White House Office of Science and Technology Policy is undertaking efforts to address university concerns regarding inconsistent financial conflict of interest reporting requirements. In May 2019, the Office of Science and Technology Policy established the Joint Committee on the Research Environment (JCORE), an interagency effort to address research security and other related issues. According to officials in the Office of Science and Technology Policy, JCORE has drafted one set of coordinated guidance for funding agencies to ensure that funding agencies consistently require researchers to report the same types of information regarding potential conflicts of interest. In addition, JCORE has drafted a set of non-binding guidelines for universities to support their efforts to comply with conflict of interest reporting requirements. Officials stated that the draft guidance for funding agencies and the non-binding guidelines for universities were under review as of January 2020. Officials further stated that JCORE is developing a set of case studies and other materials that federal agencies will be able to use to educate researchers and universities about the types of situations that represent a potential conflict of interest.

Universities Cited a Lack of University-Specific Resources for Addressing Threats Identified by Some Agencies as a Challenge

Agencies such as the FBI, DHS, BIS’s Office of Export Enforcement, and DOD’s Defense Counterintelligence and Security Agency provide briefings and other forms of guidance related to export controls and foreign threats. For example, officials at these agencies provide briefings to individual universities or to groups of universities during university association events, such as the annual Association of University Export Control Officers conference and the annual Academic Security Conference hosted by the Texas A&M University System. 
In addition, to target its university outreach efforts in late 2018 and early 2019, DHS identified the 11 universities with the largest numbers of foreign students studying in STEM fields in 2018. DHS developed a template presentation for DHS field offices to use during their outreach to these universities to increase awareness of export control laws. According to DHS, it plans to expand this effort to target the top 60 universities with foreign students in STEM fields. The Department of Justice and BIS’s Office of Export Enforcement have both published reports summarizing recent major U.S. export enforcement-related criminal and administrative prosecutions. Some university officials told us that the briefings and other information that some agencies provide are helpful for improving their awareness of threats. However, officials at five of the nine universities we visited and officials from three university associations said that these briefings and other information are not as useful as they could be. Some of these officials cited the following reasons why they did not find such information to be useful:

Classified information cannot be shared widely: Some university officials and an association representative stated that some agencies often provide classified briefings and materials that they cannot share widely with the university community. One university official said that it would be helpful if agencies, where possible, could also provide some unclassified information with clear examples that could then be shared with researchers about current threats and what these threats may look like in a university setting. Without such information, university officials are restricted in how they can use the threat-related information they obtain for raising awareness on campus, according to a university association official. 
Moreover, another university official stated that if export control officers cannot share relevant threat information with the university’s administration because of classification issues, the university may not get the resources it needs to improve its compliance programs and properly comply with export control regulations.

Guidance and threat information do not address the university environment or rely on outdated examples: Representatives from three university associations and one university stated that some federal agencies do not provide guidance and threat information that address the university research environment, and two associations said that any university-specific examples federal agencies provide during briefings are outdated, which limits the relevance of the guidance and threat information to the university environment. For example, an official from one association explained that in 2015 the FBI provided a threat briefing at an association meeting and requested that university officials contact the FBI if a researcher had, among other things, published in an international scientific journal or attended an international conference, or if any graduate students worked in university laboratories late at night. This official noted that these FBI officials did not understand that researchers must undertake such activities to obtain tenure and that it is common for students to work late at night. In addition, according to an official from one association, when university officials ask the FBI to provide recent examples of foreign students stealing sensitive or export-controlled items from U.S. universities, the FBI often cites cases that occurred more than 10 years ago. He further stated that federal agencies are raising alarms that universities are vulnerable to foreign theft of export-controlled items without any concrete, recent examples. 
FBI threat briefings lack actionable guidance: University officials told us that many FBI threat briefings are not helpful because they do not provide actionable guidance for addressing identified threats, which limits universities’ understanding of how to address them. For example, one university official stated that the FBI briefings do not provide any detailed information about what attendees should do with the information they obtain. He further stated that the briefings would be more beneficial if the FBI provided prescriptive guidance on how to use the information.

DOD and the FBI are taking steps to partner with academia to address challenges regarding information sharing. DOD is undertaking several collaborative efforts with academia in response to Section 1286(d) of the 2019 National Defense Authorization Act, which directed the Secretary of Defense to establish an initiative to support protection of national security academic researchers from undue influence and other security threats. For example, DOD partnered with the National Academy of Engineering to establish the “Roundtable on Linking Defense Basic Research to Leading Academia Research and Engineering Communities,” or the “Deans’ Roundtable.” The Deans’ Roundtable brings DOD leadership together with deans from U.S. university engineering programs to facilitate dialogue between DOD and the academic research community on research protection. The roundtable’s objectives are to better understand major issues in the defense research community and to form working groups to help craft potential solutions to challenges identified by the roundtable. The roundtable is expected to help address issues of research espionage by foreign governments on university campuses and inform senior DOD officials about technological developments on university campuses, among other efforts. 
The FBI partnered with the Academic Security and Counter Exploitation Program, a university-led association focused on research security, to produce a series of unclassified “awareness-raising” materials for university audiences. According to FBI officials and a member of the Academic Security and Counter Exploitation Executive Committee, the FBI recognized that university officials were frustrated that relevant FBI documents regarding the foreign threat to U.S. research were classified. The association’s Executive Committee member further explained that this created significant restrictions on the way university officials could use the materials for awareness and training efforts on campus. He further noted that many of these “awareness-raising” materials were tone-deaf to the needs of academia and did not explain how the threats were related to university researchers’ work. The Academic Security and Counter Exploitation Executive Committee worked with the FBI to revise existing FBI handouts to create a series of academic-focused, unclassified documents suitable for inclusion in awareness and training programs on university campuses. For example, they revised an FBI handout regarding the threat that China poses to corporate America to instead focus on the threat that China poses to academia.

Universities Identified DOD Officials’ Inconsistent Interpretation of Export Control Regulations as a Challenge, and DOD’s Planned Actions Will Not Fully Address the Issue

Officials from multiple universities and associations stated that DOD officials inconsistently interpret export control regulations and misunderstand the term fundamental research and its implications when providing funding for university research, which some officials said leads to confusion, results in contract delays, and may limit universities’ ability to conduct research for DOD. DOD officials acknowledged that some officials have inconsistently interpreted the regulations. 
Moreover, DOD reported to Congress in September 2019 that it is mindful that reducing the quantity and competitiveness of early ideas flowing through the university system to the department by non-judicious use of controls could have negative consequences. Officials at four of the nine universities we visited identified DOD officials’ inconsistent interpretation of the regulations and their misunderstanding of what constitutes fundamental research as a challenge they face in complying with export control regulations. For example, officials at three universities asserted that, in some cases, DOD includes contract clauses, such as export control-related clauses, that are not relevant to or conflict with other stated terms in the contract. Officials at two universities further stated that there appears to be an internal disagreement between the program officers and contracting officers about how to interpret some aspects of export control regulations. One university official said the university tries to negotiate with DOD when contracts that the university perceives as only containing fundamental research include export control-related clauses; however, the official said these types of delays slow the pace of research. Moreover, university association officials noted that member universities are reporting that DOD is increasingly including publication restrictions in research contracts for projects that the universities believe only entail fundamental research. Research does not qualify as fundamental research if the researcher accepts any restrictions on the publication of the information resulting from the research. Officials from one association stated that DOD is reluctant to remove publication restrictions from award contracts even when it acknowledges that the work may only involve fundamental research. 
As a result, universities that only accept contracts for fundamental research may decline an awarded contract if the conditions for the award vary from initial expectations, which may lead to a loss in research funding for many universities focused on fundamental research. In 2008 and 2010, DOD issued memoranda to its personnel providing clarifying guidance concerning fundamental research and directed that information about contracted fundamental research be included in general training modules for research program personnel. For example, these memoranda state that DOD must not place restrictions on subcontracted unclassified research that has been scoped, negotiated, and determined to be fundamental research within the definition of National Security Decision Directive 189 according to the prime contractor and research performer and certified by the contracting component, except as provided in applicable federal statutes, regulations, or executive orders. These memoranda also state that the effective implementation of the guidance requires that all DOD personnel involved in the acquisition and monitoring of fundamental research have a clear and common understanding of the relevant statutes, regulations, and policies, including the definitions of key terms. To implement these memoranda, DOD also amended the defense federal acquisition regulations in 2013 to update the relevant contract clause for inclusion in DOD contracts. The Deputy Director for Basic Research at DOD stated that most program officers and contracting officers are familiar with the export control regulations and understand the term fundamental research and how to interpret it in the context of university research, but acknowledged that some officials have inconsistently interpreted the regulations and misinterpreted the term fundamental research. 
Specifically, DOD officials stated that program officers and contracting officers who frequently work with universities through basic research grants understand what constitutes fundamental research; however, program officers and contracting officers working with applied research contracts may not be as familiar with it or with engaging with universities. Furthermore, DOD officials acknowledged that although DOD has developed export control-related training, it does not require program officers and contracting officers to take this training. Officials stated that not all program officers and contracting officers work with universities, so they do not all need to take training on export control regulations. To address these and other research-related concerns, DOD’s Office of Basic Research convened a workshop for basic research program officers in October 2019 to facilitate the sharing of best practices and identify any concerns. According to DOD, program officers raised a concern that they need to constantly ensure that the research being conducted is properly categorized as basic or fundamental research and has not transitioned into applied or non-fundamental research in the course of the contract. DOD’s Office of Basic Research is planning to develop a checklist based on input from program officers that program officers can use when determining whether the scope of a research project meets the definition of fundamental research. Following this workshop, a DOD official stated that program officers are best suited to make technical and nuanced fundamental research determinations because program officers have first-hand knowledge about the scope of the research project. These actions, however, may not address the concerns universities raised, because they do not include any effort to further educate contracting officers. 
Contracting officers may add export control clauses or publication restrictions to a contract award after the program officer writes the original solicitation. Additionally, contracting officers are the individuals with regulatory authority for defense contracts to certify that research is fundamental research. Hence, a checklist for program officers may not fully address program officers’ and contracting officers’ inconsistent interpretation of the regulations, including determining whether university research constitutes fundamental research. Without additional efforts to educate all relevant DOD officials on a clear and common understanding of the relevant statutes, regulations, and policies, as identified by the department’s 2010 memorandum, universities may continue to perceive that DOD officials inconsistently interpret the regulations and misunderstand whether research constitutes fundamental research, potentially hindering DOD-funded research at universities.

Universities We Visited Generally Have Developed Export Compliance Policies and Practices Aligned with Agency Guidelines, Though Some Gaps Exist

The nine universities we visited have generally developed export compliance policies and practices to safeguard export-controlled items that align with State’s DDTC and Commerce’s BIS export compliance guidelines, but some of the universities’ compliance efforts have weaknesses in certain areas (see fig. 2). We reviewed DDTC’s and BIS’s export compliance guidelines to identify common elements and developed a list of eight elements that the agencies classified as critical for an effective compliance program, such as recordkeeping and training, among others. See table 3 for a description of the eight elements we identified for this assessment. We then interviewed officials at nine universities about their universities’ export compliance policies and practices. 
We selected universities with annual average expenditures for research and development during the 2013 through 2017 period that ranged from $15 million to over $750 million. In addition, we selected universities on the basis of a number of factors, including total research and development expenditures, number of graduate students, research funding received from certain federal agencies, and geographic dispersion (see app. I for more information about our selection methodology). Finally, we assessed the university officials’ responses against the eight elements of an effective export compliance program to determine the extent to which these universities’ policies and practices align with DDTC’s and BIS’s export compliance guidelines. See appendix III for a detailed description of our assessment of each university’s policies and practices against these elements and a description of the export compliance policies and practices the selected universities have in place. In addition, we reviewed the websites of a generalizable sample of 100 U.S. universities to determine the extent to which these universities provide publicly available information about export control regulations, training, and other topics pertinent to the campus community. In general, the universities with larger research and development expenditures provided more export control-related information on their websites. See appendix IV for the results of this analysis.

Most of the Universities We Visited Have Export Compliance Policies and Practices That Generally Align with Agency Guidelines, with More Robust Practices in Four Areas

The seven universities with the highest research expenditures among the nine we visited have export compliance policies and practices that generally align with the eight elements we identified from DDTC’s and BIS’s export compliance guidelines, while the two universities with the lowest expenditures among the nine have more weaknesses in their compliance programs. 
Most of the universities we visited have robust export compliance practices in the following four areas:

Management commitment and organizational structure: All nine of the universities we visited have developed policies and practices that fully or partially align with this element. For example, management at seven of the nine universities we visited issued public statements supporting the university’s export compliance program. These statements briefly described export control regulations, discussed the importance of the universities’ compliance with export control regulations, and emphasized the universities’ commitment to compliance efforts.

Export authorization and tracking export-controlled items: All but one of the nine universities we visited have developed policies and practices that fully align with this element. For example, officials at all nine of the universities we visited stated that their universities require researchers to submit research proposals to an office charged with reviewing proposals and awards for grants and contracts. When reviewing research proposals or awards, this office will flag those proposals and awards that may be subject to export control regulations for further review, either by the export control officer or another authorized university entity. In addition, officials at seven of the universities said they had developed mechanisms to track any export-controlled items being used or developed by the university. The universities we visited also employ various security mechanisms to safeguard export-controlled items. These include physical security mechanisms, as shown in figure 3, as well as information technology security mechanisms, such as setting up separate networks for researchers using export-controlled data in their research. 
Recordkeeping: Officials at all nine universities we visited have developed policies and practices that fully align with this element to ensure that they maintain appropriate export control-related records. For example, at least five of the nine universities we visited maintain their export compliance-related records in an electronic database or other electronic system. One of the universities utilizes a system that tracks each research project from start to finish. This system enables officials to search for all export control-flagged research proposals, awards, and technology control plans, among other documents. One of the officials also told us that the system will alert the export control officer to any technology control plans with an upcoming expiration date. Two of the remaining four universities maintained some files electronically and some in hard copy. The other two universities did not discuss how they maintained their files, but identified who is responsible for export control-related recordkeeping and the types of documents they maintain.

Reporting violations: All nine universities we visited have developed policies and practices that fully align with this element. Specifically, these universities have developed clear procedures outlining the actions employees should take in the event that potential noncompliance is identified. For example, officials at seven universities told us that they have a compliance hotline that people can use to report suspected violations. 
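The expiration-date alert described for one university's project-tracking system can be sketched in a few lines. The plan records, field names, and 90-day alert window below are hypothetical illustrations, not details of any university's actual system.

```python
from datetime import date, timedelta

# Hypothetical technology control plan records; field names are illustrative.
plans = [
    {"project": "Sensor research", "expires": date(2020, 3, 1)},
    {"project": "Materials study", "expires": date(2021, 6, 15)},
]

def expiring_soon(plans, today, window_days=90):
    """Return the plans whose expiration date falls within the alert window."""
    cutoff = today + timedelta(days=window_days)
    return [p for p in plans if p["expires"] <= cutoff]

# An export control officer checking in mid-January 2020 would be alerted
# only to the plan expiring in March.
alerts = expiring_soon(plans, today=date(2020, 1, 15))
```

A real recordkeeping system would of course pull these records from a database and notify the officer automatically; the sketch only shows the date-window comparison at the core of such an alert.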
Some Universities We Visited Have Gaps in Their Export Compliance Policies and Practices, with Most Gaps Falling into Four Areas

Some of the universities we visited have weaknesses in their export compliance programs, particularly in the following four areas:

Risk assessment: Four of the nine universities we visited do not currently conduct risk assessments of their export compliance programs, which may limit their ability to identify potential risks or build safeguards into their programs to address them. Three of these four universities are in the lowest tier for annual research and development expenditures.

Training: Two of the nine universities we visited do not provide any formal training for researchers and other officials involved in implementing export control regulations. However, an official from one of the universities said that the university provides access to online export control-related trainings developed by a for-profit entity. The export control officer at the other university said that although the university does not conduct formal training, he conducts frequent outreach and provides materials to increase university officials’ awareness of export control regulations.

Internal audits: Four of the nine universities we visited either partially conducted, or did not conduct, internal audits of their export compliance programs. The three universities that partially conducted internal audits have an export control officer who periodically reviews some internal processes but did not have a university audit group outside of the export control office that had reviewed the export compliance program. However, officials from two of these universities stated that their audit office plans to conduct an audit of the export compliance program soon.

Export compliance manual: Four of the nine universities we visited have not developed an export compliance manual. 
According to DDTC and BIS guidelines, exporters are encouraged to develop a manual to document export control-related roles and responsibilities of various offices and officials. The manuals should also describe export control procedures, development of technology control plans for export-controlled work, training requirements, and processes for reporting potential violations, among other topics.

Conclusions

Research conducted by U.S. universities and supported by visiting foreign students and scholars makes critical contributions to U.S. national security and economic interests. However, the relative openness of the university environment also presents a vulnerability that can be exploited by foreign adversaries. State’s DDTC and Commerce’s BIS administer systems of export controls to minimize these vulnerabilities while allowing legitimate business to occur, and the agencies provide guidance and conduct outreach to facilitate universities’ compliance with these controls. While DDTC and BIS provide some guidance and conduct outreach to universities, university officials told us that this guidance does not adequately address university-specific issues. The universities we visited primarily rely instead on guidance and training provided by other entities, which may not always facilitate compliance with the export control regulations as DDTC and BIS intended. We found that the nine universities we visited have generally developed export compliance policies and practices that align with agency guidance, but some of the universities’ compliance efforts have gaps. Improved guidance and outreach based on feedback from university stakeholders could further strengthen universities’ efforts to identify and protect export-controlled items from unauthorized transfers or releases to foreign students and scholars. This is especially important in light of continued reports of foreign entities’ exploitation of university research. 
Moreover, DDTC’s export compliance guidelines do not include information concerning risk assessments, a key element for determining whether an entity’s processes address current threats. Four of the nine universities we visited did not conduct risk assessments. Including information about risk assessments in DDTC’s written guidelines regarding the elements of an effective export compliance program would enable DDTC to remind universities and other exporters that conducting risk assessments is a beneficial practice. If exporters, including universities, do not conduct periodic risk assessments, they may be unaware of new threats and, consequently, may not take appropriate measures to protect export-controlled items. Furthermore, universities reported challenges working with DOD because of DOD officials’ inconsistent interpretation of export control regulations, including how to assess whether a university is engaging in fundamental research. DOD officials acknowledged this challenge, but DOD has not taken sufficient action to educate its personnel on the regulations. Without additional action, DOD may continue contributing to confusion and contract delays that hinder legitimate research.

Recommendations for Executive Action

We are making four recommendations, including two to State, one to Commerce, and one to DOD. Specifically:

The Secretary of State should ensure that the Deputy Assistant Secretary for Defense Trade Controls, in consultation with university representatives, provides additional or revises existing guidance and outreach to address university-specific export control issues to further support universities’ understanding and compliance with the International Traffic in Arms Regulations. 
(Recommendation 1)

The Secretary of Commerce should ensure that the Under Secretary for Industry and Security, in consultation with university representatives, provides additional or revises existing guidance and outreach to address university-specific export control issues to further support universities’ understanding and compliance with the Export Administration Regulations. (Recommendation 2)

The Secretary of State should ensure that the Deputy Assistant Secretary for Defense Trade Controls revises existing export compliance guidelines to include information concerning periodic risk assessments to remind exporters that it is beneficial to periodically identify, analyze, and respond to new risks as part of an effective International Traffic in Arms Regulations compliance program. (Recommendation 3)

The Secretary of Defense should ensure that the Under Secretary of Defense for Research and Engineering takes steps to ensure that its program officers and contracting officers are interpreting export controls consistent with regulations and guidance and consistently determining whether university research constitutes fundamental research. (Recommendation 4)

Agency Comments and Our Evaluation

We provided a draft of this report to Commerce, DHS, DOD, FBI, State, and the White House Office of Science and Technology Policy for comment. In their comments, reproduced in appendixes V and VI, State and DOD concurred with the recommendations directed to them. State also provided information about the actions it plans to take to address recommendations 1 and 3. With respect to recommendation 1, State noted that it is already expanding its outreach to university representatives and planning to issue additional guidance to further support universities’ understanding of the ITAR. With respect to recommendation 3, State noted that it plans to revise existing export compliance guidelines to include information concerning periodic risk assessments. 
DOD also provided information about actions it plans to take to address recommendation 4. Specifically, DOD stated that it will develop new guidance for DOD personnel to clarify the process for identifying fundamental research, funding contracts containing fundamental research, and monitoring those contracts to ensure that they are performed in compliance with export control regulations and fundamental research policies. DOD also stated that it plans to work with State and Commerce to ensure that the new guidance is consistent with the ITAR and the EAR, respectively. Commerce concurred with recommendation 2, but it did not provide a comment letter in time for publication in the report. DHS, FBI, and the White House Office of Science and Technology Policy informed us that they had no comments. Commerce, DOD, and State provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees; the Secretaries of Commerce, Defense, and State; the Acting Secretary of Homeland Security; the Attorney General of the United States; the White House Office of Science and Technology Policy; and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8612 or gianopoulosk@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VII.

Appendix I: Objectives, Scope, and Methodology

Our report examines (1) the extent to which the Departments of State (State) and Commerce (Commerce) have provided guidance and outreach that supports U.S. universities’ understanding of and compliance with both agencies’ export control regulations, (2) export control-related challenges that U.S. 
universities face while working with or obtaining guidance from other federal agencies, and (3) the extent to which export compliance policies and practices developed by U.S. universities align with State’s and Commerce’s export compliance guidelines. In addition to the methods discussed below, we reviewed government reports concerning (1) previously identified gaps in the U.S. export control system and (2) the threat that some foreign persons pose to U.S. universities to provide context for all three objectives, and reviewed relevant federal laws and regulations to address all three objectives. We also attended a conference in March 2019 hosted by member universities of the Association of University Export Control Officers to better understand how universities administer export control regulations and those aspects of the regulations most relevant to universities. We used the information we collected during the conference to inform our planning for our site visits. Federal Data To provide context for all three objectives, we examined federal data concerning (1) the number of foreign students and scholars studying or working at U.S. universities, (2) federal agencies’ research and development funding provided to universities, and (3) U.S. universities’ export license applications. We examined data identifying the country of citizenship for foreign students and scholars studying or working at U.S. universities from 2013 through 2018. We received the foreign student data from the Department of Homeland Security (DHS), which pulled data from its Student and Exchange Visitor Information System. We used these data to identify the top 10 countries sending foreign students to U.S. universities in 2018. DHS also provided data identifying foreign scholars working at U.S. universities based on I-129 filings. The I-129 form is typically filed by a U.S.
employer on behalf of a nonimmigrant worker to come to the United States to temporarily perform services or labor or to receive training. We used these data to identify the top 10 countries sending foreign scholars to U.S. universities in 2018. We utilized data collected by the National Science Foundation to determine the amount of research and development funding U.S. universities received from federal agencies in fiscal year 2017. The National Science Foundation collects funding information from federal agencies through its Survey of Federal Funds for Research and Development. We downloaded the data from the agency’s website and analyzed the data to determine how much funding selected federal agencies and the federal government as a whole provided to universities and university-administered Federally Funded Research and Development Centers. Finally, we analyzed State and Commerce data for export license applications received in calendar years 2014 through 2018 to identify trends in U.S. university export license applications and determine the percentage of export license applications from U.S. universities as a share of all export license applications. For both data sets, we reviewed each applicant to verify whether it was a U.S. university, because both agencies provided some data that included license applications submitted by entities that are not U.S. universities, such as associations or foreign universities. We then analyzed the data to determine trends in application results, identify the top 10 destination countries for approved U.S. university export license applications, and identify the top five categories of export-controlled items for export license applications submitted by U.S. universities. We determined that all of these data sources were sufficiently reliable for providing context for our report. 
Interviews and Reviews of Relevant Documents To address our first objective, we interviewed relevant State and Commerce officials from the Directorate of Defense Trade Controls and Bureau of Industry and Security and reviewed the guidance and outreach materials these agencies developed related to export controls. We also analyzed information regarding their outreach efforts for fiscal year 2019 to identify the number of university-specific outreach events. In addition, we attended (1) the March 2019 Association of University Export Control Officers conference, at which both State and Commerce officials presented to university officials, and (2) Commerce’s annual conference on export controls in Washington, D.C., at which State officials also presented. To address our second objective, we interviewed officials from several agencies that provide research funding to universities, including the Departments of Defense (DOD) and Energy, the National Institutes of Health, and the National Aeronautics and Space Administration, to learn how they work with universities that receive research funding. Additionally, we met with a number of security agencies, including DOD’s Defense Counterintelligence and Security Agency, DHS, and the Federal Bureau of Investigation, and reviewed reports, handouts, and outreach materials regarding either export control regulations or the threat environment to learn how these agencies educate U.S. universities. Finally, we met with the White House Office of Science and Technology Policy to discuss an interagency effort to address research security and other related issues. To identify university perspectives for all three of our objectives, we interviewed (1) representatives from four university associations and (2) officials at nine U.S. universities. 
Specifically, for our first and second objectives, we interviewed representatives from the Association of University Export Control Officers, Association of American Universities, and Council on Governmental Relations. The Association of University Export Control Officers is a member organization composed of over 270 export control and other compliance officers at U.S. academic institutions that provides a forum for the exchange of information regarding higher education and export, import, and trade sanctions policies. The Association of American Universities represents 65 research universities and seeks to shape policy for higher education, science, and innovation. According to a representative, the association’s membership is composed of university presidents and chancellors. The Council on Governmental Relations provides information to over 185 member universities regarding research administration and compliance, financial oversight, and intellectual property. The association’s membership is mainly composed of Vice Presidents for Research and Directors of Sponsored Research, according to a representative. For our second objective, we also interviewed a representative from the Academic Security and Counter Exploitation Program, whose executive committee includes representatives from 11 universities and university systems. This university-led association is focused on providing a forum within academia for discussions concerning the protection of intellectual property, controlled information, key personnel, and critical technologies at U.S. universities conducting research relevant to national security. For all three of our objectives, we interviewed officials at nine U.S. universities. See below for our selection methodology. Site Visits To inform all three of our objectives, we conducted site visits to nine U.S. universities to speak with various university officials. We selected a non-generalizable sample of nine U.S.
universities on the basis of a number of factors, including total research and development expenditures, number of graduate students, research funding received from certain federal agencies, and geographic dispersion. To identify a sample of U.S. research universities, we first examined U.S. university research and development expenditures data collected by the National Science Foundation for the 2013 through 2017 period. The National Science Foundation collects these data from universities through its annual Higher Education Research and Development Survey, and we downloaded the data from the agency’s website. We then calculated the average annual research and development expenditures for each university on this list for this period. We limited our scope to universities with average annual total research and development expenditures of over $15 million. This resulted in a total sample size of 292 U.S. universities. To assess the reliability of the data, we reviewed related documentation on the National Science Foundation’s web page regarding the Higher Education Research and Development Survey and dataset. We determined these data to be sufficiently reliable for the purposes of our report. We then reviewed a number of other factors for each of these universities. First, we categorized each of the 292 universities in our sample as public or private. We then identified the number of full-time graduate students for each university on the basis of results from the National Science Foundation’s annual Survey of Graduate Students and Postdoctorates in Science and Engineering (2016), because federal officials told us that graduate students were more likely to conduct research involving items subject to export control regulations than undergraduate students. We also reviewed universities’ security clearance level and membership in a number of associations to identify those universities that may be more aware of research security-related issues.
Finally, we downloaded data from the Federal Procurement Data System to identify the total amount of federal contracts for research and development each university in our sample had received from four main funding agencies—DOD, the Department of Energy, the National Institutes of Health, and the National Aeronautics and Space Administration. These four agencies represent four of the five major funding agencies for university research and development in fiscal year 2017. In addition, they represent the four agencies that we determined, in consultation with GAO stakeholders and State and Commerce officials, are most likely to provide funding for research involving items that may be subject to export control regulations. We grouped the universities in our sample into six geographic regions and initially selected 35 universities across these six regions that represented a cross-section of universities, on the basis of the factors discussed above. Ultimately, we selected nine universities for site visits from four of these regions on the basis of university officials’ availability and scheduling considerations. While we sought to include a range of university experiences regarding export control compliance in our non- generalizable sample, the university officials’ views stated in this report do not represent the entirety of the U.S. academic community. During our site visits, we conducted semi-structured interviews with about 80 university officials involved in export compliance on the main campus of nine universities, including officials in the following relevant positions: vice presidents for research, export compliance officers, facility security officers, and officials charged with reviewing grants and contracts, among others. 
During these interviews, we asked officials about the export control-related policies and practices their university had developed; their roles in implementing those practices; their perspectives concerning guidance and threat-related information from federal agencies; and any challenges they face in complying with export control regulations, among other topics. We also conducted seven focus groups with 44 faculty in Science, Technology, Engineering and Mathematics (STEM) fields. However, we were not able to meet with all of the same types of officials at each university we visited. Assessment of University Export Compliance Policies and Practices against State and Commerce Guidelines To address our third objective, we assessed university officials’ responses concerning export compliance policies and practices against a set of eight elements of an effective export compliance program. We reviewed State’s and Commerce’s guidelines to identify a list of eight common elements that the agencies classified as critical for an effective compliance program. We then assessed the responses of university officials from the nine universities we visited against these eight elements. Within some of the elements, we identified sub-elements for assessing university policies and practices. For example, within the element for management commitment and organizational structure, we identified five sub-elements against which we reviewed university officials’ responses. For each element, we developed a scale for determining whether each university’s export compliance policies and practices fully aligned, partially aligned, or did not align with that element. 
For example, for the management commitment and organizational structure element, we defined the extent to which each university’s policies and practices aligned with this element as (1) “fully aligned” if policies and practices were in place for at least four out of five sub-elements, (2) “partially aligned” if they were in place for two or three out of five sub-elements, and (3) “not aligned” if they were in place for one or zero of five sub-elements. We conducted this performance audit from February 2019 to May 2020 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Analysis of Export License Application Data for U.S. Universities Although U.S. universities generally promote an open learning environment that is focused on the free exchange of information through fundamental research, some U.S. universities conduct research involving export-controlled items and have applied for export licenses for deemed exports (releases within the United States to foreign persons) and exports of tangible items out of the United States. The Departments of State (State) and Commerce (Commerce) both control the export of items within their respective jurisdictions by requiring a license or other authorization prior to the export of an item. Within State, the Directorate for Defense Trade Controls (DDTC) is responsible for implementing export controls. Similarly, within Commerce, the Bureau of Industry and Security (BIS) is responsible for implementing export controls. State’s DDTC received 597 license applications from U.S. universities in calendar years 2014 through 2018.
DDTC provides one of four decisions for each license application—approved, approved with provisos, denied, or returned without action. DDTC approved roughly 79 percent of license applications it received from U.S. universities during this period. Commerce’s BIS reviewed 680 license applications from U.S. universities during this same time period. BIS provides one of three decisions for each license application—approved, denied, or returned without action. BIS approved 74 percent of these license applications. DDTC and BIS denied a small number of license applications submitted by U.S. universities in calendar years 2014 through 2018. Specifically, DDTC denied five applications for exports to Mexico, Sri Lanka, and the United Kingdom, as well as one application involving various destination countries. BIS denied eight applications for exports to China, Iran, and Russia during this same period. See figure 4 for more information regarding the status of U.S. university export license applications submitted to DDTC and BIS in calendar years 2014 through 2018. In calendar years 2014 through 2018, approximately 70 percent of the license applications submitted by U.S. universities that DDTC approved were for exports (including tangible exports and deemed exports) to 10 destination countries or multiple countries. This total included applications that involved various destination countries, which on their own represented 26 percent of total approved applications during this period (see table 4). Similarly, 57 percent of the license applications submitted by U.S. universities that BIS approved in calendar years 2014 through 2018 were for exports (including tangible exports and deemed exports) to 10 countries (see table 5). The top five U.S. Munitions List (USML) categories for which U.S. universities applied for export licenses from DDTC accounted for 77 percent of all applications for calendar years 2014 through 2018.
These include license applications for exports controlled under USML categories related to spacecraft, night vision, and missiles (see table 6). The top five categories for which U.S. universities applied for export licenses from BIS accounted for 85 percent of all license applications for calendar years 2014 through 2018. These include license applications for exports specified on the Commerce Control List (CCL) under categories related to chemicals, aerospace, and sensors and lasers, as well as the export of items designated as EAR99 (see table 7). Appendix III: Assessment of University Export Compliance Policies and Practices against Agency Guidelines The Departments of State (State) and Commerce (Commerce) have each developed a set of export compliance guidelines (guidelines), which agency officials identified as key sources of written guidance for supporting exporters’ compliance with the agency’s export control regulations. Both sets of guidelines include similar elements that the agencies have identified as being critical for an effective export compliance program. We reviewed both agencies’ guidelines and developed one set of eight elements of an effective export compliance program, which we then used to assess universities’ export control compliance practices. The eight sections below include descriptions of each element. We selected a non-generalizable sample of nine U.S. universities for site visits on the basis of a number of factors, including total research and development expenditures, number of graduate students, research funding received from certain federal agencies, and geographic dispersion. To learn more about our methodology for selecting universities for site visits, see appendix I. We visited these nine universities to learn about the export control policies and practices that they had developed. 
During our site visits, we conducted semi-structured interviews with about 80 university officials involved in export compliance, including officials in the following relevant positions: vice presidents for research, export compliance officers, facility security officers, and officials charged with reviewing grants and contracts, among others. We also conducted focus groups with 44 faculty in Science, Technology, Engineering and Mathematics (STEM) fields at seven of the nine universities we visited. During our university site visits, we asked officials about the export control-related policies and practices their universities had developed; their roles in implementing those practices; and the roles and responsibilities of others involved in implementing the university’s export compliance policies and practices, among other topics. We did not independently verify universities’ implementation of the export compliance policies and practices that university officials described during our site visits. We found that the nine universities we visited had generally developed export compliance policies and practices to safeguard export-controlled items that aligned with State and Commerce export compliance guidelines, but that some of the universities’ compliance efforts had weaknesses in certain areas (see fig. 5). In the following sections, we provide a (1) description of each element and (2) summary of the results of our assessment of each university’s policies and practices against each element. 
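The three-level alignment scale we applied to each element, as described in appendix I, can be sketched as a simple mapping. The following is a minimal, hypothetical illustration (the function name is ours); the thresholds follow the five-sub-element scale described for the management commitment and organizational structure element:

```python
def alignment_category(sub_elements_met):
    """Map the number of sub-elements with policies and practices in
    place to one of the report's three alignment categories.

    Thresholds assume an element with five sub-elements, as described
    in appendix I for management commitment and organizational
    structure: at least four met is "fully aligned," two or three is
    "partially aligned," and one or zero is "not aligned."
    """
    if sub_elements_met >= 4:
        return "fully aligned"
    if sub_elements_met >= 2:
        return "partially aligned"
    return "not aligned"
```

For example, under this scale a university with policies and practices in place for three of the five sub-elements would be categorized as partially aligned.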
Element 1—Management Commitment and Organizational Structure For this element, we assessed universities’ activities within five sub-elements: (1) public management support for the export compliance program, (2) management’s understanding of export control regulations, (3) whether the university had designated an export control officer, (4) sufficiency of resources and authority to conduct export compliance activities, and (5) whether the university had created a clear organizational structure identifying individuals responsible for compliance. See figure 6 for the results of our assessment. Management commitment and organizational structure Entities should have public management support for their compliance program, sufficient resources to conduct compliance activities, and a clear organizational structure identifying individuals responsible for compliance. All nine of the universities we visited have developed policies and practices that fully or partially align with this element concerning management commitment and organizational structure. Specifically, seven universities had practices that fully aligned and two had practices that partially aligned with this element. Below, we provide additional detail on universities’ activities within the following five sub-elements: Provides public management support for export compliance program. Seven of the nine universities we visited have issued public statements from university management supporting the export compliance program. These statements briefly describe export control regulations, discuss the importance of the universities’ compliance with export control regulations, and emphasize university management’s commitment to compliance efforts. In addition, university researchers who participated in our focus groups said that their universities had created an environment in which they felt comfortable reaching out to university staff with compliance-related questions.
For example, participants in one of the focus groups told us that compliance officials are not trying to find violations, but are instead focused on building stronger compliance programs and stronger relationships with faculty. Understands export control regulations. Export control officers at all nine of the universities we visited said that university management understands and is knowledgeable about export control regulations and the implications of these regulations on the university’s research and development activities. For example, one export control officer stated that increasing awareness among the administrators, faculty, and staff has taken time, but that the administration now has a good knowledge of export control requirements following the outreach and training that the export control office provided over the last few years. Designates an export control officer position. Eight of the nine universities we visited have export control officers, and of those eight, five have had an export control officer position for over 10 years. The only university we visited that did not have an export control officer position had previously had one. Among the universities we visited, this university had the lowest average research and development expenditures from 2013 through 2017—less than $30 million. Provides sufficient resources and authority to conduct export compliance activities. Officials at eight of the nine universities we visited stated that their university had sufficient resources and that relevant officials had adequate authority to conduct export compliance activities. Officials at one university said that they did not have adequate authority to conduct compliance activities, but that this condition might be changing because the export control officers now report directly to the Vice President of Research, giving them greater access to university management. Creates a clear organizational structure for export compliance.
Officials at seven of the nine universities we visited identified individuals who are involved in export control compliance, including researchers and officials working in procurement, shipping, and contracting, among other things. Five of these seven universities also have export compliance manuals that specifically describe various officials’ export compliance roles and responsibilities. Element 2—Risk Assessment For this element, we assessed the extent to which the university conducted risk assessments of its export compliance program. See figure 7 for the results of our assessment. University Policies and Practices Related to Element 2—Risk Assessment Five of the nine universities we visited have developed policies and practices that fully align with this element concerning risk assessments, while the other four have not developed such policies and practices. Below, we provide additional detail on universities’ risk assessment activities. Of the five universities that told us they conduct risk assessments, three stated that the export control officers periodically or annually conduct internal risk assessments of their export compliance efforts, while the other two described university groups that conduct periodic or annual, university-wide risk assessments that include an assessment of the export compliance program. For example, one university’s export control officer said that her office periodically reviews the university’s export compliance policies and practices to determine whether any gaps exist within the program. She also recently started reviewing her university’s export compliance policies and practices against those of other universities to determine whether other universities had developed any export compliance practices that would be appropriate for her university to emulate. 
She found, for example, that other universities had implemented a centralized loaner laptop program for researchers traveling abroad to minimize the risk of the theft of sensitive data from personal laptops, and said she hopes to implement such a program at her university. Officials at a university that periodically conducts university-wide risk assessments said they had conducted two such risk assessments since 2015 and were conducting a third assessment during our visit. During one assessment, reviewers recommended that the university increase export control training and staffing, which the export control office is working to address. Another university that conducts annual risk assessments has a research oversight committee that is made up of many subcommittees, including one for export controls. Each subcommittee conducts an annual risk assessment for its compliance area and reports any recommendations for optimizing compliance program effectiveness to the vice president for research. Element 3—Export Authorization and Tracking Export-Controlled Items For this element, we assessed universities’ activities within seven sub-elements: whether the university (1) had processes in place to identify research involving export-controlled items, (2) had processes in place to monitor research to determine whether a license might be required at a later time, (3) tracked any export-controlled items being used or developed, (4) had developed any policies or practices for safeguarding export-controlled items, (5) used technology control plans to document and safeguard export-controlled items, (6) screened and monitored foreign visitors, and (7) screened all foreign parties associated with research projects prior to any export activities. See figure 8 for the results of our assessment. Export authorization and tracking export-controlled items Entities should develop processes to (1) ensure the organization makes correct export decisions, including identifying when U.S.
government authorization is required prior to exporting; (2) track and protect any export-controlled items being used or developed by the organization; and (3) screen all parties associated with an export transaction against the U.S. proscribed/restricted parties lists prior to exporting. All but one of the nine universities we visited have developed policies and practices that fully align with this element concerning export authorization and tracking export-controlled items. Below, we provide additional detail on universities’ activities within the seven sub-elements, which fall under three process categories: making export decisions, tracking and safeguarding export-controlled items, and screening foreign parties. Under the first category, making export decisions, we assessed universities’ activities in the following two areas: Identifies research involving export-controlled items: Officials at all nine of the universities we visited stated that they had, to varying degrees, developed policies and practices for identifying research projects that might involve items that are subject to export control regulations. Policies and practices for identifying research involving export-controlled items. All nine of the universities we visited require the lead researcher on a project to submit research proposals to an office charged with reviewing proposals and awards for grants and contracts, which we refer to as the Office of Grants and Contracts. The office also reviews the terms and conditions for awards—contracts, grants, or cooperative agreements—to ensure there is nothing in the paperwork that necessitates additional negotiation or that raises a concern related to export controls. When reviewing research proposals or awards, the Office of Grants and Contracts will flag those proposals and awards that may involve items subject to export control regulations for further review, either by the export control officer or another authorized university entity.
Tools developed to support officials’ identification of research involving export-controlled items. The universities we visited have developed a variety of tools to support officials’ export control reviews of proposals and awards. For example, seven of the nine universities we visited require the lead researcher on a project to complete a questionnaire that includes export control- related questions when submitting research proposals for review. This questionnaire identifies research proposals that may be subject to export control regulations earlier in the process. In addition, at least four of the universities’ export control officers have developed flowcharts or checklists to help the Office of Grants and Contracts understand when to flag research proposals or awards for further review by the export control officer. In addition, seven of the nine universities we visited require that researchers obtain university approval to conduct research involving export-controlled items. For example, one university’s export control officer said that flagged proposals are sent to an export control review committee for review and approval. The committee reviews the risk associated with each of these research projects and determines whether the university is willing to accept the export control-related risks for that project. Another university requires the lead researcher to obtain approval from the university’s board before accepting an award for research involving export-controlled items. Monitors research to determine whether a license is required after the project starts. Officials at five of the nine universities described practices they had developed to monitor research projects in order to determine whether an export license is required after a research project is underway. 
For example, one university’s export control officer said her department monitors all research teams that intend to develop hardware or technology during their research because the resulting hardware or technology could be subject to export control regulations. These projects are flagged in the electronic system used to track research projects and the export control officer checks in with the lead researcher periodically to determine the status of the research. An official at another university explained that the university conducts periodic audits of timecards to see if any foreign persons have started charging time to ongoing projects involving export-controlled items. In contrast, one official at another university stated that the university relies on the lead researcher to alert the compliance office of any changes to the research team or research objectives, which may then require a license before continuing research. This official suggested that the lead researchers are better positioned than the export control officer to identify changes to the research that might necessitate obtaining an export license. Tracking and Safeguarding Export-Controlled Items Seven of the nine universities we visited used a variety of mechanisms to track and safeguard export-controlled items, including manual locks, electronic access systems, and other physical security systems, as well as separate computer networks to protect data subject to export control regulations. Under this category, we assessed universities’ activities in the following three areas: Tracks export-controlled items used at the university. Officials at seven of the nine universities we visited said they had developed mechanisms to track any export-controlled items being used or developed by the university. These mechanisms range from maintaining paper files to using electronic systems to track such information. 
For example, some of the universities maintain physical copies of documents they use to identify and track export-controlled items on campus. Other universities have developed electronic databases to track this information. One university maintains all records related to research projects in one electronic system, including technology control plans. Electronic databases and systems allow the export control officer to quickly identify the on-campus location of export-controlled items and who is working with these items. Safeguards export-controlled items. Eight of the nine universities we visited employ various security mechanisms to protect export-controlled items, including physical and information technology security mechanisms. For example, officials at seven of the nine universities we visited said their university protects export-controlled items by limiting access to spaces where these items are housed with locks or access cards, depending on the space. Three of these universities also require researchers to store export-controlled items in a locked box or storage space within a locked room when the items are not in use. Some universities also use signs to indicate which spaces are restricted; however, officials at one university said that they do not use signage to indicate restricted spaces because it would draw more attention to the space. Some university officials also described information technology security mechanisms in place to protect data that may be subject to export control regulations. For example, officials at two universities noted the use of isolated or separate networks for researchers working with such data to limit access to this data. Uses technology control plans to document and safeguard export-controlled items.
Officials at all nine of the universities we visited stated that researchers used export-controlled items on campus, and officials at eight of these universities said that their universities had developed and implemented technology control plans to safeguard such items. According to Commerce’s export compliance guidelines, organizations that possess or work with export-controlled items and either employ foreign persons or have frequent meetings with foreign persons should create a technology control plan. These plans should include a physical security plan, an information security plan, and training programs, among other components. According to the university officials we interviewed, the export control officer typically works with the lead researcher to develop the technology control plan. Six of the nine universities we visited require the lead researchers to sign the technology control plan to acknowledge that they understand their responsibilities for protecting the export- controlled items identified in the plan, and at least four of these universities require all the members of the research team to sign it as well. In addition, some of the universities we visited conduct annual audits of the technology control plans to ensure proper implementation. For example, an official at one of these universities explained that the university’s annual audit of the technology control plans verifies that security practices outlined in the plan are being followed by the research team and that only those researchers who signed the technology control plan have access to the export- controlled items. An official at another university said he reviews the human resources account information for projects involving export- controlled items annually to verify that only those individuals who have signed the technology control plan are working on those projects. 
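The annual audits described above, which verify that only researchers who signed the technology control plan have access to the export-controlled items, amount to a set comparison. A minimal sketch, with hypothetical names:

```python
def unauthorized_access(tcp_signers: set, access_list: set) -> set:
    """Return anyone with access to the controlled space or items
    who has not signed the technology control plan."""
    return access_list - tcp_signers

# Hypothetical project: badge access records vs. signatures on the plan.
signers = {"lead_researcher", "postdoc_a"}
badge_access = {"lead_researcher", "postdoc_a", "new_student"}
print(unauthorized_access(signers, badge_access))  # {'new_student'}
```

An auditor would resolve any names returned here by either collecting a signature or revoking access.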
Under this category, we assessed universities’ activities in the following two areas: Screens and monitors foreign visitors. All but one of the nine universities we visited screen and monitor foreign visitors to some extent. Specifically, four of these universities conduct restricted party screenings on all foreign visitors prior to their visit to verify that potential visitors are not on any U.S. government list of restricted or proscribed parties. The other four universities conduct restricted party screenings on some foreign visitors. Three of these four universities said that they do not have a formal process for reviewing foreign visitors and that the effort to invite and review visitors is decentralized. Some of the universities we visited also described how they monitor foreign visitors on campus. For example, officials at two universities said that the foreign visitors’ sponsor is responsible for monitoring their access. The export control officer at a third university told us that he briefs foreign persons visiting restricted spaces on the rules of their visit, including restrictions on camera usage. Screens foreign parties associated with research projects. All nine of the universities we visited use restricted party screening software, which searches several lists that U.S. agencies continually update to screen for restricted or denied parties. Universities and other exporters may be prohibited or restricted from doing business with any individuals or entities identified on one of these lists. Eight of the nine universities we visited screen all foreign individuals and entities associated with a research project using such software. Entities associated with a research project may include foreign researchers on the research team, foreign sponsors, or foreign collaborators, among others. Officials at the ninth university stated that they conduct ad hoc screening for research collaborations with foreign entities. 
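Restricted party screening software of the kind described above matches names against the U.S. government's screening lists, typically with fuzzy matching to catch spelling variants. A minimal sketch using Python's standard library; the list entries and similarity cutoff are illustrative assumptions, and real screening tools draw on the continually updated government lists:

```python
import difflib

# Hypothetical entries standing in for the U.S. government's restricted party lists.
RESTRICTED_PARTIES = ["Example Trading Co", "Denied Person A"]

def screen_party(name: str, cutoff: float = 0.8) -> list:
    """Return potential restricted-party matches for a name.
    An empty list means no match at the given similarity cutoff."""
    return difflib.get_close_matches(name, RESTRICTED_PARTIES, n=3, cutoff=cutoff)

print(screen_party("Example Trading Company"))  # ['Example Trading Co']
print(screen_party("Unrelated University"))     # []
```

A name that clears the automated screen poses no restriction, while any returned match would be reviewed by the export control officer before the university proceeds.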
Additionally, one of the universities has compiled a list of all the foreign entities the university works with and conducts weekly restricted party screenings of the foreign entities on this list. Although we focused our assessment on universities’ export compliance policies and practices in place to limit unauthorized deemed exports to foreign persons, officials at some of the universities we visited discussed their efforts to conduct restricted party screenings for other process areas, such as shipping, procurement, and gifts. We found that individuals or offices responsible for these processes at some universities manually screened entities. In one case, this was because the other offices did not have access to the restricted party screening software that the export control officer used. Element 4— Recordkeeping For this element, we assessed the extent to which the university had developed processes for maintaining relevant export control-related records. See figure 9 for the results of our assessment. University Policies and Practices Related to Element 4—Recordkeeping All nine of the universities we visited have developed policies and practices that fully align with this element concerning recordkeeping. Below, we provide additional detail on universities’ recordkeeping activities. At least five of the nine universities we visited maintain their export compliance-related records in an electronic database or other electronic system. For example, one university’s system tracks each research project from start to finish and enables officials to search for all export control-flagged research proposals and awards, technology control plans, and other documents. One of the officials also told us that the system will alert the export control officer to any technology control plans with an upcoming expiration date. 
Officials at another university explained that their system also enables them to track all the approved technology control plans to quickly identify who is working under a technology control plan on campus at any point in time. Five of the nine universities we visited have written export compliance program manuals, and all of those universities’ manuals include information concerning recordkeeping requirements. For example, four of the five manuals specifically note that export control-related files must be maintained for at least 5 years, and four identify the types of records that need to be maintained, including export reviews, contracts, licenses, technology control plans, and shipping documents, among others. Element 5—Training For this element, we assessed universities’ activities within two sub- elements: whether the university (1) provided export control-related training to all employees involved in exports and (2) required any individuals to complete mandatory export control-related training. See figure 10 for the results of our assessment. University Policies and Practices Related to Element 5—Training Seven of the nine universities we visited have developed policies and practices that fully align with this element concerning training, while the other two have not. Below, we provide additional detail on universities’ activities within the following two sub-elements: Provides export control-related training to all employees involved in exports. Seven of the nine universities we visited stated that they provide export control-related trainings to researchers and other officials involved in the implementation of export control regulations. The export control-related training available to various university officials at the universities we visited varies depending on officials’ level of interaction with export controls. 
For example, at least five of the universities’ export control officers we interviewed provide export control-related training tailored to the needs of staff whom the university relies on to identify requests for export-controlled items or research involving export-controlled items, including the procurement office and the Office of Grants and Contracts. One export control officer stated that he provides annual training to officials in the Office of Grants and Contracts and provides biannual training to officials in the procurement office. He noted that he spends the most time training officials responsible for reviewing grants and contracts because they are the “gate keepers” for all research proposals and research funding coming into the university. The two universities that do not provide export control-related training to all employees involved in exports do make some export control-related information available. An official from one of the universities said that the university provides access to online export control-related trainings developed by a for- profit entity. The export control officer at the other university said that although the university does not conduct formal training, he conducts frequent outreach and provides materials to increase university officials’ awareness of export control regulations. Conducts mandatory training for researchers conducting research involving export-controlled items. Seven of the nine universities we visited require researchers conducting research involving export-controlled items to complete training with the export control officer prior to beginning their project. Furthermore, researchers at four of these universities are required to complete additional periodic training to refresh their understanding of their compliance roles and responsibilities every 1 to 3 years. Most of the universities that conduct required export control training have varying systems in place to document attendance. 
For example, three of the nine universities we visited require attendees to sign a form certifying that they have completed the technology control plan training and understand their responsibilities. Element 6—Internal Audits For this element, we assessed the extent to which the university conducted periodic audits of its export control compliance program to assess its effectiveness and integrity. See figure 11 for the results of our assessment. University Policies and Practices Related to Element 6—Internal Audits Eight of the nine universities we visited have developed policies and practices that fully or partially align with this element concerning internal audits, while one of the universities’ policies and practices did not align with this element. Below, we provide additional detail on universities’ efforts to conduct periodic audits of their export control compliance programs to assess their effectiveness and integrity. Eight of the nine universities we visited conduct some type of internal audit to assess the export compliance program’s effectiveness. For example, five export control officers at these universities review all technology control plans annually. One official said her office conducts these annual reviews to ensure that researchers are properly implementing the technology control plans and to determine if the plans need to be updated to address any changes to the export control regulations. In addition, seven of the nine universities we visited have an internal audit group, and four of these audit groups had conducted an audit of the export compliance program within recent years. 
One university official explained that the audit group’s periodic review of the export compliance program once found that the project management system did not provide enough transparency, and on the basis of this finding, the export control officer was able to petition the university for additional funding to further improve the system in place to track all research projects. According to an official at another university, a quality assurance official at his university audits a sample of research awards each month. Every few months, this official identifies a mistake, such as a failure to screen a foreign party against the lists of restricted parties. When a mistake is identified, the export control officer then screens the foreign party and counsels the person who missed this step. These audits provide universities with an opportunity to identify any potential gaps and continually improve their programs. Element 7—Reporting and Addressing Violations For this element, we assessed the extent to which the university had developed clear procedures outlining the actions employees should take in the event that potential noncompliance is identified. See figure 12 for the results of our assessment. University Policies and Practices Related to Element 7—Reporting and Addressing Violations All nine of the universities we visited have developed policies and practices that fully align with this element concerning the reporting of violations. For example, officials at seven universities told us that they have a compliance hotline that people can use to report suspected violations. Two of these seven universities described additional actions they have taken to further educate their university community about the need to report potential export control violations by adding such information to flyers for the university compliance hotline and advertising this information online. 
Officials at three of the universities also discussed escalation procedures they have in place to investigate a potential export control violation. For example, one export control officer explained that he is responsible for investigating and reporting any violations. If he needs to initiate an investigation, he will select a team of university officials to inquire about the violation and determine whether a violation has occurred. Following the investigation, the Vice President for Research is responsible for determining whether the university needs to self-disclose a violation to the relevant federal regulatory agency. Five of the nine universities we visited had written export compliance program manuals, and all of those universities’ manuals included information concerning export control violations. For example, some of the manuals include a discussion about the legal and criminal penalties associated with export control violations and emphasize the importance of reporting any potential violations. In addition, two of the universities’ manuals describe the need to develop corrective action plans to prevent recurrence of any violations arising from systemic institutional practices or procedures. Three of the nine universities we visited had voluntarily disclosed export control violations. For example, one university disclosed information regarding a foreign person’s unauthorized access to ITAR-controlled technology because the lead researcher on the project and the procurement office did not know the technology was controlled. According to the export control officer at this university, her office is working with the procurement office to ensure that the future procurement of controlled technologies is flagged for review by the export control officer prior to ordering.
This updated procedure will enable the export control officer to work with the lead researcher to develop a technology control plan if the university agrees to support the procurement of such a technology. Element 8—Export Compliance Manual For this element, we assessed the extent to which each university documented export control compliance processes, roles and responsibilities, and other relevant information in a manual to help the university implement its compliance program. See figure 13 for the results of our assessment. University Policies and Practices Related to Element 8—Export Compliance Manual Five of the nine universities we visited have developed export compliance manuals, consistent with this element, while the other four have not. These manuals describe the export control-related roles and responsibilities of various offices and officials on campus, including the export control officer and university researchers, among others. In general, the manuals also describe a number of export control compliance procedures, including the initial review of research proposals, development of technology control plans for research involving export- controlled items, training requirements, and processes for investigating potential violations, among others. Four of the five universities developed manuals in 2015 or earlier, and one university developed a manual in 2018. Three of the universities that published manuals in or before 2015 have updated their manuals at least once, but one of these universities has not updated its manual since 2013. Appendix IV: Analysis of Export Compliance- Related Information on U.S. Universities’ Websites We reviewed the public websites of a statistically generalizable sample of 100 U.S. universities expending more than $15 million for research and development annually, on average, to determine the extent to which universities publicly share export control-related information with their campus community. 
Using research expenditure data collected by the National Science Foundation for 2013 through 2017, we identified 292 public and private U.S. universities that expended more than $15 million on research and development, on average, over a 5-year period. To obtain a diverse set of universities, we first created a top and bottom stratum based on total research and development expenditures: the top stratum included universities with expenditures above $250 million (85 universities), and the bottom stratum included universities with expenditures between $15 million and $250 million (207 universities). We then selected a stratified, random sample of 100 universities from this list: 55 from the bottom stratum (30 public and 25 private) and 45 from the top stratum (25 public and 20 private). We assessed the information on the selected universities’ websites against six of the eight elements of an effective export compliance program: 1. Management commitment and organizational structure 2. Export authorization and tracking export-controlled items 3. Recordkeeping 4. Training 5. Reporting and addressing violations 6. Export compliance manual. We did not review information related to risk assessments or internal audits on the selected universities’ websites because we did not expect universities to publicly publish this type of information. Management Commitment and Organizational Structure Of the 100 universities in our sample, 77 maintained a dedicated web page for export control-related information, and 79 provided contact information for the person or office responsible for complying with export control regulations on their website.
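The stratified sample described above can be reproduced in outline with a few lines of code. This is a sketch of the sampling design only, using placeholder university names; the stratum sizes (85 and 207) and sample allocation (45 and 55) come from the report:

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# Placeholder names standing in for the 292 identified universities.
top_stratum = [f"top_university_{i}" for i in range(85)]         # above $250 million
bottom_stratum = [f"bottom_university_{i}" for i in range(207)]  # $15 million to $250 million

# Draw each stratum's allocation independently, without replacement.
sample = random.sample(top_stratum, 45) + random.sample(bottom_stratum, 55)
print(len(sample))  # 100
print(sum(name.startswith("top_") for name in sample))  # 45
```

Drawing each stratum separately guarantees the planned representation from both expenditure groups, which a simple random draw of 100 from the full list would not.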
However, only about half of the universities’ websites identified an export control officer or similar official, and only 24 included a public statement from university management supporting the export compliance programs. See table 8 for additional results from our website analysis, including whether each website maintained a dedicated export control web page, identified an Export Control Officer or similar official, and described the export control roles and responsibilities of researchers. Under this element, entities should have public management support for their compliance program, sufficient resources to conduct compliance activities, and a clear organizational structure identifying individuals responsible for compliance. Export Authorization and Tracking Export-Controlled Items Under this element, entities should develop processes to (1) ensure the organization makes correct export decisions, including identifying when U.S. government authorization is required prior to exporting; (2) track and protect any export-controlled items being used or developed by the organization; and (3) screen all parties associated with an export transaction against the U.S. proscribed/restricted parties lists prior to exporting. A majority of the 100 universities’ websites included information about relevant export regulations and a definition of exports, and almost half provided additional resources or tools for researchers to better understand how or whether their research involves items subject to export control regulations; however, a limited number provided information about practices the university may employ to protect export-controlled items. For example, 74 of the 100 universities published information about the International Traffic in Arms Regulations (ITAR) and the Export Administration Regulations (EAR) on their websites.
About half of the universities also maintained a frequently asked questions section concerning export control regulations, and about half provided tools such as decision tree matrices to help researchers determine whether an export may require a license. However, less than a third of the universities’ websites included any information about technology control plans or guidance regarding foreign visitors, which are practices that universities may undertake to protect export-controlled items used in university research or other academic activities. For example, only 27 of the 100 universities’ websites contained explanations of when a technology control plan would be necessary. See table 9 for additional results from our website analysis. Recordkeeping Twenty of the 100 universities’ websites provided information regarding export compliance recordkeeping requirements. See table 10 for these results. Training About half of the universities’ websites provided information about export control trainings available online, developed by the university, associations, or for-profit organizations, among others. However, only 21 of the 100 universities’ websites provided information about how to request university-provided, in-person training regarding export compliance. See table 11 for additional results from our website analysis. Reporting and Addressing Violations Only about a quarter of the universities’ websites provided guidance about when to report potential violations, but about half of the universities’ websites provided information about the potential administrative or criminal penalties associated with export control violations. Under this element, entities should develop clear procedures outlining the actions employees should take in the event that potential noncompliance is identified, as well as processes for identifying and addressing the root cause of any noncompliant activity. See table 12 for additional results from our website analysis.
Export Compliance Manual Less than half of the universities in our sample published an export compliance manual on their website. See table 13 for these results. Appendix V: Comments from the Department of State Appendix VI: Comments from the Department of Defense Appendix VII: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments Kimberly Gianopoulos, (202) 512-8612 or gianopoulosk@gao.gov. In addition to the contact named above, Juan Gobel (Assistant Director), Drew Lindsey (Assistant Director), Amanda Bartine (Analyst-in-Charge), Taylor Bright, Debbie Chung, Neil Doherty, Tina Huang, Kathryn Long, Sulayman Njie, and Jina Yu made key contributions to this report. Ashley Alley and Justin Fisher provided technical assistance.
Why GAO Did This Study Over 1.2 million foreign students studied at U.S. universities in 2018 (see fig.). Although foreign students and scholars contribute to U.S. research, there is a risk that they will “export” sensitive knowledge they gain to their home countries. To mitigate this risk, the U.S. government implements export controls. GAO was asked to review agency guidance and universities' security practices. This report examines (1) the extent to which State and Commerce have provided guidance and outreach that supports U.S. universities' understanding of export control regulations; (2) challenges U.S. universities face working with other federal agencies, such as DOD; and (3) the extent to which universities' export compliance practices align with State and Commerce guidelines. GAO reviewed related laws, regulations, and guidance, and interviewed officials from relevant federal agencies and four university associations. GAO also visited nine universities—selected, in part, on the basis of research expenditures and geography—and assessed their compliance practices against agency guidelines. What GAO Found The Departments of State (State) and Commerce (Commerce) have each provided guidance and outreach to support exporters' understanding of and compliance with their separate export control regulations. Exporters, including universities, are subject to these regulations if they ship export-controlled items overseas or if they share such items, including technology or source code, with foreign persons in the United States. University and association officials raised concerns that State and Commerce guidance and outreach do not adequately address export compliance issues that are more common to universities than to industry, such as fundamental research—i.e., research that is ordinarily published and not subject to export control regulations.
Without additional guidance and outreach that addresses such issues, universities may not have the information they need to adequately comply with these regulations and properly safeguard export-controlled items. Officials from selected universities and university associations identified three export control-related challenges in working with other federal agencies. For example, university and association officials asserted that Department of Defense (DOD) officials misunderstand the term fundamental research, which may limit universities' ability to conduct research for DOD. DOD acknowledged that some officials have inconsistently interpreted the regulations and that it has not yet fully addressed this challenge. Additionally, university and association officials expressed concerns that threat briefings and other guidance that the Federal Bureau of Investigation (FBI) and Department of Homeland Security provide are not helpful because, for example, they do not contain unclassified information that can be shared widely. To address these concerns, the FBI partnered with a university association to produce a series of unclassified “awareness-raising” materials for university audiences, among other efforts. Seven of the nine universities GAO visited have export compliance policies and practices that generally align with State's and Commerce's export compliance guidelines. For example, most have demonstrated a strong management commitment to export compliance and have robust practices for tracking export-controlled items, recordkeeping, and reporting potential violations. However, GAO identified gaps in some universities' practices in four areas—risk assessments, training, internal audits, and export compliance manuals. What GAO Recommends GAO is making four recommendations, including that State and Commerce should improve their export control guidance and outreach, which may help address gaps in university export control compliance practices. 
GAO also recommends that DOD take steps to ensure its officials consistently interpret export control regulations. State, Commerce, and DOD concurred with the recommendations.
Background Historically, unmanned aircraft have been known by many names, including “drones,” remotely piloted vehicles, unmanned aerial vehicles, and models. Today, the term UAS is generally used to emphasize the fact that separate system components are required to support airborne operations without a pilot onboard the aircraft. UAS Users and Uses Recreational users have flown UAS—largely model aircraft—for years with minimal FAA interaction. Increasingly though, more technically advanced UAS are being used in a variety of ways by different types of users. Certain industries are interested in expanding the allowable uses for UAS, such as expanding use of UAS in controlled airspace. Expanding allowable uses would likely require more FAA involvement and regulatory action. UAS operators generally fall into the following categories: Recreational users operate UAS primarily for recreational or educational purposes, such as operating UAS to take photographs or video for personal use. To operate UAS recreationally, a user must obtain a certificate of registration from the FAA. This certificate constitutes registration for all unmanned aircraft owned by the individual and operated recreationally. Commercial users operate UAS in connection with a business. Examples of commercial uses include: selling photos or videos taken from UAS (such as wedding or real estate photography); conducting mapping or land surveys; or conducting factory or equipment inspections. Commercial users must register each UAS used for commercial purposes with the FAA. Public safety/government users operate UAS in a variety of ways to support key activities of their mission. For example, firefighters use UAS to help put out fires and the Department of the Interior uses UAS to survey national parks. Public safety and government users must either register each UAS or receive an FAA certificate of authorization to function as a public aircraft operator.
FAA Roles and Responsibilities Related to UAS FAA is the primary agency responsible for facilitating the safe integration of UAS into the national airspace. All airspace is regulated, and FAA’s rules regarding access to the airspace apply to the entire national- airspace system, from the ground up, though there are different rules for different types of airspace. As UAS increasingly enter and operate within the national airspace system—a complex network of airports, aircraft, air- traffic-control facilities, employees, and pilots—it is FAA’s responsibility to plan for and oversee the integration of UAS into both low-altitude airspace (below 400 feet) and, eventually, higher altitude airspace that will be shared with other aircraft. According to FAA’s Fiscal Year 2019 Implementation Plan, the ultimate goal of integration is for UAS to operate harmoniously with manned aircraft, in the same airspace, while ensuring the safety of people and property both in the air and on the ground. Within FAA’s Office of Aviation Safety, the UAS Integration Office is responsible for facilitating the safe, efficient, and timely integration of UAS into the national airspace system; aligning UAS international activities with foreign civil-aviation authorities; supporting standards and policy development related to UAS projects; and providing strategic planning and support for continuous UAS research and development. The Office was established in fiscal year 2017 and, in fiscal year 2018, had 39 full- time equivalent employees. Other offices within FAA coordinate with the UAS Integration Office on UAS-related activities. For example, FAA’s Office of Rulemaking (also under the Office of Aviation Safety) oversees the rulemaking process, including issuing notices of proposed rulemaking and administering the public comment process, in addition to providing general rule information on published regulatory documents. 
Other offices are also involved in the development of proposed rules, certification of aircraft, compliance and enforcement, and other activities related to UAS integration according to their subject-matter expertise. For example, the Flight Standards Service is responsible for setting standards for unmanned aircraft, and the Aircraft Certification Service is responsible for certifying new UAS designs and approving UAS for advanced operations. Additionally, the Air Traffic Organization is responsible for providing data and information to facilitate the operation of approved UAS near airports. Figure 1 shows FAA offices that are involved in UAS integration efforts.

FAA Funding Structure

FAA’s activities are primarily funded through revenues to the Airport and Airway Trust Fund (Trust Fund), which is funded through a variety of excise taxes paid by users of manned aircraft as well as interest revenue accrued on the balance of the Trust Fund. These excise taxes are levied on the purchase of airline tickets and aviation fuel, as well as the shipment of cargo, though, as we have previously found, they are generally not closely linked to FAA’s costs for the services received. Trust Fund revenues are available to FAA subject to appropriations. In addition to these revenues, a portion of FAA’s funding is often appropriated from general revenues. The Trust Fund provides funding for FAA’s three capital accounts:

1. the Facilities and Equipment account, which funds technological improvements to the air-traffic-control system, including the modernization of the air-traffic-control system called the Next Generation Air Transportation System (NextGen);

2. the Research, Engineering, and Development account, which funds research on issues related to aviation safety, mobility, and NextGen technologies; and

3. the Airport Improvement Program, which provides grants for airport planning and development.
The Trust Fund also provides much of the funding for FAA’s Operations account, which funds the operation of the air traffic control system and the UAS Integration Office, among other activities.

User Fees

In general, a user fee is related to some voluntary transaction or request for government goods or services above and beyond what is normally available to the public, such as entrance into national parks, a request that a public agency permit an applicant to practice law or run a broadcast station, or the purchase of maps or other government publications. User fees are normally related to the cost of the goods or services provided. The design of user fees can vary widely. We have previously reported that the way user fees are set and collected can affect the extent to which the goals of implementing user fees—equity, efficiency, revenue adequacy, and minimal administrative burden—are achieved. In 2017 the Drone Advisory Committee (DAC)—an industry stakeholder group established by FAA to provide advice on key UAS integration issues—created Task Group 3 to make recommendations related to funding the integration of UAS into the national airspace system. The group completed an interim report on short-term funding options in July 2017 and a final report on longer-term funding options in March 2018. The final report identifies various funding mechanisms for further study and recommends that industry, the FAA, and Congress work together to identify long-term funding sources for FAA’s UAS activities. In 2019, the FAA reconvened the DAC and plans to continue to form task groups to study emerging issues in the UAS industry, though no new task groups have been formed related to UAS funding.
FAA Has Undertaken and Planned Activities to Incrementally Expand the Use and Types of UAS in the National Airspace System

FAA has leveraged its existing regulatory and oversight framework for UAS integration, with the goal of allowing UAS operators to achieve increasingly complex operational capabilities. For example, FAA is applying existing regulations and standards developed for manned aviation to allow for more complex UAS operations. FAA has also initiated rulemaking efforts to allow operations of small UAS at night and over people in certain conditions and has identified additional areas for potential future UAS integration activities. For some capabilities, FAA has also identified a need for research and development, including for systems that would enable UAS to detect and avoid other aircraft and hazards. To help address these needs, FAA has established programs to draw on the resources of private industry and state and local governments, including the provision of air navigation services. Longer term, however, the extent of activities needed to carry out FAA’s statutory role in the operation, oversight, and enforcement of established rules and systems related to UAS is still unclear.

FAA Has Leveraged Existing Manned Aviation Regulatory and Oversight Framework for UAS Integration

According to FAA officials, just as the ultimate vision for UAS integration is for manned and unmanned aircraft to operate in the same airspace, FAA’s overarching strategy is to integrate UAS into its existing regulatory structure. This strategy is based on an incremental, risk-based approach to developing rules, policies, and procedures for UAS and leverages standards and regulations established for manned aviation as well as existing FAA resources such as rulemaking, flight standards, and aircraft certification personnel. To organize and track UAS integration activities across the agency, FAA has published internal annual implementation plans for fiscal years 2017–2019.
FAA adjusts the plans annually to reflect changes in policy. These plans describe the range of objectives related to expanding the use and types of UAS in the national airspace that FAA has identified and its plan for achieving these objectives. For instance, the implementation plan includes identification of the steps needed to achieve each operational capability, including development of related regulations, policies, and standards. In recent years, FAA has implemented regulations that allow for routine UAS operations of gradually increasing risk and complexity. To date, FAA has established requirements for aircraft and operator registration as well as regulations to allow for limited operations of small UAS, including the June 2016 Small UAS rulemaking (commonly called Part 107), which established conditions under which small UAS operators are allowed to routinely fly for largely commercial purposes (see fig. 2). Additionally, for those operations not allowed under established regulations, FAA may grant waivers on a case-by-case basis. According to FAA, nearly 14,000 requests for waivers had been received as of December 2018, with just over 2,000 of those requests approved. The Flight Standards Service has issued waivers for some UAS operators—including commercial and government users—to operate beyond visual line of sight or at night for purposes including inspection of hurricane damage and aerial photography. As FAA develops and implements regulations for increasingly complex operations, fewer types of operations will require these waivers. Since issuance of the Part 107 rule, FAA has continued its efforts to increasingly allow for routine operations (that is, operations within established regulations that do not require waivers) of more types of UAS (including large UAS) under more conditions, as well as more complex UAS operations.
Figure 3 illustrates some of the ongoing and potential future operational capabilities included in FAA’s phased approach for UAS integration, which are detailed below. FAA’s current efforts to allow for more complex UAS operations include the following ongoing rulemaking efforts:

Operation of small unmanned aircraft systems over people: FAA issued a proposed rule in February 2019 to expand the operations permitted under the Part 107 rulemaking to allow operations over people and at night in certain conditions.

Safe and secure operations of UAS advance notice of proposed rulemaking: FAA released this advance notice in February 2019 to seek public comment on whether FAA should promulgate new rulemaking related to, for instance, additional operating and performance requirements for UAS.

Remote identification of unmanned aircraft systems (UAS): Both FAA and stakeholders have identified the ability for FAA, law enforcement agencies, and other UAS users to remotely identify UAS while in flight as foundational to most other rules and system development. FAA currently expects to issue a proposed rulemaking on remote identification in December 2019. With respect to the operation of UAS over people rulemaking, FAA expressly stated that it does not intend to finalize proposed rules in that area until it has issued a final rule on remote identification.

In its internal Fiscal Year 2019 Implementation Plan, FAA identified a variety of new types of operations that could be enabled in the next few years. Examples include:

Beyond visual-line-of-sight operations: Future integration efforts in this area could allow for low-altitude UAS operations beyond visual line of sight, such as infrastructure and agricultural inspections primarily below 400 feet.

Small-cargo delivery operations: Future integration efforts in this area could allow for delivery of small cargo by networks of small UAS flying at low altitudes in rural and urban areas predominantly below 400 feet.
Currently, FAA certifies some UAS operators to enable them to conduct cargo delivery operations under existing air carrier certification regulations.

Urban air-mobility passenger operations: Future integration efforts in this area could allow for on-demand, highly automated, passenger air transportation services within and around a metropolitan environment with no pilot physically in the cockpit of the aircraft. These operations are expected to use UAS weighing thousands of pounds that would fly at higher altitudes (500-5,000 feet). UAS operators are currently developing UAS for future passenger transport operations both in the United States and abroad.

Large cargo and inspection operations: Future integration efforts in this area could allow for cargo and inspection operations using significantly larger UAS (up to tens of thousands of pounds) operating in controlled airspace at higher altitudes. These UAS are expected to operate similarly to large commercial manned aircraft. These larger UAS may allow the transportation of larger volumes of cargo or execution of inspections over a longer range. Currently, FAA has approved—on a case-by-case basis—limited experimental operation of large UAS to conduct inspections by waiver.

FAA’s annual UAS implementation plans reflect the ever-changing nature of the UAS industry, the regulatory environment, and concerns identified by stakeholders from within and outside of government related to public safety and national security. According to FAA, as UAS technology and the industry continue to evolve, additional operational capabilities and associated integration needs will be identified. FAA expects efforts to allow increasingly complex operations to build on lessons learned and technology improvements gained from preceding integration efforts.
Until new regulations can be issued for these operations, FAA plans to extend and adjust existing safety standards and requirements—originally designed for manned aircraft—to UAS through waivers and exemptions. For example, in April 2019, FAA awarded the first air carrier certification to a UAS delivery company, Wing Aviation. This certification—under existing regulations for manned air carriers—allows the company to begin commercial package delivery in Blacksburg, Virginia.

FAA Aims To Leverage Both Federal and Non-Federal Resources for the Research and Development of UAS Systems and Technologies

As discussed in its internal implementation plan, FAA has identified research and development needed to inform the safe expansion of UAS operational capabilities. According to FAA officials, this research focuses on the assessment of risks that UAS integration poses to the national airspace as well as the characteristics required for technology and systems to sufficiently mitigate these risks to achieve the safe implementation of more complex UAS operations. Such systems and technology would enable, for example, detection and avoidance of other aircraft and hazards, reliable navigation capability, and reliable data linkage between the UAS aircraft and the operator for controlling the flight. To that end, FAA coordinates UAS-related research activities being conducted by FAA, other government agencies, and FAA’s partners in industry and academia. For example, FAA has coordinated with NASA to develop a traffic management concept for UAS. Additionally, FAA has implemented two programs—the Test Sites program and the Integration Pilot Program—to leverage the resources of private industry and state and local governments to conduct research and development activities needed to achieve full UAS integration.
Test Sites Program: FAA authorized seven test site locations between 2013 and 2016 as directed by statute, at which industry stakeholders can test UAS technologies to further UAS integration. According to a test site participant, these sites have been used, for example, to test technologies such as vertical take-off and landing technology for large UAS, which may be relevant for large-cargo and passenger operations.

Integration Pilot Program: This pilot was established in 2017 to enable testing of UAS integration technologies in state, local, and tribal jurisdictions. Through this program, for example, the North Carolina Department of Transportation has partnered with private industry to provide UAS medical-package delivery services (such as the transport of medical test samples). The program’s objectives include testing and evaluating models of state, local, and tribal government involvement in the development and enforcement of federal UAS regulations; encouraging the development and testing of new UAS concepts; and informing further FAA regulation of UAS.

As these research efforts make headway, FAA plans to leverage the results to develop a system to provide UAS traffic management services. As stated in FAA’s Fiscal Year 2019 Implementation Plan, on any given day, 60,000 commercial aircraft fly through the national airspace into the 30 biggest airports in the United States and—given current trends—the same number of UAS flights could originate from just one delivery fulfillment center in a major city in a single day. According to FAA, in order to fully integrate commercial UAS into the national airspace, a traffic-management ecosystem complementary to—but separate from—FAA’s air-traffic-management system for manned aviation will likely be needed to control access and flight operations in low-altitude airspace. FAA has identified capabilities required for low-altitude UAS air navigation.
One system—the Low Altitude Authorization and Notification Capability (LAANC)—has been implemented, while a UAS traffic management system is still under development. According to FAA and stakeholders we spoke to, LAANC was the first step towards a UAS traffic management system. LAANC: Through 2017 and 2018, FAA established technical and regulatory requirements for private partners to provide LAANC services, which enable UAS to access controlled airspace near approved airports. After deploying a system prototype in November 2017, FAA launched LAANC in April 2018 and then expanded the program to include additional partners in October 2018. Under LAANC, FAA provides data on temporary flight restrictions, notices, and airspace maps of participating facilities through a UAS data exchange. Private companies that have been approved by FAA to provide UAS air navigation services (called UAS service suppliers) develop and maintain—with private funding—automated applications or portals. Approved service suppliers provide differing services, with varying infrastructure and associated costs to provide the service. For example, some suppliers provide LAANC services to UAS operators among the general public, while others process applications for airspace access only for certain UAS operators. Prior to operating in controlled airspace near airports, UAS operators use these applications or portals to apply for airspace authorizations. These requests are checked against the data provided through the UAS data exchange, and if approved, UAS operators receive authorization to fly in the area—within minutes, in some cases. LAANC services were previously available only to commercial operators, but in July 2019, LAANC access was extended to recreational operators. UAS traffic management capability: In 2013, NASA began developing a concept of operations for a UAS traffic management system, which is the proposed system for providing UAS air navigation services in low-altitude airspace. 
As envisioned by FAA, these services will be separate, but complementary, to those provided by the Air Traffic Control system used for manned aviation. FAA established a pilot program in 2017 to develop and demonstrate early versions of UAS traffic management operations. Much like LAANC, the component applications and infrastructure supporting the traffic management system would be almost entirely developed, owned, and operated by private UAS service suppliers; only the Flight Information Management System—a data exchange gateway—is planned to be built and operated by FAA. The current UAS Traffic Management Concept of Operations envisions that UAS operators will share the timing and destination of a planned flight through a UAS service supplier. FAA envisions that these service suppliers will provide near real-time advisories to affected UAS operators regarding traffic (aircraft in the area), weather and winds, and other hazards pertinent to low-altitude flight (such as cranes or power-line construction or local UAS restrictions). Figure 4 illustrates the UAS traffic management system as outlined in the concept of operations. FAA has not identified an implementation date for the traffic management system. Rather, FAA proposes a “spiral development,” in which low complexity operations would be implemented first, with higher complexity operations and requirements built in incrementally. FAA intends to allow each new development to gradually mature the UAS traffic management system to ultimately support the full range of UAS operations at low altitude. Among other FAA activities, remote identification rules will be key to implementation of traffic management capabilities. 
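The automated authorization flow that LAANC provides, in which an operator's request is checked against airspace data and approved within minutes, can be illustrated with a minimal sketch. The grid cells, altitude ceilings, restriction windows, and the `authorize` function below are hypothetical simplifications for illustration only; they are not FAA's or any UAS service supplier's actual data model or implementation:

```python
# Illustrative sketch of a LAANC-style automated authorization check.
# All data here (grid cells, ceilings, restrictions) is hypothetical.
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class FlightRequest:
    grid_cell: str    # facility-map grid cell near an airport (hypothetical)
    altitude_ft: int  # requested altitude above ground level
    start: datetime
    end: datetime

# Hypothetical facility map: maximum altitude (ft) that can be
# auto-authorized in each grid cell of controlled airspace.
FACILITY_MAP = {"A1": 0, "A2": 100, "B1": 200, "B2": 400}

# Hypothetical temporary flight restrictions: (cell, start, end).
TFRS = [("B1", datetime(2019, 7, 4, 18), datetime(2019, 7, 4, 23))]

def authorize(req: FlightRequest) -> bool:
    """Auto-approve only if the request is at or below the cell's
    ceiling and does not overlap a temporary flight restriction."""
    ceiling = FACILITY_MAP.get(req.grid_cell, 0)
    if req.altitude_ft > ceiling:
        return False
    for cell, tfr_start, tfr_end in TFRS:
        if cell == req.grid_cell and req.start < tfr_end and req.end > tfr_start:
            return False
    return True
```

In the actual system, FAA supplies the facility maps, temporary flight restrictions, and notices through the UAS data exchange, and the approved UAS service suppliers, not FAA, build and operate the applications that run checks of this kind.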
FAA’s Role Will Likely Evolve as UAS Integration Progresses

Once FAA has developed the foundational UAS rules and systems such that the operational capabilities of UAS integration have been substantially achieved, the specific nature of FAA’s role in the operation, oversight, and enforcement of established rules and systems will depend on the nature of those regulations and systems. FAA’s mission to ensure the safety of the national airspace, however, makes it clear that the agency will continue to play a role in each of these areas. For example, FAA will need to continue conducting oversight to ensure compliance with established regulations, policies, and standards to maintain the safety of the national airspace, but the precise nature of the oversight needed in the future will depend on the regulations and systems established. We recently found that local law enforcement agencies may be unclear about their role in UAS enforcement and that most FAA inspectors and local law enforcement agencies we met with said that officers may not know how to respond to UAS incidents or what information to share with FAA. Similarly, a recent industry task force commissioned to address the issue of unauthorized UAS near airports found that the role of state and local law enforcement in addressing that threat is unclear, and recommended that federal agencies clearly define related roles, responsibilities, and authorities. As such, FAA’s activities related to enforcement for UAS will likely evolve as UAS become more integrated in the national airspace. Further, according to our interviews with stakeholders, facilities designated for the take-off and landing of UAS for the transport of passengers and cargo as well as other infrastructure to support UAS air navigation services may be needed. FAA’s role in operating or overseeing this infrastructure will likely hinge on the nature of the infrastructure.
For example, while FAA’s Office of Airports has responsibility for airport safety and inspections as well as establishing standards for airport design, construction, and operation, the extent to which this type of oversight will be needed for infrastructure to facilitate drone operations is not yet known.

FAA Tracks Some Current UAS-Related Costs but Does Not Have a Process to Ensure Cost Information Is Complete

FAA Allocates Appropriated Funds for UAS Activities Based on Congressional Direction

FAA receives annual appropriations in four accounts, and since 2016, conference reports accompanying appropriations have directed FAA to allocate some funding from these accounts specifically for UAS-related activities. Table 1 depicts appropriations FAA has allocated to UAS-related activities from these four accounts since 2016 at the direction of Congress. FAA allocates portions of its appropriations for the UAS Integration Office and some other UAS-specific activities based on congressional direction, but FAA may obligate funding that has not specifically been allocated for UAS activities to support UAS activities as well. The vast majority of FAA’s appropriation comes from the Airport and Airway Trust Fund (which is funded through revenues of taxes and fees on manned aviation airline tickets, aviation fuel, and cargo shipments), including all of the appropriations for the facilities and equipment; research, engineering, and development; and grants-in-aid for airports accounts. In fiscal year 2018, about 92 percent of FAA’s approximately $17 billion in total funding was appropriated from the Trust Fund. The remainder of FAA’s funding is appropriated from general revenues. For fiscal year 2018, in accordance with congressional direction, FAA allocated a total of $104.8 million for UAS-related activities and, according to FAA financial data, obligated approximately $69.7 million for these activities.
Table 2 provides an overview of the UAS-related activities for which FAA determined it had obligated funds in fiscal year 2018; a more detailed list of UAS-related activities for which FAA identified fiscal year 2018 obligations is provided in appendix 2. Individual activities may be funded through more than one account, depending on their scope. According to officials, and as discussed below, FAA staff outside of the Office of Aviation Safety and Air Traffic Organization may not consistently track their UAS-related obligations. As such, the obligation amounts identified in table 2 may be incomplete and may not represent FAA’s total fiscal year 2018 UAS costs. Within the categories above, specific examples of activities funded in fiscal year 2018 include:

About $3.7 million from both the Operations ($2.07 million) and Facilities and Equipment ($1.65 million) accounts for the development of LAANC systems and requirements.

Of the about $33 million obligated by the Office of Aviation Safety in fiscal year 2018 for UAS-related activities, about $28 million was obligated by the UAS Integration Office and $166,000 by the Office of Rulemaking.

$4.5 million obligated under facilities and equipment for the development of a UAS traffic management system and the associated Flight Information Management System.

FAA Efforts to Track UAS Costs May Result in Incomplete Data

Since 2017, FAA has been tracking costs associated with many of its UAS activities including time spent by staff as well as other costs, as shown in table 2. A December 2017 internal memorandum instructed FAA offices to track UAS-related activities and costs using project codes. According to FAA officials, the codes are used to identify travel, procurement, time and attendance, and costs related to special events, among other UAS-related activities. The effort was intended to address the administration’s and Congress’ interest in greater cost visibility.
According to FAA officials, the project codes to track UAS costs have been implemented in the Office of Aviation Safety—including the UAS Integration Office, Flight Standards Service, and Office of Rulemaking— and staff within the Air Traffic Organization (not including air traffic controllers). According to FAA officials and as demonstrated by the obligations shown in Table 2, the Office of Aviation Safety and the Air Traffic Organization represent the majority of UAS costs for fiscal year 2018 within the Operations account. In addition, according to FAA, because Conference Reports have outlined how activities in the Research, Engineering and Development and Facilities and Equipment accounts should be funded by line item, FAA is able to track these costs without using the project code method. While FAA has started tracking UAS-related costs for some offices, FAA does not know the extent to which UAS costs are tracked throughout the agency, resulting in data that may be incomplete. Many—if not all—FAA offices are doing work related to both manned aviation and UAS, but FAA officials stated that they do not know or plan to assess the extent to which staff in other offices—such as the Office of the Chief Counsel—that spend time on both UAS-activities and other responsibilities are using the project codes to track their UAS-activities. FAA officials stated that, because the bulk of the UAS-related work is being conducted within the Office of Aviation Safety and the Air Traffic Organization, it is not a priority to try to identify the time spent by other offices working on UAS-related activities, which they believe would be time consuming. However, with no way to assess the extent to which the project codes have been implemented, FAA is unable to tell whether it has met the intent of using the codes: greater visibility into UAS-related costs. 
For instance, FAA does not currently have visibility via the project codes into time spent on UAS activities outside of the Office of Aviation Safety and the Air Traffic Organization. According to OMB instructions to agencies on financial-reporting requirements and standards for federal financial accounting, agencies should report the full cost of each program—to include both direct and indirect costs and the costs of identifiable supporting services provided by other offices within the agency. Further, federal standards for internal control note that agencies should use quality information—that is, data that are complete and accurate—to achieve objectives, make informed decisions, and manage risks. With no assurance that the project codes are resulting in information that is complete, FAA risks making decisions based on information that is unreliable for the purpose of understanding the full costs of its UAS activities. Efforts to track costs need not be overly complex: federal financial-accounting standards note that agencies should consider the precision desired and needed in cost information and the practicality of data collection and processing, among other considerations, when designing cost-accounting processes. For example, FAA could build on its existing project codes for UAS-related activities by monitoring the extent to which the project codes have been used agency-wide. Alternatively, FAA may identify other methods of accounting for UAS-related costs, if there are some costs not easily tracked using the project codes. Further, indirect costs associated with FAA management and facilities could be assigned to the UAS mission based on more complete information on the direct costs identified through use of the project codes. Additionally, as discussed below, many of FAA’s future costs related to UAS are unknown.
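The completeness gap in a project-code rollup can be sketched in a few lines. The office names, project codes, and dollar figures below are hypothetical, for illustration only; the point is that any UAS work recorded without a project code never reaches the UAS cost total:

```python
# Sketch of aggregating UAS-related obligations by project code, and of
# the completeness gap: offices that never apply a code contribute zero
# to the rollup. All names and figures are hypothetical.
from collections import defaultdict

# (office, project code or None, obligation in $ millions)
transactions = [
    ("UAS Integration Office",      "UAS-OPS", 28.0),
    ("Office of Rulemaking",        "UAS-OPS", 0.166),
    ("Air Traffic Organization",    "UAS-FE",  4.5),
    ("Office of the Chief Counsel", None,      1.2),  # UAS work, no code applied
]

def uas_costs_by_code(rows):
    """Sum coded obligations per project code; tally uncoded UAS work
    separately, since it is invisible to the project-code rollup."""
    totals = defaultdict(float)
    untracked = 0.0
    for office, code, amount in rows:
        if code is not None:
            totals[code] += amount
        else:
            untracked += amount
    return dict(totals), untracked

totals, untracked = uas_costs_by_code(transactions)
```

Monitoring the size of the untracked remainder, office by office, would be one low-cost way for an agency to assess how completely such codes are being applied.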
Ensuring the project code information is complete and accurate now could put FAA in a better position to identify those costs as they are realized in the future. Further, federal standards for internal control state that management should identify, analyze, and respond to significant changes that could affect an agency’s ability to report reliable information and achieve objectives—such as a change in mission that influences costs. Without reliable information on FAA’s UAS-related costs, the administration and FAA may be less equipped to make informed policy decisions regarding resources needed as UAS become further integrated into the national airspace and as UAS oversight becomes an increasing part of FAA’s mission.

FAA’s Future Costs Are Unknown Due to the Evolving Nature of the Industry

Because the UAS industry, as well as key systems and technological developments, continues to evolve, it is too early to know what UAS-related costs FAA is likely to incur in the future. This holds true for future operational costs as well as the costs to develop future systems and regulations and indirect costs. According to FAA and stakeholders we spoke to, in addition to costs to continue regulatory activities and safety oversight, FAA’s future costs will depend on the extent of FAA’s involvement in the everyday operation and oversight of systems, such as those related to UAS traffic management, and the extent to which FAA becomes a provider of UAS-related services. Examples of how FAA’s costs could evolve and possibly expand in each of these areas include:

Regulatory development costs: Current costs for activities such as the development of new UAS regulations by the UAS Integration Office could change as UAS become more integrated into the national airspace. As previously discussed, the industry is changing rapidly and new uses for UAS are being developed, uses that will require additional FAA regulation and oversight.
FAA cannot know the extent to which additional rulemaking activities will be required for UAS technologies and uses that the industry has not yet contemplated or developed. Costs to develop regulations involve input from offices across FAA, such as the Office of the Chief Counsel, where FAA officials are unsure if staff are consistently using the project codes to track their costs for UAS-related activities. As such, FAA may not have visibility into the extent to which these UAS-related costs may change over time.

Safety oversight costs: As part of its safety mandate, FAA is responsible for enforcing compliance with established regulations for both manned aircraft and UAS. Several offices within FAA have a role in UAS compliance and enforcement, including the Flight Standards Service and the Office of Security and Hazardous Materials Safety. As we have recently reported, while FAA has sole responsibility for enforcement of UAS regulations, the agency has focused on engaging and educating law enforcement and public safety agencies at all levels—federal, state, and local—and, to a lesser extent, conducting surveillance to ensure compliance with UAS regulations. While local law enforcement agencies may often be in the best position to deter or respond to UAS incidents, they may not have information on how to respond or what information to share with FAA. According to FAA officials, the Office of Security and Hazardous Materials Safety is one of the offices in which FAA officials do not know if staff are tracking their activities and costs related to UAS through use of the project codes discussed above. Given the uncertainty about the division of responsibilities between federal, state, and local law enforcement, it is unclear how costs for safety oversight and enforcement will evolve and possibly expand in the future.
Provision and oversight of UAS services and facilities: FAA will eventually incur costs related to providing air navigation and other services to UAS operators, overseeing UAS service providers, and potential infrastructure, but the extent of FAA’s eventual role in the provision of these services and related oversight is not yet known, in part because the industry is still evolving and it is unclear what FAA services will be provided in the future. Some stakeholders believe that an increased industry role in providing air navigation services could keep FAA’s costs for these activities relatively low. For example, the UAS Traffic Management Concept of Operations envisions that leveraging private entities to provide a variety of UAS traffic management services will reduce the infrastructure and manpower burden on FAA and, thus, reduce associated costs. FAA envisions that the Flight Information Management System—a system through which FAA can provide directives and enable information exchange between UAS service suppliers, UAS operators, and FAA—is the component of the UAS traffic management system that FAA will build and manage. FAA has not yet estimated the costs of developing or implementing this system because, according to FAA officials, the agency is still many steps away from developing the core infrastructure and regulatory requirements. As UAS integration progresses and as more UAS are operating in the same airspace as manned aircraft, additional solutions may be needed to manage UAS traffic at higher altitudes, which will also incur costs. For instance, FAA anticipates that air traffic controllers will have a role in deconflicting manned and unmanned aircraft and could provide air-traffic-control services to UAS in controlled airspace. FAA officials stated it will be necessary to collect data on the direct and indirect costs of UAS for air-traffic-control services in the future.
According to FAA, a new air-traffic-control-cost-allocation study is underway, but FAA does not currently have the information on UAS operations that would be necessary to assign air traffic control costs to UAS users. Beyond system development, once traffic management systems are designed and operational, FAA will incur costs related to its role in overseeing providers of UAS traffic management services as well as operating and maintaining the Flight Information Management System. FAA currently provides UAS operators with services related to registration, aircraft certification, and waivers for operation that fall outside existing regulations, but those services may change depending on future rulemaking. When it becomes clearer what services FAA will likely provide and how it will provide those services, FAA will be better positioned to estimate its costs to inform its budget requests and plan for the future, as it has done for systems that have already been implemented. For example, FAA has estimated future costs associated with the LAANC program, which was implemented in 2018. FAA anticipates obligating approximately $35.64 million from the facilities and equipment account and $26.6 million from the operations account to further develop and operate the LAANC system from fiscal years 2019 through 2023, as shown in table 3. Indirect Costs: In addition to direct costs related to rulemaking, oversight, and provision of services, FAA will continue to incur indirect costs such as those associated with the operation and maintenance of FAA facilities and systems. FAA officials said they do not plan to conduct analysis through which they could allocate indirect costs for UAS, because FAA’s appropriations and funding structure do not require them to track costs in this way. However, as previously discussed, OMB instructions to agencies on financial-reporting requirements state that agencies should report the full cost of each program including indirect costs. 
As discussed, FAA’s efforts to track costs related to UAS activities may result in incomplete data, and as the UAS industry evolves and becomes more integrated, tracking costs may become even more complex. Generally, FAA officials stated that differentiating between costs related to UAS and manned aviation will not be necessary as UAS become further integrated into the national airspace and FAA’s mission because the agency does not track costs in this way for any other mission areas. However, as discussed later in this report, there is widespread consensus among manned and unmanned aviation industry stakeholders that UAS costs should be borne by the UAS industry rather than the manned aviation industry, and policy makers may opt to recover these costs through user fees or some other mechanism in the future. As discussed below, should FAA and Congress decide that certain fee mechanisms should be pursued, a reliable accounting of total program cost—including indirect costs—is important to setting effective fees, as our prior work related to designing user fees has shown. Planning and Consideration of Policy Goals Are Key to Designing UAS Fee Mechanisms Considerations for Determining How to Set and Collect Fees In the tasking statement to the Drone Advisory Committee’s Task Group 3, FAA asked the committee to recommend options for funding the activities and services required by both government and industry to safely integrate UAS into the national airspace system. The Task Group concluded in its final report that the aviation industry, FAA, and Congress should coordinate to identify one or more revenue streams that are separate and segregated from the Airport and Airway Trust Fund to help fund FAA’s UAS-related activities. The Task Group also identified five different fee mechanisms through which FAA could recover some of the costs of its activities from UAS users, a topic we discuss in this section. 
The extent to which costs are recovered from UAS users and the methods by which costs are recovered are policy decisions for the administration and Congress. Since 2015, FAA has used one fee mechanism—a $5 registration fee, the same as the fee to register a manned aircraft—to recover some of the costs associated with administering the UAS registration requirement. Most of FAA’s UAS-related costs are in areas unrelated to UAS registration. As such, policy makers may, at some point, consider additional ways to recover the costs of UAS activities, including implementing user fees for additional services and activities, subject to congressional authority to implement fees and use resulting revenue. Our prior work on designing user fees, combined with policies established by the Office of Management and Budget, can provide a framework for designing user fees that reduce the burden on taxpayers to finance FAA’s UAS activities, which benefit specific users. The goals of establishing user fees—efficiency, equity, revenue adequacy, and reducing administrative burden—can be in conflict with each other and necessitate trade-offs depending on policy priorities. Table 4 describes these goals. Our prior work illustrates that four key design elements—namely how fees are (1) set, (2) collected, (3) used, and (4) reviewed—require careful consideration and planning to achieve the desired goals. Based on the prospective nature of user fees to recover FAA’s UAS-related costs, we will focus on how user fees are set and collected. It is important to note that given the trade-offs involved in establishing user fees, different users and stakeholders may have varying perspectives and opinions on what would be an appropriate fee structure. As these are policy decisions, this report does not recommend any specific fee mechanism. Instead, the considerations and examples we present are intended to inform decision-making by laying out issues to take into account when designing user fees.
As discussed in our User Fee Design Guide, determining how UAS user fees should be set and collected involves a number of steps. These steps include: identifying the costs associated with each activity and which costs should be recovered, identifying the beneficiaries of each activity, determining how to set fees for various types of beneficiaries, determining how fees should be collected, and determining when it is appropriate to begin collecting fees. Identify Costs and Which to Recover OMB instructions on designing user fees state that user fees should be sufficient to recover the full cost of providing each service or resource, including indirect costs, except to the extent that agencies determine that exceptions should be made. Identifying the full costs of providing a UAS service or resource—such as providing access to maps and air-traffic management services like LAANC—could enable policy makers to determine, consistent with their policy goals, which of those costs should be recovered through user fees. Identify the costs of each activity: Our prior work has found that, to set fees so that total collections cover the intended share of program costs, a reliable accounting of total program cost is important. As previously discussed, while the costs of some current regulatory and operational activities related to UAS are known, some current and most future costs are unclear. Recognizing that generating and maintaining reliable cost data can be expensive, OMB instructions note that program cost should be determined or estimated from the best available records of the agency. Accordingly, policy makers could opt to implement fees to recover the estimated costs of each activity as regulations, services, and systems are established, and adjust fees periodically based on actual costs. Determine which costs to recover: The next step is to determine the extent of the costs for each activity that should be recovered through user fees based on policy goals. 
For example, as discussed, many of FAA’s current costs relate to the “setup” or integration of UAS into the national airspace, including the costs to develop and promulgate UAS operational rules. Policy makers may or may not decide to recover these current costs from future users. For example, policy makers may decide not to recover these costs based on the idea that the goal of promulgating UAS-related regulations may be related to the general safety of the airspace, rather than providing benefits to specific users. Additionally, some stakeholders we interviewed stated that the costs of startup activities (like rulemaking) and safety oversight activities (like enforcing existing regulations) should not be recovered through user fees because these activities are core government functions. Rather, these stakeholders advocated funding such activities through appropriations from general revenues. However, as we have discussed in prior work, fees have frequently been used to support agencies’ regulatory programs. For example, fees assessed by financial regulatory agencies and the Nuclear Regulatory Commission on their respective regulated industries are used to support those agencies’ regulatory activities. Identify Beneficiaries Our prior work has found that the extent to which a program is funded by user fees should generally be guided by who primarily benefits from the program, though the extent to which a program benefits specific users or the general public is not usually clear cut. The beneficiaries of FAA’s UAS-related activities will include both direct users (UAS operators) as well as indirect beneficiaries such as the general public. Direct beneficiaries will accrue benefits from their use of UAS, whether recreational, governmental, or commercial. In contrast, indirect beneficiaries would benefit from maintaining a safe national airspace system and preventing disruption of commercial flights and other manned aviation. 
Policy makers may decide that, to account for benefits to those who do not directly engage in UAS activities, a percentage of FAA’s UAS-related costs should be funded with general revenues. For instance, as the Congressional Research Service has reported, there has been general acceptance that appropriations to the FAA from general revenues account for the public benefits of FAA’s regulation of the national airspace. Additionally, while the manned aviation industry will benefit from regulations and oversight that reduce the potential for disruption in the airspace caused by UAS, UAS operators benefit from the regulation and safety oversight of the manned aviation industry as well. Policy makers may choose to account for these benefits in any number of different ways, depending on the perceived extent of the benefit enjoyed by each group. Direct beneficiaries—including recreational, commercial, and governmental UAS operators—will benefit in different ways based on both the type of user and the type of use or activities they engage in. For example, recreational users may experience the joy and excitement of flying UAS, but are not authorized to accrue any economic benefits. In contrast, commercial users are operating UAS with the explicit goal of earning revenue or benefiting business interests in some other way as a result of their UAS operations. Determine How to Set Fees for Beneficiaries/Users As outlined in our prior work, policy makers may set fees for different types of users and activities based on a variety of factors including (1) costs imposed on the system by each user or type of use, (2) the extent of benefits received by different types of users, (3) the ability of each user to pay, and (4) identified policy goals. Figure 5 presents a simplified, hypothetical example of setting fees for various activities and users.
The following examples illustrate how these various factors could play out: Considering costs imposed: Policy makers may set fees to recover the costs imposed by UAS users requiring air navigation services—for example, those operating in controlled airspace (such as around airports) or in high traffic areas. Policy makers may set fees to account for the different costs imposed by providing different UAS users access to air traffic services, such as charging per flight for air navigation services or basing the fee on distance traveled in controlled airspace. Policy makers may decide that recreational UAS users should pay lower fees than commercial users because they may generally impose fewer costs on FAA. Considering benefits received: Policy makers may set fees for some services that account for the extent of the benefit received, such as charging for air navigation services based on value of cargo or number of passengers transported. Considering ability to pay: Policy makers may decide to allocate a larger share of FAA’s UAS-related indirect costs to commercial users, based on their ability to pay and the monetary benefits they receive. Considering policy goals: Policy makers may decide that public safety agencies (government users), such as local police departments, should be exempt from fees or pay reduced fees because their use of UAS may provide a public benefit. Policy makers may seek to increase safety by reducing or eliminating fees for certain services in order to reduce the probability that users may not comply with requirements to avoid paying an associated fee. This determination would require balancing the potential revenue associated with the fee against (1) the potential costs of ensuring compliance with operational requirements and fees through enforcement activities and (2) the safety risks associated with the portion of operators who may try to avoid fees through not complying with operational requirements.
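To illustrate only how these factors could combine into a fee schedule, the following sketch expresses a hypothetical per-flight fee. Every rate, user category, and exemption below is an invented assumption for illustration; none reflects FAA policy or the contents of Figure 5.

```python
# Hypothetical UAS fee schedule combining the factors discussed:
# costs imposed (a per-kilometer charge in controlled airspace),
# ability to pay (a higher rate for commercial users), and policy
# goals (public safety agencies exempt). All figures are invented.

BASE_RATES = {            # flat per-flight fee by user type (assumed)
    "recreational": 1.00,
    "commercial": 5.00,
    "government": 0.00,   # policy choice: public-safety use exempt
}
PER_KM_RATES = {          # per-km charge in controlled airspace (assumed)
    "recreational": 0.10,
    "commercial": 0.25,
    "government": 0.00,
}

def compute_fee(user_type: str, km_controlled: float) -> float:
    """Return a hypothetical per-flight fee for a UAS operation."""
    if user_type not in BASE_RATES:
        raise ValueError(f"unknown user type: {user_type}")
    if km_controlled < 0:
        raise ValueError("distance must be non-negative")
    return round(BASE_RATES[user_type]
                 + PER_KM_RATES[user_type] * km_controlled, 2)
```

Under this sketch, a commercial flight covering 8 kilometers of controlled airspace would owe $5.00 plus 8 × $0.25, or $7.00, while the same flight by a local police department would owe nothing, reflecting the policy-goal exemption described above.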
Most stakeholders we spoke to agreed that UAS users should pay a fee when they receive a service from FAA but that fees should be related to the costs incurred by use. In discussing whether distinctions should be made in setting fees based on factors like commercial or recreational status, cargo or passenger flights, size and weight of the aircraft, and intended use of airspace, most stakeholders agreed that fees should be charged based on these distinctions only insofar as they are associated with different costs imposed on UAS-related systems or FAA. Based on the evolving nature of the industry, it is unclear whether distinctions like those above would be related to differences in costs imposed on FAA. Some other countries have implemented user fees to recover the costs associated with UAS integration and air navigation services, though integration is still in progress. For example, Transport Canada (the Canadian agency responsible for developing transportation regulations, policies, and programs) has established a regulatory structure requiring UAS pilot certification and UAS registration. It set fees to recover Transport Canada’s costs for administering those requirements: CAD $5 for registration (similar to FAA’s registration requirement), CAD $10 for a basic pilot certification, and CAD $25 for certification to perform advanced operations, such as flying in controlled airspace. NAV CANADA (Canada’s private, non-profit air navigation service provider) is in the process of establishing a LAANC-like service through a third party but has not yet determined whether or how NAV CANADA may seek to recover these costs. In another example, officials told us that the Swiss Federal Office of Civil Aviation is required to recover its costs, so its general philosophy will be to charge a fee whenever costs are incurred.
The regulatory structure is still under development, but the office currently charges UAS users for the time required to issue waivers for UAS operations. For example: For certain operations, such as those within visual-line-of-sight and not over people, no authorization is required, and thus no fee is required. For advanced operations—such as those beyond visual line of sight or over people—fees are charged based on the time required to conduct analysis and risk assessment up to a maximum of 5000 Swiss Francs. Determine How Fees Should Be Collected Policy makers can identify opportunities to collect fees based on the characteristics and requirements of relevant aviation navigation and other systems as these systems are developed. OMB instructions to agencies related to user fees state that fees should be collected prior to or at the time a service is provided unless agencies are legally authorized to collect fees after the service has been provided. Our prior work has found that collecting fees at the time a service is provided may reduce the administrative burden. Here, for example, the UAS traffic management system may include points in the process when users are required to obtain an FAA authorization or notify FAA or UAS traffic-management service providers of operation requirements. Those points may provide an opportunity for fee collection. Similarly, as FAA does for current UAS registration fees, online systems for other services could provide an opportunity for FAA to collect fees associated with those activities. Alternatively, fees could be collected through a third party to reduce the administrative burden on FAA. For example, if UAS passengers are subject to fees, flight operators could collect those fees on behalf of FAA, as occurs with current passenger excise taxes for manned aviation. Similarly, UAS service suppliers could collect fees from UAS operators on behalf of FAA for air navigation services. 
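The Swiss time-based charging described above, in which a waiver applicant is billed for the hours of analysis required up to a fixed maximum, reduces to a simple capped calculation. The hourly rate below is an invented assumption for illustration; only the 5,000 Swiss franc cap comes from the officials' description.

```python
MAX_FEE_CHF = 5000         # cap described by Swiss officials
HOURLY_RATE_CHF = 150      # assumed hourly rate, for illustration only

def waiver_fee_chf(hours_of_analysis: float) -> float:
    """Hypothetical fee for a UAS operation waiver: time-based, capped.

    Operations needing no authorization (e.g., within visual line of
    sight and not over people) require zero hours and incur no fee.
    """
    if hours_of_analysis < 0:
        raise ValueError("hours must be non-negative")
    return min(hours_of_analysis * HOURLY_RATE_CHF, MAX_FEE_CHF)
```

Under these assumptions, a routine operation requiring no review incurs no fee, a 10-hour risk assessment would cost 1,500 francs, and a complex beyond-visual-line-of-sight assessment taking 40 hours would hit the 5,000-franc cap.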
Decide When to Begin Collecting Fees Decisions about when to implement user fees depend on both practical and policy considerations. For example, user fees could be put in place as soon as FAA implements each UAS-related regulation, service, or system—that is, once FAA’s costs related to a given activity can be estimated and beneficiaries identified. Alternatively, policy makers may decide not to implement user fees, or to implement some fees but not others, for a period of time in order to allow the nascent UAS industry to develop and to increase commercial viability. FAA’s tasking statement for Task Group 3 noted that one option is to consider the UAS industry an “infant industry” in need of special protections, in which case FAA could need to ask Congress for additional appropriations from the general fund to support UAS-related activities in the interim. Our prior work notes that while it may advance a particular policy goal to, for example, waive fees for a nascent industry for a period of time, such provisions might create unfair competitive advantages among users or industries. In discussing what level of system development should be achieved prior to imposing fees, stakeholders we spoke to had a wide range of divergent opinions, including the following: Some fees, like the existing registration fee, can be imposed now—as users are receiving value and FAA is incurring costs—and adjusted as the industry develops. Designing fees for UAS should take place only after the infrastructure and regulatory environments have been established. FAA and other policy makers should start considering user fees and an accompanying cost accounting and allocation system as soon as possible, but implementation should wait until a UAS traffic management system has been implemented. 
Fees for FAA services should be implemented when commercial operations over people and beyond-visual-line-of-sight are routine (that is, when advanced, revenue-generating UAS operations are being conducted without need for a waiver). Industry Stakeholders Have Identified Options for Fee Mechanisms to Recover FAA’s Costs The Drone Advisory Committee’s Task Group 3 concluded that funding for integration efforts would be shared across government and industry and that user fee mechanisms should be considered to recover FAA’s costs related to a range of activities including rulemaking, development of policies and standards, and research and development. While the task group did not make a specific recommendation on a particular fee mechanism, its final report identified five possible fee mechanisms with the intention of providing policy makers with ideas: Filing and licensing fees: Similar to the already-implemented UAS registration fee, FAA could impose fees to recover the costs of other FAA services such as reviewing applications for waivers and certifications. Point-of-sale tax: Legislation could be passed to impose a federal tax on UAS and ensure that the proceeds are used to offset the costs of FAA’s UAS-related activities. Business use fee or tax: A business use or transaction tax could be imposed on the purchase of a UAS-related service. Commercial businesses that use UAS on behalf of a customer or as part of their customer service could be responsible for a “pay as you go” model fee for use of the airspace, which would be added to the invoice. This concept could include, for example, fees for passengers using urban air-mobility services or fees for the transport of cargo by UAS, similar to the existing excise taxes that fund the Airport and Airway Trust Fund for manned aviation.
Airspace access fee: FAA could recover some or all of the costs associated with UAS traffic management services by requiring that UAS operators filing flight plans or other requests to operate UAS pay a fee to FAA. For instance, the report proposes that operators could remit a fee online when they request access to airspace near airports using LAANC. Auction or lease of airspace: FAA could recover costs or receive revenue for use of a public resource (navigable airspace) by conducting auctions to grant a license to UAS traffic management service suppliers, similar to granting radio spectrum licenses, which have been used or proposed to address overcrowding of spectrum and have resulted in significant revenue. Stakeholders noted that there is not currently a problem with capacity of the national airspace with respect to the operation of UAS and that there is no need for auctions of airspace on the basis of scarcity. According to FAA, each of these options would generally require additional authority from Congress to enable FAA to collect and use revenue. The Task Group 3 report and most stakeholders we spoke to (many of whom participated in the Task Group) agreed that the fee mechanisms identified generally covered the range of potential options and stated that it is too early to know which fee mechanisms would be appropriate to recover the costs of any one activity. Nonetheless, stakeholders described their overall impressions of how each mechanism could work, including the following considerations: If fees are burdensome for casual users, fees could lead to noncompliance with requirements. Fees that rely on self-reporting by users might be difficult to enforce or might create a disincentive for users to operate within the system (that is, operators might find ways to operate without FAA’s knowledge to avoid paying a fee), an outcome that could decrease compliance with rules meant to increase safety.
A point-of-sale tax (generally a percentage of the cost of the products) on UAS would not necessarily be in proportion to the cost of services or benefits being provided by FAA and might be complicated to implement and administer. For example, stakeholders noted that a point-of-sale tax would not apply to home-built or second-hand UAS users and the tax would not be linked to actual use of the UAS (that is, the UAS activities that might impose costs on FAA). FAA’s Lack of Planning to Consider Possible Fee Mechanisms Could Impede Future Design of UAS User Fees FAA officials told us that they have not yet identified or studied potential UAS fee mechanisms or analyzed the findings included in the Task Group 3 report because they have been waiting for the results of our work to inform their decision-making and planning. OMB instructions to agencies related to user fees establish that—to increase efficiency of resource allocation and reduce burden on taxpayers—agencies should recover costs when special benefits are delivered to specific users and that agencies must review all agency programs on a biennial basis to determine whether fees should be assessed. Similarly, federal internal control standards note that management should identify, analyze, and respond to significant change—such as increasing costs related to a change in mission like the integration of UAS into the national airspace—using a forward-looking process. Given the evolving nature of the UAS industry, it is unclear how UAS users and associated government activities and services fit into FAA’s existing funding structure. As the balance of FAA’s activities gradually shifts to include increased focus on UAS-related activities, those activities continue to be funded by a combination of manned aviation users (through revenue to the Airport and Airway Trust Fund) and taxpayers (through general revenues).
The revenues to the Airport and Airway Trust Fund are from taxes on airline tickets, cargo, and fuel, but are not closely linked to the costs to FAA of providing specific services. In 2007, FAA and the administration proposed a new funding system that would rely more on cost-based fees for specific manned aviation activities. This proposal, however, was never implemented. We previously testified regarding this proposal, noting that such fees could allow FAA to better identify funding options that link revenues and costs and improve transparency by showing how much is being spent on specific FAA activities, but that achieving these goals would depend on the soundness of FAA’s cost allocation methodology and extent to which revenues are linked to costs. The provision in the FAA Reauthorization Act of 2018 for GAO to conduct this review, FAA’s tasking statement for Task Group 3, and statements made by Task Group 3 in its final report suggest an interest among Congress, FAA, and industry stakeholders, respectively, in considering user fees as an option for recovering the costs of FAA’s UAS activities. Implementation of cost-based user fees for UAS would be different from FAA’s longstanding funding structure for manned aviation, but may not necessitate a change in that existing structure for areas of FAA’s mission other than UAS. Indeed, the Task Group 3 report expresses a consensus that options for UAS funding should not be constrained by the current traditional aviation funding structure, and any recommended funding structure should not alter the current structure of funding for traditional, manned aviation. As UAS integration continues to evolve, FAA may identify ways that the current aviation funding structure can be adjusted to recover costs related to UAS operations. 
For instance, FAA officials noted that, once large UAS cargo and passenger operations have been established, those operations could become subject to the same excise taxes on fuel, cargo, and passengers as are manned operations. As we have discussed, fees to recover FAA’s costs for its UAS activities need not be assessed on a program-wide basis. That is, fees to recover the costs of individual UAS activities can be implemented separately either as new rules or systems are developed or as FAA reviews its activities and identifies areas in which services to UAS users are incurring costs that could be recovered. Further, fees based on costs to FAA estimated as each rule or system is developed can be periodically adjusted as needed. As explained in our User Fee Design Guide, periodic reviews of user fees can help ensure that Congress, stakeholders, and agencies have complete information about changing program costs and that fees remain aligned with program costs. As UAS integration continues, ongoing conversations between Congress, FAA, and stakeholders may provide additional insight into how fees can be implemented to accomplish goals. To date, FAA has not incorporated steps into its existing UAS planning efforts to identify potential fee mechanisms. Considering potential user fees as part of these efforts—such as FAA’s annual UAS implementation planning—could better position FAA to design effective user fees should policy makers task FAA with implementing them. For instance, collecting information on costs and beneficiaries as new UAS-related services are developed and implemented could ensure that data needed to design effective user fees are available. Similarly, considering ways to collect revenue—such as through third parties or online systems—as services and systems are being developed or adapted for UAS users, could facilitate future implementation of fees. 
As an example of the type of planning that may be needed, FAA officials said that identifying the costs of UAS traffic management services for the purpose of setting fees would involve (1) tracking which UAS are using the national airspace and (2) tracking and categorizing the type of operations conducted. Incorporating a means of collecting these data during the planning and development of traffic management systems would be useful to future fee-design considerations in this area. This is not to say that cost recovery considerations should drive the development of regulations or systems at the expense of mission goals. Rather, such planning would offer opportunities for FAA to examine systems, policies, and regulations that have been designed to accomplish the goals of UAS integration in order to assess (1) how each system, policy, or regulation will affect FAA’s costs; (2) the need for additional resources; and (3) potential options for collecting revenue. Conclusions FAA is tasked with managing the integration of UAS into the national airspace within the context of many competing priorities and limited resources. Without a process to ensure information on UAS-related costs is complete for either current or future efforts, neither FAA, the administration, nor Congress has reliable information about the total costs of FAA’s UAS-related activities, and each therefore may lack the information needed to effectively prioritize resources. Further, this information could inform the design of effective user fees, should policy makers decide that such fees are appropriate. FAA’s UAS integration-planning efforts offer an opportunity for FAA to build the collection of relevant data, and consideration of user fee options, into ongoing activities.
Recommendations for Executive Action

We are making the following two recommendations to the FAA:

The Administrator of the Federal Aviation Administration should develop and implement a process to ensure that information on UAS-related costs is complete and reliable as capabilities and related regulations evolve. (Recommendation 1)

The Administrator of the Federal Aviation Administration, as part of UAS integration-planning efforts, should use available guidance on effective fee design to incorporate steps that will inform future fee-design considerations. For example, FAA may choose to incorporate these additional steps into its annual UAS implementation plan so that—as existing activities are adapted for UAS users or new regulations, services, or systems are introduced—costs and fee design options are considered. (Recommendation 2)

Agency Comments

We provided a draft of this report to the Department of Transportation (DOT) for comment. In its comments, reproduced in appendix III, DOT agreed that there are likely opportunities to better track and recover UAS-related costs and concurred with our recommendations. We will be sending copies of this report to appropriate congressional committees and the Secretary of Transportation. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or KrauseH@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.
Appendix I: Stakeholders Contacted During the Course of This Review

Appendix II: Federal Aviation Administration Unmanned Aircraft Systems Activities and Associated Fiscal Year 2018 Obligations

Appendix III: Comments from the Department of Transportation

Appendix IV: GAO Contact and Staff Acknowledgments

GAO Contact

Heather Krause, (202) 512-2834 or KrauseH@gao.gov.

Staff Acknowledgments

In addition to the contact named above, the following individuals made important contributions to this report: David Sausville, Assistant Director; Katie Hamer, Analyst-In-Charge; Alexandra Jeszeck; Amy Abramowitz; Camilo Flores; Richard Hung; Delwen Jones; Heather Keister; Hannah Laufe; Susan Murphy; Joshua Ormond; Pamela Snedden; and Elizabeth Wood.
Why GAO Did This Study

UAS have the potential to provide significant social and economic benefits in the United States. FAA is tasked with safely integrating UAS into the national airspace. As the UAS sector grows, so do demands on FAA's staffing and other resources to develop, oversee, and enforce rules and systems needed to safely integrate UAS into the national airspace. The FAA Reauthorization Act of 2018 provides for GAO to review issues related to establishing fee mechanisms for FAA to recover its costs related to UAS. This report discusses, among other things, (1) FAA efforts to track the costs of current and planned activities related to UAS and (2) key considerations and options for designing user fee mechanisms that could recover FAA's costs. GAO reviewed FAA documents and financial data for fiscal years 2017 through 2019 and industry reports on drone integration funding. GAO interviewed a non-generalizable sample of 22 UAS industry stakeholders, selected based on participation in FAA advisory groups or prior GAO knowledge to achieve a range of perspectives. GAO reviewed its guidance on designing effective fee mechanisms and OMB instructions to agencies about implementing user fees.

What GAO Found

The Federal Aviation Administration (FAA) has undertaken actions to integrate unmanned aircraft systems (UAS or “drones”) into the national airspace and has developed plans to allow for increasingly complex operations, including operations over people and beyond visual-line-of-sight and—eventually—passenger operations (see figure). However, FAA efforts to track related costs may result in incomplete information. FAA established a means of tracking the costs associated with some UAS activities in certain offices, but many, if not all, FAA offices are doing work related to both manned aviation and UAS.
FAA officials stated that they do not know or plan to assess the extent to which staff who split their time between UAS activities and other responsibilities are tracking those costs. Furthermore, FAA's future costs to conduct oversight and provide air navigation services are largely unknown due to the changing nature of the industry and its early stage of development. Ensuring that information on UAS-related costs is complete and reliable now could put FAA in a better position to identify those costs as they evolve and possibly expand in the future.

The extent to which FAA should recover costs for its UAS-related activities, and what fees are appropriate, are policy decisions for the administration and Congress. Accordingly, this report does not recommend any specific fee mechanism. Nonetheless, planning and consideration of policy goals, using available guidance on user fee design, could better position FAA to inform future decision-making on these issues as it proceeds with UAS integration. Since 2015, FAA has collected a registration fee from UAS operators, but most of FAA's UAS costs are not related to registration or covered by this fee. A stakeholder group established by FAA identified potential fee mechanisms and concluded in 2018 that the aviation industry, FAA, and Congress should identify revenue streams to help fund FAA's UAS activities. Further, GAO guidance and Office of Management and Budget instructions provide a framework, including information requirements, for designing effective user fees. FAA officials said that they have not considered user fee mechanisms as part of their planning because they have been awaiting this report to inform their decision-making. By using available guidance as part of its planning, FAA could incorporate steps, such as identifying costs and beneficiaries, that would benefit future fee design considerations.
What GAO Recommends

GAO is recommending that FAA (1) implement a process to ensure UAS-related cost information is complete and (2) use available guidance on effective fee design to incorporate steps, as part of UAS integration planning, that will inform future fee design considerations. FAA concurred with the recommendations.
Background

Prior to the enactment of the CFO Act, government reports found that agencies lost billions of dollars through fraud, waste, abuse, and mismanagement. These reports painted the picture of a government unable to properly manage its programs, protect its assets, or provide taxpayers with the effective and economical services they expected. Reported financial management problems included (1) unreliable financial information driven by widespread weaknesses in agency internal controls over financial reporting and obsolete and inefficient agency financial management systems and (2) financial reporting practices that did not accurately disclose the current and probable future cost of operating, permit adequate comparison of actual costs among executive branch agencies, or provide the timely information required for efficient program management. For example, in 1988, we reported on internal control problems such as the Department of Defense being unable to account for hundreds of millions of dollars in advances paid by foreign customers for equipment; weak controls that permitted over $50 million in undetected fraudulent insurance claims paid by the Federal Crop Insurance Corporation; millions of dollars in interest penalties because agencies paid 25 percent of their bills late; and over $350 million in lost interest because agencies paid their bills too soon.

In 1990, Congress mandated financial management reform through enactment of the CFO Act. The CFO Act was the most comprehensive and far-reaching financial management improvement legislation enacted since the Budget and Accounting Procedures Act of 1950. The CFO Act established a Controller position at the government-wide level and a CFO position for each of the agencies identified in the act (referred to as the CFO Act agencies), provided for long-range planning, and began the process of preparing and independently auditing federal agency financial statements.
The act aimed to strengthen internal controls, integration of agency accounting and financial management systems, financial reporting practices, and the financial management workforce. The act also called for systematic performance measurement and cost information. As figure 1 shows, a number of other financial management reforms were subsequently enacted to help improve federal financial management, some of which I will briefly discuss in my statement today. A chronological list of statutes cited in this report and selected additional financial management reforms is included in appendix II.

Substantial Progress Has Been Made toward Achieving the Purposes of the CFO Act

The federal government has made substantial progress toward improving financial management and achieving the purposes of the CFO Act. Table 1 highlights some of the progress that has been made.

Leadership: OMB, Agency CFOs, and Treasury Have Provided Notable Financial Management Leadership

The centralized leadership structures envisioned by the CFO Act—a Controller position at the government-wide level and a CFO position at each CFO Act agency—have been established. OMB’s Deputy Director for Management and Office of Federal Financial Management, headed by the Controller and Deputy Controller, have led reform efforts by developing and periodically updating guidance and initiatives in areas such as financial management systems, auditing, financial reporting, internal control, and grants management. The CFO Act also required OMB to submit to Congress, annually, a 5-year plan for improving financial management—mirrored in corresponding CFO Act agency plans.
Among other things, the plan required a description of the existing financial management structure and changes needed; a strategy for developing adequate, consistent, and timely financial information; proposals for eliminating unneeded systems; identification of workforce needs and actions to ensure that those needs are met; a plan for the audit of financial statements of executive branch agencies; and an estimate of the costs for implementing the plan. The CFO Act also required annual financial management status reports government-wide and for executive branch agencies. From 1992 to 2009, OMB annually prepared comprehensive 5-year government-wide financial management plans.

Agency CFOs have significantly contributed to improvements in financial management. According to the survey we issued to CFOs and deputy CFOs, some of these improvements include advising executive leadership on financial management matters and direction for agency financial operations and professional financial management personnel; taking steps to develop and maintain financial management systems; reducing duplicative financial management systems; resolving audit findings; supporting audits of the agency’s financial statements; helping to ensure the quality of financial information; and preparing the agency financial report and other financial reports. In addition, the CFO Council periodically met to advise and coordinate activities and initiatives, including those related to internal controls, financial management systems, and enterprise risk management. OMB stated that the CFO Council is also working on a workforce plan.

In addition, the Department of the Treasury (Treasury) made contributions to improving federal financial management. Among other things, Treasury has developed and periodically updated government-wide guidance and tools to support federal financial reporting; issued, in coordination with OMB, the Financial Report of the U.S. Government since fiscal year 1997, which includes the government-wide consolidated financial statements; and developed a long-term vision for improving federal financial management. In 2010, Treasury established the Office of Financial Innovation and Transformation, which identifies and facilitates the implementation of innovative solutions to help agencies become more efficient and transparent, and Treasury also issues an annual message to agency CFOs to set the direction and goals of federal financial management.

Financial Reporting: The Preparation and Audit of Financial Statements Have Provided Much-Needed Accountability and Transparency

In 1990, OMB, Treasury, and GAO jointly established the Federal Accounting Standards Advisory Board (FASAB) to develop and promulgate accounting standards and principles for financial reporting in the federal government. In 1999, FASAB was recognized by the American Institute of Certified Public Accountants as the standard setter for generally accepted accounting principles for federal government entities. FASAB has issued 57 statements of federal financial accounting standards (SFFAS) that provide greater transparency and accountability over the federal government’s operations and financial condition, including SFFAS 36, Comprehensive Long-Term Projections for the U.S. Government, which requires the Statement of Long-Term Fiscal Projections as part of the government-wide consolidated financial statements. In addition, OMB, Treasury, and GAO have regularly provided guidance to agencies that improves transparency, consistency, and usefulness of financial reporting. Agencies have significantly improved the quality and timeliness of their financial reporting since the enactment of the CFO Act.
As expanded by the Government Management Reform Act of 1994 (GMRA) and the Accountability of Tax Dollars Act of 2002 (ATDA), federal law now requires every CFO Act agency and most other executive agencies to annually prepare audited financial statements no later than March 1—5 months after the end of the federal fiscal year. However, OMB has accelerated this due date for audited financial statements. For the first time, for fiscal year 2005, all CFO Act agencies completed their audited financial statements by November 15, approximately 45 days after the close of the fiscal year, compared to the 60–90 day requirement for public companies filing with the Securities and Exchange Commission. For fiscal year 1996, the first year that all CFO Act agencies were required to prepare audited financial statements, six CFO Act agencies received an unmodified (“clean”) audit opinion on their respective entities’ financial statements, compared with 22 CFO Act agencies that received clean audit opinions for fiscal year 2018. Today, to demonstrate transparency and accountability to Congress and citizens, the CFO Act agencies make their annual performance reports and annual financial reports, which include audited financial statements, available on their websites. In addition, since fiscal year 1997, Treasury, in coordination with OMB, has annually prepared government-wide consolidated financial statements, which are available on Treasury’s website. 
Substantial benefits have been achieved as a result of the preparation and audit of financial statements, which provide useful and necessary insight into government operations and federal agency accountability to Congress and citizens, including independent assurance about the reliability of reported financial information; greater confidence to stakeholders (governance officials, taxpayers, consumers, or regulated entities) that federal funds are being properly accounted for and assets are properly safeguarded; an assessment of the reliability and effectiveness of systems and related internal controls, including identifying control deficiencies that could lead to fraud, waste, and abuse; a focus on information security; early warnings of emerging financial management issues; and identification of noncompliance with laws and regulations, which can present challenges to agency operations.

Our CFO survey respondents (18 of 23) agreed that preparation and audit of financial statements are greatly or moderately beneficial to federal agencies, noting that the financial audit process helped identify and eliminate material weaknesses in internal control, greatly strengthened internal control processes, and led to more discipline and integrity in federal accounting. Continuation of annual agency financial statement audits is critical to maintaining accountability and sustaining financial management improvements. Also, independent assurance that financial management information included in agency financial statements is fairly stated is an important element of accountability and provides agency management, OMB, Treasury, Congress, and citizens with assurances that the information is reliable and properly accounted for.

Internal Control: Significant Improvements Have Been Made

A key goal of the CFO Act was to improve internal control to reasonably assure that the federal government’s financial management information is reliable, useful, and timely.
Compared with 1990, internal control is markedly stronger. The number of material weaknesses in internal control over financial reporting—significant issues that create the potential for inaccurate financial information that would change or influence the judgment of a reasonable financial report user relying on the information—reported as part of financial statement audits has been significantly reduced. For fiscal year 2005, financial statement auditors reported no identified material weaknesses for only seven of 24 CFO Act agencies, based on their financial statement audits; by 2018, that number had doubled to 14. In addition, auditors identified and agencies fixed thousands of internal control problems over the past 3 decades. Further, Treasury and OMB have addressed many of the internal control problems related to the processes used to prepare the U.S. government’s consolidated financial statements. However, some internal control problems are long-standing, complex, and not quickly resolved, such as accounting for transactions between federal agencies.

Annual financial statement audits also uncovered the significance of improper payments and prompted legislation to strengthen controls over improper payments. Agencies have made progress in estimating the amount of improper payments and implementing efforts to reduce them, but this remains an area of concern. We have reported improper payments as a material deficiency or weakness since the fiscal year 1997 initial audit of the U.S. government’s consolidated financial statements. For fiscal year 2018, 79 programs across 20 agencies reported estimated improper payments totaling about $151 billion. Since fiscal year 2003—when certain agencies were required to begin reporting estimated improper payments—cumulative improper payment estimates have totaled about $1.5 trillion.
The annual financial statement audits, which include an assessment of information systems controls, surfaced widespread information security weaknesses. Since fiscal year 1997, we have reported information security as a material weakness in the audit of the U.S. government’s consolidated financial statements. We have also reported information security as a government-wide high-risk area since 1997. To address information security challenges surfaced by federal agency audits, Congress enacted the Federal Information Security Management Act of 2002 and its successor, the Federal Information Security Modernization Act of 2014. These laws require agencies to develop, document, and implement programs to provide security for the information and information systems that support agency operations and assets.

Financial Management Systems: Steps Have Been Taken to Improve the Government’s Systems

One key purpose of the CFO Act and of the Federal Financial Management Improvement Act of 1996 (FFMIA) that followed was to improve federal agencies’ financial management systems. FFMIA requires CFO Act agencies to maintain financial management systems that substantially comply with (1) federal financial management systems requirements, (2) applicable federal accounting standards, and (3) the U.S. Government Standard General Ledger at the transaction level. Agencies have improved their compliance with FFMIA requirements. For fiscal year 2018, auditors reported that 16 of 24 CFO Act agencies’ financial systems substantially complied with FFMIA’s systems requirements, up from four agencies in fiscal year 1997. Federal agencies have taken steps to implement new financial systems.
While progress has been made in modernizing financial management systems, we have previously reported that efforts to modernize financial management systems have often exceeded budgeted cost, resulted in delays in delivery dates, and did not provide the anticipated system functionality and performance. For example, one-half (12 of 24) of the CFOs and deputy CFOs who responded to our survey indicated that they still use old systems and use obsolete software or hardware to perform financial management responsibilities.

Some agencies have used migration of financial systems to external providers as part of their system modernization efforts, but others have experienced challenges in using shared services. For example, some CFO Act agencies have had difficulty in finding a provider with sufficient capacity and decided to modernize their financial system internally. Others that have attempted to move their financial system to a shared service provider failed to meet their cost, schedule, and performance goals. The federal government also has taken action aimed at reducing duplicative efforts by increasing agencies’ use of shared services for commonly used computer applications—such as payroll or travel. Over the past 15 years, there have been some notable shared services successes. For example, consolidating payroll services resulted in more than $1 billion in cost savings and cost avoidance over 10 years, according to Office of Personnel Management (OPM) estimates. In April 2019, OMB issued Memorandum M-19-16 on shared services, which among other things described the process and desired outcomes for shared services and established a governance and accountability model for achieving them.

Workforce: Steps Have Been Taken to Strengthen the Federal Financial Management Workforce

To help achieve the CFO Act’s purposes, the federal government established a financial management workforce structure, improving the quality of the federal workforce.
Since then, steps have been taken to strengthen the federal financial management workforce, including the following:

In 2000, the CFO Council and OPM worked together to align qualifications standards for accounting, auditing, and budget competencies with emerging financial management position requirements.

In 2002, Congress and the President enacted legislation to empower OPM to provide agencies with additional authorities and flexibilities to manage the federal workforce and created the chief human capital officer (CHCO) positions and the CHCO Council to advise and assist agency leaders in their human capital efforts.

In 2011, OPM and the CHCO Council created a working group that identified critical skills gaps in six government-wide, mission-critical occupations, including that of auditor.

In 2017, OPM published a regulation requiring each CFO Act agency to develop a human capital operating plan describing agency-specific skills and competency gaps that are selected for closure and the strategies that will be implemented.

Preliminary Observations on Opportunities for Enhancements to Fulfill the Purposes of the CFO Act

While substantial progress has been made, additional attention is needed in several areas to help fully achieve the vision of the CFO Act and, in doing so, improve and modernize federal financial management. Based on the preliminary results from our ongoing review, we have identified several opportunities for enhancements that could help ensure that the CFO Act reaches its full potential.

1. To help ensure uniform responsibility, enhance strategic decision-making, and correct inconsistencies across government, amend agency CFOs’ statutory responsibilities to ensure that they include all of the responsibilities necessary to effectively carry out financial management activities. Currently, responsibilities vary across agencies and do not include all key responsibilities that CFOs should possess.

2.
To help ensure continuity in agency financial management operations when CFO vacancies occur, establish appropriate statutory responsibilities for deputy CFOs. This would minimize the effects of inevitable turnover in CFO positions.

3. Based on the maturity of federal financial management, extend the reporting frequency of the government-wide and agency-level financial management plans from annually to at least every 4 years (with timing to match the Government Performance and Results Act reporting requirements). In addition to the current government-wide financial management plan requirements, the plans should include actions for improving financial management systems, strengthening the federal financial management workforce, and better linking performance and cost information for decision-making. The government-wide plan should also include key selected financial management performance-based metrics. It is our view that OMB and Treasury should consult with the CFO Council, the Chief Information Officer Council, the Council of the Inspectors General on Integrity and Efficiency, GAO, and other appropriate financial management experts in preparing the government-wide plan.

4. To provide more complete and consistent measurement of the quality of agencies’ financial management, require OMB to develop, in consultation with the CFO Council, key selected performance-based metrics to assess the quality of an agency’s financial management, and changes therein. Examples of potential metrics include the number of internal control deficiencies, the number of internal control deficiencies corrected during the year, and the number of Antideficiency Act violations. The metrics should be included in the government-wide and agency-level financial management plans discussed above and agencies’ performance against the metrics reported in the annual status reports.
Also, consider requiring auditor testing and reporting on the reliability of each agency’s reported performance against the metrics.

5. To reasonably assure that key financial management information that an agency uses is reliable, require agency management to (1) identify key financial management information, in addition to financial statements, needed for effective financial management and decision-making and (2) annually assess and report on the effectiveness of internal control over financial reporting and other key financial management information. Also, consider requiring auditor testing and reporting on internal control over financial reporting and other key financial management information.

We provided a draft of the progress and opportunities for enhancements to OMB, Treasury, and OPM. OPM provided technical comments. OMB and Treasury generally agreed with enhancements 1 and 2, regarding CFOs’ and deputy CFOs’ statutory responsibilities. OMB generally disagreed with enhancement 3, regarding preparation of government-wide and agency-level financial management plans, stating that developing government-wide plans poses an administrative burden and is no longer relevant in light of the current state of financial management. However, we believe that a complete and integrated government-wide plan could help to ensure continuity in direction and a comprehensive understanding of the status and financial management challenges across government. Eight of the 10 financial experts we interviewed stated that without a government-wide financial management plan, the government lacks a clear strategic direction and agency improvement efforts may not appropriately address government-wide priorities. For enhancement 4, regarding performance metrics for agencies’ financial management, OMB generally disagreed, stating that it would be difficult to develop additional metrics that would apply to all agencies.
We recognize the challenges in developing the metrics but continue to believe that a limited number of key metrics can be developed to effectively assess the quality of agencies’ financial management. For enhancement 5, regarding identifying key financial management information and assessing, reporting, and auditing internal control, Treasury generally agreed and OMB generally disagreed, noting that no action is needed and these controls are adequately addressed under existing initiatives and the enterprise risk management program contained in OMB guidance. We believe that a separate assessment is needed to reasonably assure that key agency financial management information used by the agency is reliable.

Chairman Enzi, Ranking Member Sanders, and Members of the Committee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time.

GAO Contacts and Staff Acknowledgments

If you or your staff have any questions about this testimony, please contact Dawn B. Simpson, Director, Financial Management and Assurance, at (202) 512-3406 or simpsondb@gao.gov or Robert F. Dacey, Chief Accountant, at (202) 512-3406 or daceyr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Phyllis Anderson (Assistant Director), LaDonna Towler (Assistant Director), Beryl Davis (Director), David Ballard, Jeremy Choi, Anthony Clark, Patrick Frey, Ryan Guthrie, Isabella Hur, Jason Kelly, Jason Kirwan, Chris Klemmer, Michael LaForge, Jill Lacey, Diana Lee, Christy Ley, Keegan Maguigan, Lisa Motley, Heena Patel, Matthew Valenta, Walter Vance, and William Ye.
Appendix I: Objectives, Scope, and Methodology

This testimony highlights some of the most significant achievements in federal government financial management since enactment of the Chief Financial Officers Act of 1990 (CFO Act) and some preliminary observations on how federal financial management can be enhanced. The information in this testimony is based on our ongoing review and analysis of relevant legislation; federal financial management guidance, such as Office of Management and Budget (OMB) circulars; reports on financial management issued by the Government Accountability Office (GAO), agency offices of inspector general, and others; summarization of interviews and a panel discussion with experts in federal financial management; and summarization of results of GAO surveys to federal chief financial officers (CFO), inspectors general (IG), and independent public accountants (IPA).

To obtain perspectives of agency personnel on federal financial management, we developed and administered two web-based surveys from May 22, 2019, through August 5, 2019. We administered one survey to 47 individuals from the CFO offices of the CFO Act agencies and included individuals holding the position of CFO, acting CFO, deputy CFO, or equivalent at these agencies as of May 1, 2019. Of the 47 individuals we surveyed, 24 individuals responded, which resulted in a 51 percent response rate. We administered the other survey to 53 individuals holding the position of IG, deputy IG, or counsel to the IG at the CFO Act agencies as of May 1, 2019, and an additional 24 IPAs who have performed financial statement audits for these agencies since fiscal year 2014. Of the 77 individuals we surveyed, 29 individuals responded, which resulted in a 38 percent response rate. Results of both surveys only represent the views of those individuals who responded to the surveys and may not be representative of all individuals from the CFO offices, IG offices, or IPA offices of the CFO Act agencies.
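The response rates above follow from simple arithmetic: responses divided by individuals surveyed, rounded to the nearest whole percent. A minimal sketch (the helper function name is ours, not GAO's):

```python
def response_rate(responses: int, surveyed: int) -> int:
    """Survey response rate as a whole-number percentage (nearest percent)."""
    return round(100 * responses / surveyed)

# CFO/deputy CFO survey: 24 of 47 individuals responded.
print(response_rate(24, 47))  # prints 51
# IG/deputy IG/counsel and IPA survey: 29 of 77 individuals responded.
print(response_rate(29, 77))  # prints 38
```

This reproduces the 51 percent and 38 percent figures reported in the text; note that rates computed from self-selected respondents describe only those who answered, as the appendix cautions.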
In May 2019, we hosted an expert meeting with the theme “CFO Act - Progress and Challenges.” When planning the meeting, we considered experts with a broad array of expertise. We had a total of eight experts participate, representing both the federal and private sectors. They included individuals who had served in auditing capacities and individuals who had represented federal entities being audited. Some experts were currently serving in their roles, and others had retired. Including experts with both present and past experiences helped to ensure an examination and discussion of the history of the CFO Act from its inception to the present. Topics for discussion included progress and challenges since enactment of the CFO Act, the role of the Department of the Treasury (Treasury) and OMB with regard to the act, and suggestions for improvements to financial management processes and systems. The meeting transcript was categorized by key points, including progress, challenges, OMB’s and Treasury’s roles, government-wide plans, financial management systems, shared services, leading practices, and proposed reforms or suggestions for improvements.

Appendix II: Selected Statutes Governing Federal Entity Financial Management and Reporting, Including Related Systems and Personnel

Budget and Accounting Procedures Act of 1950, ch. 946, §§ 110-118, 64 Stat. 834 (Sept. 12, 1950).
Federal Managers’ Financial Integrity Act of 1982, Pub. L. No. 97-255, 96 Stat. 814 (Sept. 8, 1982), codified at 31 U.S.C. § 3512(c), (d).
Chief Financial Officers Act of 1990, Pub. L. No. 101-576, 104 Stat. 2838 (Nov. 15, 1990).
Government Performance and Results Act of 1993, Pub. L. No. 103-62, 107 Stat. 287 (Aug. 3, 1993).
Government Management Reform Act of 1994, Pub. L. No. 103-356, title IV, § 405, 108 Stat. 3410, 3415 (Oct. 13, 1994).
Clinger-Cohen Act of 1996, Pub. L. No. 104-106, div. D & E, 110 Stat. 642 (Feb. 10, 1996), codified as amended at 40 U.S.C. § 11101, et seq.
Federal Financial Management Improvement Act of 1996, Pub. L. No. 104-208, div. A, § 101(f), title VIII, 110 Stat. 3009-389 (Sept. 30, 1996), codified at 31 U.S.C. § 3512 note.
Reports Consolidation Act of 2000, Pub. L. No. 106-531, 114 Stat. 2537 (Nov. 22, 2000), codified as amended at 31 U.S.C. § 3516.
Accountability of Tax Dollars Act of 2002, Pub. L. No. 107-289, 116 Stat. 2049 (Nov. 7, 2002).
Chief Human Capital Officers Act of 2002, Pub. L. No. 107-296, title XIII, subtitle A, 116 Stat. 2135, 2287 (Nov. 25, 2002).
Improper Payments Information Act of 2002, Pub. L. No. 107-300, 116 Stat. 2350 (Nov. 26, 2002), codified as amended at 31 U.S.C. § 3321 note.
Federal Information Security Management Act of 2002, Pub. L. No. 107-347, title III, 116 Stat. 2899, 2946 (Dec. 17, 2002), codified as amended at 44 U.S.C. §§ 3551-3558.
Department of Homeland Security Financial Accountability Act, Pub. L. No. 108-330, 118 Stat. 1275 (Oct. 16, 2004).
Federal Funding Accountability and Transparency Act of 2006, Pub. L. No. 109-282, 120 Stat. 1186 (Sept. 26, 2006), codified as amended at 31 U.S.C. § 6101 note.
Improper Payments Elimination and Recovery Act of 2010, Pub. L. No. 111-204, 124 Stat. 2224 (July 22, 2010), codified as amended at 31 U.S.C. § 3321 note.
GPRA Modernization Act of 2010, Pub. L. No. 111-352, 124 Stat. 3866 (Jan. 4, 2011).
Improper Payments Elimination and Recovery Improvement Act of 2012, Pub. L. No. 112-248, 126 Stat. 2390 (Jan. 10, 2013), codified as amended at 31 U.S.C. § 3321 note.
Digital Accountability and Transparency Act of 2014, Pub. L. No. 113-101, 128 Stat. 1146 (May 9, 2014), codified at 31 U.S.C. § 6101 note.
Federal Information Security Modernization Act of 2014, Pub. L. No. 113-283 (Dec. 18, 2014), codified at 44 U.S.C. §§ 3551-3558.
Carl Levin and Howard P. ‘Buck’ McKeon National Defense Authorization Act for Fiscal Year 2015, Pub. L. No. 113-291, div. A, title VIII, subtitle D, 128 Stat. 3292, 3438-3450 (Dec. 19, 2014) (commonly referred to as the Federal Information Technology Acquisition Reform Act).
Federal Improper Payments Coordination Act of 2015, Pub. L. No. 114-109, 129 Stat. 2225 (Dec. 18, 2015).
Fraud Reduction and Data Analytics Act of 2015, Pub. L. No. 114-186, 130 Stat. 546 (June 30, 2016).
National Defense Authorization Act for Fiscal Year 2018, Pub. L. No. 115-91, div. A, title X, subtitle G, 131 Stat. 1283, 1586 (Dec. 12, 2017), codified at 40 U.S.C. § 11301 note (commonly referred to as the Modernizing Government Technology Act).
Foundations for Evidence-Based Policymaking Act of 2018, Pub. L. No. 115-435, 132 Stat. 5529 (Jan. 14, 2019).

Appendix III: Opportunities for Enhancements to Fulfill the Purposes of the CFO Act

Standardize CFO and Deputy CFO Responsibilities across Government

The CFO Act provided agency CFOs with broad responsibilities for all financial management activities of their respective agencies, including financial management systems (including financial reporting and internal controls); agency financial management personnel, activities, and operations; preparation of financial statements; and monitoring of budget execution. The specific responsibilities assigned to CFOs vary among agencies and are inconsistent government-wide. We previously reported that CFO Act agencies need to ensure that CFOs possess the necessary authorities within their agencies to achieve change. For instance, because of the interdependency of the budget and accounting functions, some agencies have included both budget formulation and execution functions under the CFO’s authority while others have not.
Most financial experts we interviewed, along with the CFO Council and the Council of the Inspectors General on Integrity and Efficiency (CIGIE), stated that to allow for better strategic decision-making, CFO responsibilities should include budget formulation and execution, planning and performance, risk management and internal controls, financial systems, and accounting. Most experts agreed that standardizing the CFO portfolio across agencies would promote standardized financial management training and education and consistent skill sets across agencies, both at the executive and staff levels. The CFO Council and CIGIE have identified turnover of agency CFOs, even during the same administration, as a significant challenge. They also stated that major financial management improvement initiatives can take years to fully implement and realize, often outlasting the average tenure of a political appointee to a CFO position. With frequent CFO turnover and potentially lengthy intervals between official appointments, long-term planning and leadership continuity can suffer because career deputy CFOs, who frequently serve as acting CFOs during CFO vacancies, do not always have the same breadth of responsibilities as CFOs. Establishing appropriate responsibilities for deputy CFOs would better prepare them to act for CFOs during vacancies. In our survey of CFOs and deputy CFOs, 17 of 24 respondents stated that the deputy CFO position should include all, most, or many of the same responsibilities as the CFO position. Additionally, some respondents stated that it is important for the deputy CFO to be able to step into the CFO position should there be a vacancy. CIGIE also said that deputy CFOs should be sufficiently empowered with more standard responsibilities to ensure effective succession planning.
Prepare Government-Wide and Agency-Level Financial Management Plans

The CFO Act called for annual comprehensive government-wide 5-year plans for improving federal financial management. It also called for each agency CFO to annually prepare a plan to implement the government-wide plan prepared by the Office of Management and Budget (OMB). Moreover, it required annual government-wide and agency-level status reports. The OMB plans and status reports were to be submitted to Congress to enable comprehensive congressional oversight. However, OMB has neither prepared nor submitted to Congress an annual 5-year government-wide plan, as the CFO Act requires, since it issued its 2009 report. Instead, OMB stated that it is meeting the intent of the requirement by providing information in the President’s Management Agenda (PMA), in the annual government-wide consolidated financial statements, and in documents placed on Performance.gov and the CFO Council’s website. For the consolidated financial statements, the information is included in a section of the Management’s Discussion and Analysis (MD&A) entitled Financial Management. This section discusses several of the priorities and accomplishments in financial management for the prior and current fiscal years and in some cases discusses goals for the next fiscal year. In addition, according to OMB, financial management elements are being considered in implementing the 2018 PMA. The CFO Council, in coordination with OMB, has identified six financial management cross-agency priorities and is developing detailed plans for each. Two of these plans, results-oriented accountability for grants and getting payments right, have been completed and posted on Performance.gov. The others are being managed by executive steering committees comprising CFO Council–approved members.
While the various MD&A Financial Management sections, the PMA, and other OMB documents contain relevant information about improvements in financial management, these documents do not provide a complete and integrated financial management strategy for making continued improvements and for reporting on the administration’s accomplishments in a comprehensive manner. In 2019, OMB proposed eliminating the CFO Act requirement for a separate comprehensive plan, arguing that this change would give it the flexibility to report the information most relevant to financial management in the most efficient manner. However, a complete and integrated financial management plan would help address long-standing, costly, and challenging concerns in financial management in a strategic, comprehensive, efficient, and cost-effective manner. Eight of the 10 financial experts we interviewed stated that without a government-wide financial management plan, the government lacks a clear strategic direction and agency improvement efforts may not appropriately address government-wide priorities. To hold people accountable and facilitate congressional oversight, a complete and integrated financial management plan should identify the resources required and measure progress through interim milestones with completion dates. Several experts also stated that a government-wide plan should be prepared every few years instead of annually, while the status report could continue to be prepared annually. A complete and integrated government-wide financial management plan and supporting agency plans, prepared every few years, could help ensure continuity in direction and provide a more comprehensive means of gauging progress toward addressing financial management challenges across government.
Better Link Performance and Cost Information for Decision-making The CFO Act calls for agencies to (1) develop and maintain integrated accounting and financial management systems that provide for, among other things, systematic measurement of performance and (2) develop and report cost information. While the Government Performance and Results Act of 1993 (GPRA) laid a foundation for results-oriented management, we found that agencies’ reported use of performance data to make decisions has generally not improved. While agencies have made efforts in this direction, opportunity exists to enhance the availability and reliability of performance and cost information, and better link this information for decision-making. One example of this is linking program performance to program cost. A number of agencies have implemented activity-based costing, which creates a cost model of an organization by identifying the activities performed, the resources consumed, and the outputs (products and services) that an organization produces. However, linking cost and performance information for effective decision-making has been challenging. Respondents to our CFO survey noted that agencies face challenges in (1) developing and maintaining an integrated agency accounting and financial management system (19 of 24 respondents), (2) developing and reporting cost information (19 of 24 respondents), and (3) having financial management systems that produce the needed financial data to help address agency performance goals (21 of 24 respondents). Agencies that lack readily available, reliable, and linked performance and cost information may not be able to effectively make financial management decisions that are based on dollars allocated and results achieved and thus may miss opportunities to reduce costs or enhance mission effectiveness. 
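The activity-based costing approach described above can be illustrated with a minimal sketch. The activity names and dollar figures below are invented for illustration; they show only the mechanics of tracing resource costs through activities to outputs:

```python
# Minimal activity-based costing sketch: trace resource costs to
# activities, then assign activity costs to outputs in proportion to
# each output's consumption of the activity. All names and dollar
# figures are hypothetical.

# Cost of each activity (resources consumed, in dollars).
activity_costs = {"process_claims": 400_000, "audit_claims": 100_000}

# Units of each activity that a given output (product or service) consumed.
consumption = {
    "routine_claim": {"process_claims": 8_000, "audit_claims": 500},
    "complex_claim": {"process_claims": 2_000, "audit_claims": 1_500},
}

# Total units driven for each activity, across all outputs.
totals = {a: sum(c[a] for c in consumption.values()) for a in activity_costs}

def output_cost(output: str) -> float:
    """Cost of an output: sum over activities of units consumed * unit rate."""
    return sum(consumption[output][a] * activity_costs[a] / totals[a]
               for a in activity_costs)

print(output_cost("routine_claim"))  # 345000.0
```

Each output’s cost is the sum, over activities, of the units it consumed times that activity’s unit rate, so the output costs reconcile back to the total resources traced in.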
Develop a Broader Set of Key Selected Financial Management Performance-Based Metrics Agencies have limited financial management performance-based metrics (e.g., financial statement audit opinion and number of reported material weaknesses in internal control over financial reporting) to help them assess the quality of their financial management. A broader set of key selected financial management performance-based metrics can provide more complete analysis across the breadth of financial management functions. Examples of potential metrics include the number of internal control deficiencies, the number of internal control deficiencies corrected during the year, and the number of Antideficiency Act violations. Key selected financial management performance-based metrics, including identifying metrics in the government-wide and agency-level plans discussed above and reporting of agency performance against the metrics in the annual status reports, can help ensure that the federal government better manages and uses the resources entrusted to it. Also, auditor testing and reporting on each agency’s reported performance against the metrics can provide assurance that such information is reliable. Rectify Internal Control Issues in Certain Areas The CFO Act required CFOs to develop and maintain an integrated agency accounting and financial management system that provides for complete, reliable, consistent, and timely information prepared on a uniform basis and that responds to agency management’s financial information needs. To ensure the reliability of financial information, agencies need effective internal controls. While agencies have made important progress in strengthening internal control, as noted earlier, the federal government faces many internal control problems. The following discusses three areas: assessing internal control over key financial management information, government-wide improper payments, and material weaknesses preventing an opinion on the U.S. 
government’s consolidated financial statements. Assessing Internal Control over Key Financial Management Information Management may not have reasonable assurance that internal control over financial reporting and other key financial management information that the agency uses is reliable. Since fiscal year 1997, agency auditors’ assessments of the effectiveness of internal control over financial reporting have identified long-standing, as well as new, material weaknesses. As a result of new material weaknesses, a number of agencies have not been able to sustain “clean” audit opinions on their financial statements. In addition, continuing material weaknesses have hindered two CFO Act agencies, the Departments of Defense and Housing and Urban Development, and the government as a whole, from achieving clean audit opinions. For fiscal year 2018, auditors of CFO Act agencies reported a total of 41 material weaknesses. One key to strengthening internal control over financial reporting at federal entities has been OMB Circular No. A-123, which carries out OMB’s responsibility to provide guidelines for agencies to follow in evaluating their systems of internal control. In December 2004, OMB issued A-123, Appendix A, Internal Controls over Financial Reporting, which provided a methodology with which agency management could assess, document, and report on internal control over financial reporting. It emphasized management’s responsibility for establishing and maintaining effective internal control over financial reporting. Appendix A required CFO Act agency management to annually assess the adequacy of internal control over financial reporting, provide a report on identified material weaknesses and corrective actions, and provide separate assurance on the effectiveness of the agency’s internal control over financial reporting. The CFO Council subsequently issued the Implementation Guide for Appendix A in 2005. In 2018, OMB reported that since the issuance of OMB Circular No. 
A-123’s Appendix A, federal agencies have made substantial progress in improving their internal controls over financial reporting. OMB referred to this as a rigorous process for agencies to separately assess internal control over financial reporting. Beginning in fiscal year 2018, however, OMB no longer requires such a process. On June 6, 2018, OMB issued an updated Appendix A, Management of Reporting and Data Integrity Risk. The revised Appendix A integrates internal control over reporting, along with internal controls over operations and compliance, in an overall assessment of the agency’s internal control. This reporting guidance includes internal control over financial reporting as well as over other financial and nonfinancial information. It also requires that agencies develop and maintain a data quality plan that considers the risks to data quality in federal spending data required by the Digital Accountability and Transparency Act of 2014 (DATA Act) and any controls that would manage such risks in accordance with OMB Circular No. A-123. Further, agency senior accountable officials are required to certify each quarter, among other things, that their data submissions under the DATA Act are valid and reliable. However, the appendix does not require a separate management assessment of internal controls over the reliability of federal spending data. As we previously reported, there are significant data quality problems related to the completeness and accuracy of DATA Act data. In addition, the Federal Financial Management Improvement Act of 1996 (FFMIA) requires CFO Act agencies and their auditors to determine whether agency financial management systems comply substantially with federal financial management systems requirements.
However, such systems requirements are focused on preparing agency financial statements and do not generally include system requirements related to other key financial management information (e.g., performance information and cost information) needed for management decision- making. We have expressed concerns about the adequacy of financial management systems requirements contained in the Treasury Financial Manual. In our survey of CFOs and deputy CFOs, most (20 of 24) respondents said that ensuring data quality of financial information was somewhat, very, or extremely challenging. Without (1) identifying all key financial management information needed for effective financial management and decision-making, (2) separately assessing and reporting on the effectiveness of internal control over financial reporting and other key financial management information, and (3) independently assessing such controls, management may lack reasonable assurance of the reliability of such information. Government-Wide Improper Payments Improper payments have consistently been a government-wide issue, despite efforts to reduce them. Since fiscal year 2003, cumulative improper payment estimates have totaled about $1.5 trillion. Although agencies have made progress identifying and reducing improper payments, more work needs to be done to address this government-wide material weakness in internal control. We continue to report, as a government-wide material weakness in internal control, that the federal government is unable to determine the full extent to which improper payments occur and reasonably assure that appropriate actions are taken to reduce them. OMB stopped reporting a government-wide improper payment estimate in fiscal year 2017. According to OMB, it stopped reporting a government-wide estimate because program-by-program improper payment data were more useful. 
However, we believe that aggregating improper payment estimates is essential for transparency; without a government-wide estimate, the extent and magnitude of improper payments across the federal government are not readily available to key decision makers. As such, we support a key provision in the Payment Integrity Information Act of 2019, a bill that has passed the Senate, which would require OMB to report a government-wide improper payment estimate. Implementing this provision would be a positive step in determining the overall progress the federal government is making in the improper payment area. The federal government also needs to reasonably assure that agencies take appropriate actions to reduce improper payments. For example, in supplemental appropriations acts providing disaster relief funds in 2017 and 2018, Congress mandated an oversight framework for these funds by requiring federal agencies to submit internal control plans to Congress, based on OMB guidance. However, in June 2019, we reported that OMB lacked a strategy for ensuring that federal agencies provide sufficient, useful plans in a timely manner for oversight of disaster relief funds. As a result, we found that selected agencies did not submit their disaster aid internal control plans on time. The plans also lacked necessary information, such as how the selected agencies plan to meet OMB guidance and federal internal control standards. Such a strategy could help provide Congress some assurance that agencies will establish effective and efficient controls over disaster aid. The federal government also needs to reasonably assure that states, local governments, and nonprofit organizations take appropriate actions to reduce their improper payments of federal funds. For example, OMB recently revised its compliance supplement for Medicaid to enable auditors, as part of the single audit of all federal financial assistance that a state received or administered, to test beneficiaries for eligibility for the program.
If this expansion of the compliance supplement is successful for Medicaid, other federal programs that states, local governments, and nonprofit organizations administer may also benefit from such revisions.

Material Weaknesses Preventing an Opinion on the U.S. Government’s Consolidated Financial Statements

Since the federal government began preparing consolidated financial statements over 20 years ago, three major impediments have continued to prevent us from rendering an opinion on its accrual-based consolidated financial statements.

1. Serious financial management problems at the Department of Defense (DOD) have prevented its financial statements from being auditable. DOD’s strategy for achieving a clean opinion on its financial statements and improving overall financial management has shifted from preparing for audit readiness to undergoing financial statement audits and remediating audit findings. In a positive development, DOD underwent an audit of its entity-wide fiscal year 2018 financial statements, which resulted in a disclaimer of opinion issued by the DOD Office of Inspector General (OIG). The DOD OIG also reported 20 material weaknesses in internal control over financial reporting, contributing to its disclaimer of opinion. DOD has acknowledged that achieving a clean audit opinion will take time. However, it stated that over the next several years, the resolution of audit findings will serve as an objective measure of progress toward that goal. DOD will need to develop and effectively monitor corrective action plans to appropriately address audit findings in a timely manner. Partially in response to our recommendations, DOD recently developed a centralized database for tracking audit findings, recommendations, and related corrective action plans.

2. While significant progress has been made over the past few years, the federal government continues to be unable to adequately account for intragovernmental activity and balances between federal entities. Federal entities are responsible for properly accounting for and reporting their intragovernmental activity and balances in their entity financial statements. When preparing the consolidated financial statements, intragovernmental activity and balances between federal entities should be in agreement and must be subtracted out, or eliminated, from the financial statements. OMB and the Department of the Treasury (Treasury) have issued guidance directing component entities to reconcile intragovernmental activity and balances with their trading partners and resolve identified differences. In addition, the guidance directs the CFOs of significant component entities to report to Treasury, their respective inspectors general, and GAO on the extent and results of intragovernmental activity and balance reconciliation efforts as of the end of the fiscal year.

3. The federal government has an ineffective process for preparing the consolidated financial statements. Treasury, in coordination with OMB, has implemented several corrective actions during the past few years related to preparing the consolidated financial statements. Corrective actions included improving systems used for compiling the consolidated financial statements, enhancing guidance for collecting data from component entities, and implementing procedures to address certain internal control deficiencies. However, the federal government’s systems, controls, and procedures were not adequate to reasonably assure that the consolidated financial statements are consistent with the underlying audited entity financial statements, properly balanced, and in accordance with U.S. generally accepted accounting principles.
Further, significant uncertainties, primarily related to achieving projected reductions in Medicare cost growth, and a material weakness in internal control prevented us from expressing an opinion on the sustainability financial statements. We, in connection with our audits, and agency auditors, in connection with their audits, have identified numerous deficiencies underlying the above weaknesses and have provided recommendations for corrective action. Improve Financial Management Systems The federal government has made unsuccessful efforts to implement new financial management systems, most notably at DOD, the Internal Revenue Service, the Department of Homeland Security, and the Department of Housing and Urban Development—which have spent billions of dollars on failed systems. We have reported that the executive branch has undertaken numerous initiatives to better manage the more than $90 billion that the federal government annually invests in information technology (IT). However, we reported that federal IT investments too frequently fail or incur cost overruns and schedule slippages, while contributing little to mission-related outcomes. These investments often suffered from a lack of disciplined and effective management, including inadequate project planning, clearly defined requirements, and program oversight and governance. In 2015, we added the government’s management of IT acquisitions and operations to our High-Risk List, where it remains in 2019. In fiscal year 2018, eight of 24 CFO Act agencies’ financial management systems still did not substantially comply with FFMIA’s systems requirements. Moreover, a number of agencies rely on critical legacy systems that use outdated languages, have unsupported hardware and software, and are operating with known security vulnerabilities. We previously reported that some agencies have not established complete modernization plans and face an increased risk of cost overruns, schedule delays, and project failure. 
In addition, most respondents to our CFO survey (15 of 24) stated that it has been extremely, very, or somewhat challenging to work with financial management systems that are old and use obsolete software or hardware. Efforts to promote greater use of shared services in certain areas, such as human resources and financial management activities, resulted in some cost savings and efficiency gains, but challenges (e.g., implementation weaknesses, project scheduling, and project management and costs) impede widespread adoption. Almost all respondents to our CFO survey (22 of 24) indicated that they currently use or plan to use shared services. Most of those respondents (16 of 24) believed that use of shared services could help reduce costs. As noted above, in April 2019, OMB issued Memorandum M-19-16 on shared services, which among other things described the process and desired outcomes for shared services and established a governance and accountability model for achieving them. Also, OMB stated that, building off of OMB’s and Treasury’s efforts to create a Quality Service Management Office for Financial Management, they are establishing a more centralized approach to standardize, consolidate, and automate agency financial systems. A government-wide plan for improving federal financial management systems, including shared services, that is incorporated into the government-wide and agency-level plans discussed above could help ensure, among other things, that financial management system problems are addressed. Strengthen the Federal Financial Management Workforce Insufficient numbers of staff, inadequate workforce planning, and a lack of training in critical areas create gaps between what the federal government needs and the skills federal employees have. We have made a number of recommendations toward achieving a federal workforce with the necessary skills, including in financial management. 
In a 2007 testimony, we reported that one key challenge to strong federal financial management is building a financial management workforce for the future. This holds true today. Our CFO survey respondents (14 of 24) noted that CFO Act agencies do not have all of the staff with the professional qualifications, capabilities, and expertise needed to effectively support financial management operations and practices. With rapid changes, such as emerging technologies and growing availability of data, it is critical for the government to identify and strategically plan for the future workforce to achieve effective financial management. A comprehensive, long-term plan to address the challenges in the federal financial management workforce that is incorporated into the government-wide and agency-level plans discussed above could help ensure that agencies are held accountable for a long-term vision of attracting and retaining a workforce that maintains the professional qualifications, capabilities, and expertise that will meet current and future needs. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study Prior to the enactment of the CFO Act, government reports found that agencies lost billions of dollars through fraud, waste, abuse, and mismanagement. These reports painted the picture of a government unable to properly manage its programs, protect its assets, or provide taxpayers with the effective and economical services they expected. The CFO Act was enacted to address these problems—calling for comprehensive federal financial management reform. Among other things, the act established CFO positions, provided for long-range planning, and began the process of auditing federal agency financial statements. The act also called for integrating accounting and financial management systems and systematic performance measurement and cost information. This statement is based on preliminary observations from GAO's ongoing review of the federal government's efforts to meet the requirements of the CFO Act. GAO reviewed federal financial management legislation, guidance, and reports. GAO also conducted interviews and a panel discussion with experts in federal financial management, and surveyed federal CFOs, inspectors general, and independent public accountants. What GAO Found The federal government has made significant strides in improving financial management since enactment of the Chief Financial Officers Act of 1990 (CFO Act). Substantial progress has occurred in areas such as improved internal controls, reliable agency financial statements, and establishment of chief financial officer (CFO) positions. To help ensure that the CFO Act achieves its full potential, there are several opportunities for enhancement. Standardize CFO and deputy CFO responsibilities across government. The responsibilities assigned to CFOs vary among agencies. Uniform and effective responsibilities of CFOs would help enhance strategic decision-making and correct inconsistencies across government. 
In addition, deputy CFOs should have appropriate responsibilities in order to be better prepared to act for CFOs when there are vacancies. Prepare government-wide and agency-level financial management plans. Since 2009, the Office of Management and Budget (OMB) has not prepared the annual 5-year government-wide plans that the CFO Act requires. Instead, OMB has provided information in the President's Management Agenda, the U.S. government's consolidated financial statements, and other documents. A complete and integrated government-wide financial management plan and supporting agency plans, prepared every few years, could help ensure continuity in direction and provide a more comprehensive means of gauging progress toward addressing financial management challenges across government. Better link performance and cost information for decision-making. While agencies have made efforts in this direction, opportunities exist for agencies to better link performance and cost information to effectively make financial management decisions that are based on dollars allocated and results achieved. Develop a broader set of key selected financial management performance-based metrics. Agencies currently have limited performance-based metrics to help them assess the quality of financial management and ensure that the federal government better manages and uses the resources entrusted to it. Rectify internal control issues in certain areas. The federal government faces many internal control problems. For example, assessments continue to identify long-standing, as well as new, material weaknesses. Improper payments continue to be a long-standing internal control issue. And finally, material weaknesses continue to prevent GAO from rendering an opinion on the U.S. government's consolidated financial statements. Improve financial management systems.
The federal government has made unsuccessful efforts to implement new financial management systems at several agencies and spent billions of dollars on failed systems. Moreover, in fiscal year 2018, eight of 24 CFO Act agencies still did not substantially comply with federal systems requirements. Strengthen the federal financial management workforce. With rapid changes, such as emerging technologies, it is critical for the government to identify and strategically plan for the future workforce. What GAO Recommends GAO obtained comments from OMB, the Department of the Treasury, and the Office of Personnel Management and has incorporated their comments as appropriate. As GAO finalizes its work for issuance next year, it will consider feedback on its work in making recommendations related to the opportunities for enhancement, as appropriate.
Background AFSOC is the Air Force component of U.S. Special Operations Command and is responsible for providing Air Force capabilities and forces to support special operations activities. Special operations are operations requiring unique modes of employment, tactical techniques, equipment, and training often conducted in hostile, denied, or politically sensitive environments. Demand for AFSOC capabilities, including those provided by the ARC, is identified as part of the Department of Defense’s (DOD) Global Force Management process for assigning and allocating forces to meet global requirements. This process allows the Secretary of Defense to strategically manage forces—including the military services, conventional forces, and special operations forces—to support strategic guidance and meet combatant commander requirements. As part of this process, the Joint Staff validates requirements for forces. U.S. Special Operations Command, as the joint force provider, is responsible for identifying and recommending forces to support special operations requirements. U.S. Special Operations Command coordinates with its service component commands, including AFSOC, to determine which capabilities and specific units are best suited to meet validated requirements for special operations capabilities. After receiving these requirements, AFSOC considers its available options to provide the capabilities needed. This consideration includes reviewing active duty and reserve component units that provide specific sets of capabilities, such as intelligence, surveillance, and reconnaissance; personnel recovery; and radio and television broadcasting for psychological operations. If AFSOC, in conjunction with Headquarters Air Force, determines that the best solution to meet a requirement is to use capabilities from the ARC, it can rely on either volunteerism or involuntary recall to active duty—referred to as involuntary mobilization—to activate the needed forces. 
These two types of activation are described below. Volunteerism. The Secretary of the Air Force is authorized to activate ARC personnel on active duty with the consent of those individuals; however, the consent of the state governor is required for the voluntary activation of ANG personnel. According to Joint Publication 4-05, Joint Mobilization Planning (Feb. 21, 2014), volunteerism is important because it enables a service to fill required positions with reserve component personnel without its counting against the statutory limits related to involuntary mobilization. However, the guidance also states that volunteerism should be used judiciously, because excessive use of volunteers removes personnel from reserve component units, which could reduce a unit's readiness in the event of unit mobilization. Mobilization planners must also take into account dwell time policy in relation to deployments, as well as the Air Force's specific goals for managing the operational tempo of its forces. Involuntary Mobilization. Any unit or individual of a reserve component may be ordered to active duty under multiple mobilization statutory authorities under Title 10 of the U.S. Code that vary regarding the number of personnel who can be mobilized, the duration of the mobilization, and the approval authority. For example, section 12304 of Title 10, U.S. Code, provides authority to the President to involuntarily activate up to 200,000 members of the selected reserve for up to 365 days to augment active forces for an operational mission or in response to certain emergencies. AFSOC’s Mobilization Process Does Not Fully Support ARC Needs for Timely and Reliable Information Air Force Has Established Guidance and Processes for Mobilizing the ARC AFSOC is required to follow Air Force guidance for accessing ARC units and personnel.
The Air Force guidance implements DOD Instruction 1235.12, Accessing the Reserve Components (RC), which establishes the overarching policies and procedures for accessing the reserve components for all military departments. When AFSOC officials determine that ARC capabilities are the appropriate option for a given special operation requirement, their access to the reserve component is governed by Air Force Instruction 10-301, Managing Operational Utilization Requirements of the Air Reserve Component Forces. This instruction outlines roles and responsibilities for managing requirements for reserve component capabilities accessed through both involuntary mobilizations and volunteerism. Among other things, it establishes that AFSOC use the reserve component in a cyclical or periodic manner that provides predictability to ARC individuals, to the individual’s employer, and to the combatant command receiving the capabilities. The process for accessing the reserve component through involuntary mobilization is further outlined in Air Force Instruction 10-402, Mobilization Planning. This guidance implements and expands on the specific timelines for particular milestones during the mobilization process established in DOD Instruction 1235.12, such as the identification of the types of capabilities required and of the unit responsible for providing them. These timelines vary, depending on whether the requirement for capabilities is known well ahead of mobilization—a rotational, or preplanned, requirement—or, conversely, is emergent. Rotational or preplanned requirements: AFSOC must provide the reserve component with a request for particular capabilities at least 330 days prior to the mobilization, to allow ANG or AFR officials to identify the specific individuals who are available to support the request. Air Force guidance communicates the time frames in which reserve component personnel are to receive their mobilization orders. 
Specifically, AFSOC is required to submit requests to mobilize the ARC to Air Force headquarters to provide the Secretary of the Air Force enough time to approve the request; and then to communicate with ANG and AFR in sufficient time to provide personnel with their mobilization orders at least 180 days prior to the start date of rotational or preplanned requirements. Emergent requirements: AFSOC is required to submit requests so that personnel receive notification at least 120 days prior to the mobilization date. In comparison, there are no specific time frames in the guidance for accessing the reserve component through volunteerism. The guidance generally discusses volunteerism as an approach that allows ARC personnel to quickly respond to requests for forces. AFSOC officials told us that they have observed an increase in requests from ARC units to use involuntary mobilizations rather than rely on the use of volunteerism, and that they anticipate this trend to continue, since involuntary mobilizations afford more predictability than do voluntary deployments. As such, involuntary mobilizations help personnel manage the frequency of time spent away from home and maximize their access to military medical and retirement benefits. Specifically: Managing time away from home: Air Force guidance limits the frequency of involuntary mobilizations for an individual to a standard of five periods of time spent at home for every one period spent involuntarily mobilized. For example, an individual involuntarily mobilized for 90 days would not be available to AFSOC for involuntary mobilization for another 450 days after the individual’s return. This provides ARC personnel with some assurance that they will not deploy again for a specific window of time, unless they volunteer to do so.
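The 5:1 dwell-to-mobilization standard described above reduces to simple arithmetic; the sketch below is our own illustration of that calculation (the function name is hypothetical, and this is not an official Air Force planning tool):

```python
# Illustrative sketch of the 5:1 dwell-to-mobilization standard
# described above. The function name is our own; this is not an
# official Air Force calculation.

def dwell_days(mobilized_days: int, dwell_ratio: int = 5) -> int:
    """Return the number of days a member remains at home (dwell)
    after an involuntary mobilization of `mobilized_days` days,
    under a `dwell_ratio`:1 home-to-mobilized standard."""
    return mobilized_days * dwell_ratio

# A 90-day involuntary mobilization yields 450 days of dwell,
# matching the example in the Air Force guidance.
print(dwell_days(90))  # 450
```

Under this standard, a member returning from a 90-day involuntary mobilization would not be available for another involuntary mobilization for 450 days, consistent with the example above.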
We have previously reported on challenges faced by DOD in setting policies to establish thresholds and track the total time individual servicemembers may be away from home, including for exercises, training, and deployment. We found that, with the exception of the Navy and U.S. Special Operations Command, the services either were not enforcing or had not established specific and measurable thresholds in their policies. Additionally, we found that DOD lacked reliable data for tracking the total time individual servicemembers spent away from home. We recommended that DOD clarify its policy to include specific and measurable department-wide thresholds and take steps to emphasize the collection of complete and reliable data. DOD concurred with our recommendation. Medical and retirement benefits: Involuntary mobilization can also maximize the window during which personnel receive medical and retirement benefits. All ARC personnel are eligible for benefits up to 180 days prior to their involuntary mobilization or voluntary deployment. However, to receive these benefits the individual must also have been issued mobilization orders identifying the mobilization date or, for a volunteer, the deployment date. As previously discussed, Air Force guidance identifies notification time frames designed to provide ARC personnel involuntarily mobilized to support rotational or preplanned requirements with their orders at least 180 days prior to the mobilization start date. This time frame allows personnel to receive these benefits for the entire time they are potentially eligible. By contrast, personnel who are involuntarily mobilized for emergent requirements are supposed to receive their orders with at least 120 days’ notice, and, according to AFSOC officials, volunteers can receive as little as one week’s notice. 
As a result, personnel may prefer involuntary mobilization, as it generally results in their receiving military medical and retirement benefits for more time than they would have received them if they had volunteered to deploy. AFSOC Has Mobilization Processes but Faces Difficulties in Providing the ARC with Timely and Reliable Information about Requirements AFSOC has mobilization processes that follow Air Force guidance, but it faces difficulties in implementing these processes. Specifically, we found AFSOC faces challenges in (1) consistently providing ARC units and personnel with timely notifications regarding anticipated demand for their capabilities; (2) coordinating with ANG and AFR commands on potential requirements for ARC capabilities; and (3) sharing reliable information about mission requirements and resources with ARC units and personnel. AFSOC Has Not Always Provided Timely Notification to ARC Units and Personnel The notifications that AFSOC gives ARC units or personnel of anticipated demand for their capabilities generally do not meet the notification time frames associated with involuntary mobilizations for non-emergent requirements, thereby impeding ARC units’ ability to prepare for deployments. Officials at three of the four reserve component units we spoke with told us that AFSOC routinely provides units with limited notice of requirements for capabilities, even though they predominantly support preplanned requirements that are known to AFSOC well in advance of their execution. Therefore, the officials stated, AFSOC should have sufficient time to identify and communicate the requirement for ARC capabilities to reserve component units to enable them to meet required time frames (for example, no less than 180 days in the case of non-emergent requirements). However, according to these officials, they routinely receive 90 or fewer days’ notice of when they are expected to provide capabilities for a given requirement.
Due to this truncated time frame, the requirement must either be staffed using volunteers or receive approval from the Secretary of Defense to involuntarily mobilize reserve component personnel with limited notice. Receiving limited notification can create challenges for the ARC unit providing the capabilities for AFSOC requirements. For example, officials at one unit we spoke with stated that they requested that AFSOC provide at least 9 months’ notice prior to a mobilization to ensure that personnel received adequate training, because the unit provides a range of specialized capabilities. However, officials stated that what they generally received was 60 to 90 days’ notice, and that within this time frame the unit faced challenges in obtaining access to the equipment needed to train personnel for specific missions. Officials at another unit we spoke with stated that since 2015 they had received 60 or fewer days’ notice for their support of AFSOC requirements, one of which was an involuntary mobilization supporting a non-emergent requirement. An official explained that while AFSOC’s communication of requirements and planning of involuntary mobilizations have improved over time, the unit expects that orders for its next mobilization will be provided with fewer than 180 days’ notice. The official explained that in addition to limiting ARC personnel’s access to medical and retirement benefits, the abbreviated time frames make it difficult for them to coordinate their absences with their civilian employers. AFSOC officials acknowledged that they have been late to notify units in the past and identified this as an area in which they are working to improve. The officials explained that in some instances the late notification is a result of factors outside of AFSOC’s control, such as instances in which the Secretary of Defense’s process for approving requirements is delayed.
AFSOC Has Not Always Coordinated Directly with ANG and AFR Commands We identified concerns regarding AFSOC’s practice of communicating directly with reserve component units, rather than formally coordinating with ANG and AFR commands, to develop potential requests for ARC unit capabilities. For example, AFR officials stated that geographic proximity to AFSOC frequently results in one unit’s receiving informal requests from AFSOC for its capabilities. That unit provides remotely piloted aircraft capabilities, which do not require personnel to deploy overseas. Officials explained that AFSOC will contact that unit directly to request capabilities to supplement the active duty personnel completing the same mission, but commonly AFSOC will provide only a few days’ notice prior to the requirement. According to these officials, personnel generally respond to these requests by volunteering with limited advance notice. AFSOC officials stated that communicating informally with the units to determine the availability of their personnel and capabilities enables AFSOC to expedite the identification of personnel potentially available to meet a requirement. However, headquarters officials for both ANG and AFR—who are responsible for identifying the specific personnel available to meet a requirement—stated that these indirect communications impede their ability to strategically manage and appropriately resource units. For example, headquarters AFR officials identified an instance in which changes to a unit’s anticipated contribution to a mission were arranged with the unit, but not with officials at their higher headquarters at the AFR. The requirement was originally for the AFR unit to supplement an active duty unit already providing the capability for AFSOC, but was expanded to require the AFR unit to have sole responsibility for providing part of the capability. 
The absence of direct communication and formal coordination between AFR headquarters and AFSOC during this expansion led to differing expectations regarding the number of AFR personnel needed to provide the capability required. AFR officials stated that as a result of limited transparency into future requirements for that unit, AFR headquarters did not request the appropriate level of funding for the unit, thereby limiting the resources available to support the requirement. AFSOC officials acknowledged that their use of informal communication with units instead of coordinating with ANG and AFR headquarters is not an ideal approach and could be improved. AFSOC Does Not Always Share Reliable Information about Mission Requirements and Resources We identified concerns regarding the frequency with which AFSOC has changed the information it has communicated to ARC units about anticipated requirements, thereby creating unpredictability and impeding those units’ ability to train for and ultimately provide the capabilities needed to execute those requirements. While requirements may change subject to combatant command needs, AFSOC’s ability to proactively coordinate with both the combatant command and the ARC has been limited. AFSOC officials stated that, due to their limited capacity to manage involuntary mobilizations, they are often dedicating time only to those mobilizations that require urgent attention, as opposed to refining the details of the requirement and coordinating with the units in advance of the mobilization. ARC officials stated that the unpredictability resulting from the changes that occur can introduce challenges to the units’ ability to execute requirements. For example, officials at one unit stated that the location of a previous requirement changed at least three times in the 60 days preceding its involuntary mobilization.
Officials explained that changes to the location of the requirement meant that the capabilities required by AFSOC also changed, because the unit provides intelligence, surveillance, and reconnaissance capabilities that need to be supported by specific communications equipment. Depending on the location, this equipment may already be in place, or the unit may have to bring it along. In a different instance, the same unit arrived at a location to provide its intelligence, surveillance, and reconnaissance capabilities and found that the location lacked the communications equipment the unit needed to effectively use its capabilities. Further, ARC officials explained that changes regarding what capabilities are needed can create training challenges unique to the reserve component. ARC unit officials explained that while reserve component personnel maintain a standard level of readiness at all times, deployments may require them to train to a specific skill set to meet a mission requirement. For example, special tactics squadrons supporting AFSOC requirements can support three different mission sets, each of which may require specialized training to prepare for a specific mission, according to unit officials. Given the nature of the reserve component, these personnel have to complete this training during the limited windows of time in which they are called in from their full-time civilian jobs. As a result, the ARC has limited flexibility in responding to changes in training requirements. AFSOC officials acknowledged that the ARC can face challenges in meeting training requirements and that the advanced planning associated with involuntary mobilizations can help ensure that units have enough time to complete needed training.
Other Air Force Entities Use Alternative Approaches to Planning, Coordinating, and Executing Involuntary Mobilizations, but AFSOC Lacks the Organizational Capacity Other Air Force entities that provide ARC capabilities to meet Air Force requirements through mobilization have established alternative approaches to initiating, planning, and coordinating their respective requirements for reserve component capabilities. Specifically, officials from Air Combat Command and Air Mobility Command, which, like AFSOC, mobilize ARC units, described entities established within their operations departments to coordinate with the ARC when implementing the involuntary mobilization process. These entities each consist of four to five individuals who are tasked on a full-time basis with ensuring that the reserve components are utilized in a predictable manner. The efforts of these entities include coordinating with the ARC to create plans that cover at least 2 years of anticipated rotational and preplanned requirements. While Air Combat Command and Air Mobility Command officials stated that they are responsible for coordinating a larger number of mobilizations than AFSOC coordinates, they noted that all three follow the same Air Force guidance with regard to the involuntary mobilization process. Officials from an ARC personnel recovery unit that supports Air Combat Command missions highlighted the benefits of the predictability that comes from Air Combat Command’s planning efforts. According to those officials, anticipated mobilizations are communicated to them in a schedule that covers a span of 5 years. More than a year before the unit is scheduled to involuntarily mobilize, Air Combat Command communicates the specifics of the requirement for the mission. The officials stated that, in their experience, these details rarely change once they have been communicated to the unit.
By contrast, as previously discussed, we spoke with officials from an ARC special tactics squadron that provides AFSOC with capabilities similar to those of the personnel recovery unit described above, who stated that they regularly receive only 60 to 90 days’ notice prior to being deployed. They stated that they face difficulties in adequately training personnel to provide capabilities within these time frames. AFSOC officials stated that this issue is driven in part by the fact that units coordinate directly with requesting commands to fill their desired requirements. According to AFSOC officials, AFSOC does not have a headquarters entity dedicated to managing the planning, coordination, and execution of reserve component capabilities because, until recently, AFSOC did not use its reserve components to support ongoing missions to the extent that they do today. As a result, it was not considered necessary to have an organizational entity dedicated to managing involuntary mobilizations. Instead, AFSOC assigned the roles and responsibilities associated with initiating, planning, and coordinating ARC mobilizations within its overall process for managing AFSOC’s assignment and allocation of forces. AFSOC and ARC officials stated that under this process, a single individual at AFSOC is responsible for managing the involuntary mobilizations as a secondary duty. AFSOC officials stated that, given the scope of other assigned responsibilities, this individual focuses on managing involuntary mobilizations about half of one day in a work week. According to the officials, having a limited staff dedicated to initiating, planning, and coordinating involuntary mobilizations results in AFSOC’s responding to issues as they become urgent and impedes its ability to utilize the ARC in a predictable and stable manner.
AFSOC officials also stated that the shift to using the ARC to support AFSOC’s steady-state requirements, along with the increasing use of involuntary mobilizations to access ARC capabilities, has exposed the limitations of their capacity to manage involuntary mobilizations. These officials added that creating a more robust organizational capacity to manage the involuntary mobilization of the reserve component could counteract some of the challenges they have experienced in providing timely notification to ARC units, directly coordinating with ANG and AFR commands, and identifying and communicating reliable information about requirements to ARC units. AFSOC officials attribute the challenges faced in implementing AFSOC’s involuntary mobilization processes to the absence of adequate capacity to manage involuntary mobilizations. Specifically, they acknowledged that with additional capacity they would be better positioned to undertake the efforts needed to (1) provide more timely notification to ARC units, (2) coordinate with ANG and AFR commands, and (3) increase communication with the commands generating requirements. While officials acknowledged that some last-minute changes are unavoidable, they told us that having more personnel dedicated to AFSOC’s mobilization process could potentially lead to having more timely notifications or better indications of imminent changes. Further, some factors that can affect the involuntary mobilization process fall outside of AFSOC’s control, such as delays in the decision making process at the Secretary of Defense level and changes in combatant commander requirements. Although AFSOC cannot control all factors that affect involuntary mobilization of the ARC, increasing its capacity to manage involuntary mobilizations would improve its ability to anticipate and proactively address the challenges introduced by external factors.
AFSOC officials stated that in recognition of this need, the command’s operations center has submitted multiple requests for additional resources to the headquarters in order to create a more robust organizational capacity to manage the involuntary mobilization of the reserve component. For example, the request submitted in January 2019 stated that AFSOC currently does not provide the support and guidance that ARC units need to properly execute the involuntary mobilization process. The request sought one additional full-time position dedicated to managing involuntary mobilizations and coordinating them with the ARC. Although AFSOC officials told us that the request was validated by AFSOC leadership, the validation of a request does not ensure that it will receive funding. After competing against other funding requests from other Air Force components, the Financial Management Board did not fund the position in fiscal years 2018 or 2019 because those other requests received higher priority. As an alternative to the full-time position requested by the operations center, AFSOC officials identified ongoing efforts to coordinate with the ANG that would result in the ANG’s allocating personnel to fill a temporary position at AFSOC. The individual in this position would be responsible for supporting the mobilization process. AFSOC officials stated that such an arrangement would help address the capacity challenges they currently face, but also noted that it would be a short-term solution, and highlighted that the individual filling the position would need to be familiar with AFSOC, ANG, and AFR processes to execute his or her duties. In addition, AFSOC officials could consider realigning existing capacity within the command to directly address the limited capacity to manage involuntary mobilizations. However, AFSOC officials emphasized that the command as a whole currently operates with limited capacity.
Unless the Air Force develops additional organizational capacity at AFSOC dedicated to the planning, coordination, and execution of involuntary mobilizations, AFSOC will continue to be impeded in its ability to manage involuntary mobilizations in accordance with Air Force guidance, including providing the notice required to access the ARC through involuntary mobilization in support of preplanned or rotational requirements. Additionally, at its current capacity AFSOC will likely face increasing challenges in providing timely notification to ARC units, coordinating with ANG and AFR commands, and enhancing communication with the commands generating requirements to help solidify mission specifics, as the number of involuntary mobilizations quadruples by 2021, as estimated by AFSOC officials. As a result, units may not be fully prepared to support requirements or able to effectively conduct their mission once in theater. Further, AFSOC will continue to be impeded in coordinating with ANG and AFR commands in a manner that enables the ARC to strategically manage and resource units in support of AFSOC’s requirements. The ARC Does Not Provide Complete Information to AFSOC on Units Available for Mobilization or on Voluntary Deployments The ARC Does Not Have Consolidated Information on Reserve Component Units Available to Support Special Operations Activities While the Air Force’s force-generation model provides the ARC with a 24-month picture of the units it anticipates will be used to meet potential Air Force-wide deployments, the ARC does not have a comparable model with information on which ARC units could be used to support AFSOC requirements for special operations activities. According to officials, the ARC does not have a force-generation model for two reasons.
First, while the Air Force model works for Air Force-wide requirements, it does not apply to special operations-specific requirements because they are unique to the Air Force’s special operations component, AFSOC. According to AFSOC officials, their command deploys units and personnel differently from typical Air Force units in order to maximize the number of requirements they can support with a smaller force. Second, the ARC has historically supported special operations activities using volunteerism, which is much more flexible than involuntary mobilization and requires less upfront planning or notification. As a result, ARC officials did not feel the need to develop a force-generation model for special operations requirements. ANG and AFR officials told us that ARC units will sometimes keep a unit-level schedule of their potential deployments, but that information is not available in a consolidated or consistent format. AFSOC officials added that any force-generation model for special operations should consider the limited capacity of some special operations capabilities. Officials stated that some capabilities in the ARC are limited to one unit, which results in AFSOC deploying parts of units rather than the whole unit to cover more requirements. ANG and AFR officials agreed that a force-generation model regarding future deployments could help identify which ARC units would be subject to deployment during a given period of time, which would be beneficial for planning ARC deployments. Consolidated information on potential unit deployments would provide units with advanced notification, making it easier to accomplish deployment preparation activities and helping ARC personnel make arrangements for their potential deployments. For example, ANG officials told us that advanced notification to units can give the ARC more time to incorporate needed training into drill training.
Furthermore, unit personnel would also have more time to make arrangements with civilian employers or in their personal lives, making their transition to active duty easier and making it more likely that they will view mobilizations favorably in the future. Additionally, these officials stated that, with such a model, ANG and AFR could more easily identify and communicate which ARC units would be available for mobilization to support special operations activities. AFSOC officials stated that, in turn, this could provide AFSOC with more certainty that it would have access to ARC forces when needed. According to ANG and AFR officials, AFSOC officials have expressed some concerns about whether their command will have access to ARC forces. Specifically, since a substantial part of the total Air Force capability resides in the ARC, AFSOC officials are not certain that the capacity of ARC units supporting special operations will be able to meet future requirements. The officials added that by identifying units or individuals susceptible for deployment in advance, AFSOC would have more confidence in the ARC’s ability to support the command’s requirements. According to Air Force guidance, a predictable force-generation model is used to ensure proper force readiness and rapid responses to emerging crises. Specifically, Air Force Instruction 10-401, Air Force Operations Planning and Execution, calls for the Air Force and its components, including the ANG and AFR, to manage the deployment of its forces in order to meet global requirements while maintaining the highest possible level of overall readiness. The instruction calls for the Air Force to accomplish this task by establishing a force-generation model that can be used to manage the rhythm of force deployments to meet global combatant command requirements. 
The intent of the force-generation model is to establish a predictable, standardized pattern to ensure that forces are properly organized, trained, equipped, and ready to sustain capabilities while rapidly responding to emerging crises. ANG officials told us that they have taken some initial steps to create a force-generation model and consolidate the various unit-level schedules of ARC forces supporting special operations activities. Specifically, the ANG advisor to AFSOC was developing a consolidated schedule of ARC units intended for use by AFSOC to identify ANG units that could mobilize to support AFSOC requirements. However, according to AFSOC officials, the ANG advisor was expected to retire soon; we found that ANG headquarters officials were not aware of this effort, and there were no plans to institutionalize it. AFR officials were likewise not aware of any similar effort to consolidate schedules for their units’ different capabilities to support special operations activities. Without having a method for providing consolidated information on reserve component units that are available for deployment, the ARC will not have the information it needs to successfully plan its deployments, or easily identify and communicate to AFSOC which of its units are or will be available for mobilization. Furthermore, AFSOC officials may continue to have concerns that they will not have access to high-demand ARC capabilities to deploy under a mobilization. The ARC Does Not Have Complete Information on Voluntary Deployments According to officials, although ANG and AFR units have a general understanding as to how many volunteers they have supporting special operations requirements at the unit level, the ANG and AFR lack a mechanism for tracking volunteer deployment rates across the ARC. 
Specifically, information on reserve components’ volunteer deployments is not available in a form that facilitates tracking in order to understand rates of volunteering or the contributions made by the ARC in supporting special operations activities, according to officials. The Air Force requires the ANG and AFR to track key data to ensure proper management of ARC utilization and mission execution. Specifically, Air Force Instruction 10-301, Managing Operational Utilization Requirements of the Air Reserve Component Forces, calls for the Air Force to identify full mission requirements for ARC utilization by collecting, tracking, and organizing relevant data and prioritizing requirements. It also states that these data are intended to aid in allocating funding, matching units to requirements, executing requirements, assessing each step of the process, and forecasting future requirements. Additionally, Standards for Internal Control in the Federal Government establishes that management should obtain relevant data from reliable internal and external sources in a timely manner to facilitate effective monitoring. ANG and AFR officials told us that voluntary deployments are more difficult to track than are involuntary mobilizations. Specifically, the statutory requirements for involuntarily mobilizing ARC units or personnel make tracking them simpler. For example, according to officials the Secretary of Defense is required to approve or be notified of involuntary mobilizations, and ANG and ARC units receive specific orders, all of which are tracked closely. Voluntary deployments, however, do not have the same approval requirements. Nevertheless, ANG and AFR officials told us that ARC units may have some information, as detailed below, on the numbers of volunteer deployments, although this information provides only a partial picture of volunteerism. 
Travel System Data: Officials from a reserve component unit we visited reported that some of the information on voluntary deployments could be compiled from travel systems used to send ARC units and personnel overseas. However, these officials added that matching travel records to the volunteer status of individuals could be time-consuming, because the travel systems are not designed to perform this function. Furthermore, unit officials told us that this travel information would be incomplete even if it were compiled, because it would not include units and individuals supporting operational requirements from their home stations—that is, not traveling outside their normal locations. For example, according to unit officials, personnel supporting remotely piloted aircraft would not be included in the information collected from the travel systems because they do not travel outside their normal duty stations to carry out their missions. Without travel orders, the system would not show these types of deployments. Unit officials told us that there could be several cases like this one in which the information compiled from the travel system or other sources could be incomplete. Man-Day Estimates: An AFSOC official told us that the system used to track military personnel appropriation man-days could be another source used to track volunteerism among ANG and AFR units. According to this official, AFSOC uses a data system to transfer man-days to the volunteering ARC unit. This official stated that the system used to make these transfers may contain the information needed to track volunteerism, but acknowledged that no one at AFSOC was using the system for this purpose. Furthermore, ANG and AFR officials confirmed that the data system is not currently used for tracking rates of volunteerism among ARC units. 
According to AFR officials, tracking volunteerism would allow them to more easily document the ARC’s contributions to support special operations and evaluate whether ARC forces were being effectively utilized. Specifically, ANG and AFR officials expressed concerns that different rates of individual volunteerism within and across ARC units may result in a misleading picture of overall unit utilization. In some cases, incomplete data on volunteerism can result in overstating unit contributions. For example, the unit-level figures regarding deployments are actually averages of all the individuals in the unit. Officials expressed concerns that as a result of using averages, units may appear to be more highly utilized than they actually are, due to the high rates at which some individuals from the unit volunteer to deploy. According to ANG and AFR officials, some ARC personnel volunteer at high rates because they prefer the additional income or benefits from these deployments, while other personnel from the same units may prefer to deploy less often. ARC officials expressed concerns that this disparity may not be immediately visible to ARC and AFSOC leadership. AFSOC officials told us that they share some of these concerns. Other officials expressed concerns that without good information on volunteerism rates, the ANG and AFR could not effectively manage operational tempo goals. To measure operational tempo, DOD has established policies relating to how long military personnel are deployed versus at home (referred to as dwell time, or dwell). For example, ARC personnel who deploy for 7 months and are in dwell for 14 months would have a deployment-to-dwell ratio of 1:2. For ARC units, DOD also tracks the mobilization-to-dwell ratio, which is the ratio of how long ARC personnel are involuntarily mobilized versus not mobilized. DOD guidance establishes that the mobilization-to-dwell ratio for ARC units should be 1:5. 
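The dwell arithmetic described above is simple to sketch. The following illustrative Python snippet (not an official DOD tool) reduces deployed and dwell months to a ratio, as in the report's example of 7 months deployed and 14 months in dwell yielding 1:2, and checks it against the 1:2 deployment-to-dwell and 1:5 mobilization-to-dwell goals cited here:

```python
from math import gcd

def dwell_ratio(active_months: int, dwell_months: int) -> str:
    """Reduce deployed:dwell months to a 1:N-style ratio string."""
    g = gcd(active_months, dwell_months)
    return f"{active_months // g}:{dwell_months // g}"

def meets_goal(active_months: int, dwell_months: int, goal_dwell: int) -> bool:
    """True if the member gets at least `goal_dwell` months of dwell per
    month deployed (goal_dwell=2 for the 1:2 deployment-to-dwell goal,
    goal_dwell=5 for the 1:5 mobilization-to-dwell goal)."""
    return dwell_months >= goal_dwell * active_months

# The report's example: 7 months deployed, 14 months in dwell.
print(dwell_ratio(7, 14))    # 1:2
print(meets_goal(7, 14, 2))  # True: meets the 1:2 goal
print(meets_goal(7, 14, 5))  # False: short of the 1:5 goal
```

As the report notes, voluntary deployments are excluded from the dwell side of these calculations, so a ratio computed only from involuntary mobilizations can overstate actual rest time.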
According to ANG and AFR officials, ARC voluntary deployments are not factored into the dwell calculations for either ratio, making it more difficult to ensure that deployments do not fatigue ARC forces. Additionally, Special Operations Command policy specifies that ARC units supporting special operations should maintain the same deployment cycle as active duty units, which as a goal should be no less than a 1:2 deployment-to-dwell ratio. ANG, AFR, and AFSOC officials agreed that tracking volunteer deployment rates more comprehensively and consistently would provide greater perspective on how ARC units are utilized and help them more effectively manage their operational tempo goals. Officials stated that an additional consequence of having incomplete data on volunteerism is that the overall contributions of the ARC can be understated, because the full range of support that ARC units and personnel are providing is not being documented. For example, a report used by the Air Force to track force contributions from its components, including the ARC, shows that the AFR contributed forces to support special operations activities for about 6 months of an approximately 4-year period. However, according to AFR officials, the command’s contribution to support special operations activities was much higher than what is documented in the report. The officials stated that AFR also provided volunteer support to AFSOC over the entire period but that its contributions are not fully reflected in the report, because volunteers supporting an AFSOC-assigned mission are counted among the contributions made by other active duty forces, rather than by AFR. Without complete information on volunteer deployment rates among reserve component forces, the ANG and AFR may face difficulties in ensuring the effective utilization of their forces to support special operations activities, documenting force contributions from the ARC, and managing operational tempo and deployment-to-dwell goals. 
Further, the ARC will not have the information it needs to ensure effective management of its force utilization and mission execution. Specifically, it will not be able to determine whether units are being fully utilized, because of the distorted or incomplete volunteerism information. Conclusions With a substantial part of the total Air Force capability residing in the ARC, AFSOC relies on mobilized ARC forces to support its operations. Furthermore, AFSOC’s increasing use of the ARC as an operational reserve has highlighted the importance of the ARC’s and AFSOC’s planning and information-sharing efforts. However, AFSOC’s implementation of its mobilization process impedes its ability to provide the ARC with timely notification of mobilizations, coordinate with ANG and AFR commands, and share reliable information about requirements with the ARC. Without resolving its organizational capacity challenge in managing requirements for reserve capabilities, AFSOC is unlikely to improve its implementation of this process. AFSOC’s use of the ARC is also affected by the unavailability of complete information regarding both the units available to mobilize and voluntary deployment rates. Specifically, while the ARC is able to identify the units anticipated to be available to support non-special operations requirements, it does not have a method for communicating consolidated information on the availability of units for special operations requirements. Without such a method, AFSOC and the ARC do not have easily accessible information about the current and future availability of ARC units to support special operations requirements. In addition, voluntary deployments are a key piece of the ARC’s support of AFSOC requirements. However, the ARC has not developed a mechanism for tracking the rate at which they occur. 
Without tracking volunteer deployment rates, the ARC is limited in its ability both to ensure that its forces are effectively utilized and to communicate the level of contribution made by ARC volunteers in support of special operations requirements. Recommendations for Executive Action We are making three recommendations to DOD: The Secretary of the Air Force, in coordination with ANG and AFR, should ensure that AFSOC has the organizational capacity to effectively initiate, coordinate, and execute ARC mobilizations, to include ensuring timely and reliable notification of requirements to those units. (Recommendation 1) The Secretary of the Air Force should ensure that the ANG and AFR develop a method for providing AFSOC with consolidated information regarding units available for immediate and future mobilizations to support special operations activities, such as the Air Force provides to its units with its force-generation model. (Recommendation 2) The Secretary of the Air Force should ensure that the ANG and AFR develop a mechanism for tracking volunteer deployments to better manage ARC force utilization. (Recommendation 3) Agency Comments In written comments on a draft of this report, DOD concurred with one recommendation and partially concurred with two recommendations. DOD’s comments are restated below and reprinted in appendix I. DOD also provided technical comments, which we incorporated where appropriate. DOD concurred with the first recommendation that the Secretary of the Air Force, in coordination with ANG and AFR, should ensure that AFSOC has the organizational capacity to effectively initiate, coordinate, and execute ARC mobilizations, to include ensuring timely and reliable notification of requirements to those units. In its response, DOD stated that the Air Force continues to balance manning requirements across the spectrum of operations. 
DOD also stated that fully manning AFSOC for this staff function would be helpful, whether additional manpower is programmed or AFSOC mitigates internally by reallocating manpower. We believe that fully manning AFSOC for this staff function, if fully implemented, would meet the intent of the recommendation. In its comments on this recommendation, DOD also stated that the ARC has a process in place to provide timely notification to ANG and AFR units once requirements are known. The department added that the ANG implemented the Agile ARC Mobilization Process on June 1, 2019, which streamlined policy and procedural chokepoints and improved notification timelines by an average of 60 days. We note that, while improvements in the notification timelines would be beneficial, it is too soon to understand the long-term effect of the implementation of this process. DOD partially concurred with the second recommendation that the Secretary of the Air Force should ensure that the ANG and AFR develop a method for providing AFSOC with consolidated information regarding units available for immediate and future mobilizations to support special operations activities, such as the Air Force provides to its units with its force-generation model. In its comments, DOD stated that the AFR currently provides AFSOC with information on units available, using Reserve Component Periods, and that the AFR will assess whether re-posturing in multiple Reserve Component Periods will provide a portion of capability with greater flexibility. We agree that this is a reasonable approach. However, as we noted in our report, consolidated information on reserve component units that are available for deployment could provide ARC units with advanced notification, making it easier to accomplish deployment preparation activities and help ARC personnel make arrangements for their potential deployments. 
Additionally, DOD stated that current information technology initiatives with the Air Force Integrated Personnel and Pay System will eventually provide the Air Force with functionality allowing a single, integrated system of software suites. According to the department, Air Force Integrated Personnel and Pay System will support a rapid and accurate information flow from the first identification of a requirement through the processing and delivering of orders, allowing the Air Force to start pay and benefits in an auditable manner. However, DOD did not identify a timeline for when that system would be available. We believe that improvements in the flow of information regarding ARC unit availability are necessary and would help to ensure that the ARC can successfully plan deployments, or easily identify and communicate to AFSOC which of its units are or will be available for mobilization. We believe that if this planned system is implemented as described, it would meet the intent of the recommendation. DOD partially concurred with the third recommendation that the Secretary of the Air Force should ensure that the ANG and AFR develop a mechanism for tracking volunteer deployments to better manage ARC force utilization. In its response, DOD stated that tracking volunteer deployments requires timely information from AFSOC to properly identify the requirements, establish expeditionary ARC units, and document the transaction when ARC members are activated. Further, it stated that in the short term, the AFR will work with AFSOC on further developing use of the Air Force Consolidated Planning Schedule to better define requirements. While coordination with AFSOC could help improve the tracking process, we believe that the ANG and AFR also need to develop a mechanism for tracking volunteer deployments to better manage ARC force utilization. 
Additionally, DOD noted that the planned information technology initiative, which it described in its response to our second recommendation, could also have benefits for tracking voluntary deployments. We believe that if the planned system is able to fully track voluntary deployments, it would meet the intent of the recommendation. We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Under Secretary for Personnel and Readiness; the Chief of the National Guard Bureau; and the Commanders of Special Operations Command, Air Force Special Operations Command, and Air Force Reserve Command. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-5431, or russellc@gao.gov. Contact points for our respective offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. Appendix I: Comments from the Department of Defense Appendix II: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, individuals who made key contributions to this report include Jim Reynolds, Assistant Director; Adam Anguiano, Tracy Barnes, Adrianne Cline, Shylene Mata, Walter Vance, and Cheryl Weissman.
Why GAO Did This Study Over the past decade the Air Force has increasingly relied on the ARC to meet operational requirements. The ARC is composed of two entities—the Air National Guard (ANG) and the Air Force Reserve (AFR)—which together comprise a substantial part of the total Air Force capability. AFSOC relies on either volunteerism or involuntary mobilization to activate ARC units. House Report 115-676, accompanying a bill for the National Defense Authorization Act for Fiscal Year 2019, contains a provision for GAO to assess ANG and AFR involuntary mobilization plans to support special operations. GAO evaluated the extent to which (1) AFSOC's mobilization process provides the ARC with timely and reliable forecasts of planned utilization of units and personnel; and (2) the ARC identifies and communicates information to AFSOC on the units and individuals available for mobilization or on voluntary deployments. What GAO Found The Air Force Special Operations Command's (AFSOC) mobilization process does not fully support Air Reserve Component (ARC) needs for timely and reliable information. While AFSOC has established mobilization processes in line with Air Force guidance, the command faces difficulties, as follows: consistently providing ARC units and personnel with timely notifications regarding anticipated demand for their capabilities; coordinating with ARC commands on potential requirements for ARC capabilities; and sharing reliable information about mission requirements and resources with ARC units and personnel. According to AFSOC officials, these difficulties stem from AFSOC's limited organizational capacity to conduct the planning, coordination, and execution of involuntary mobilizations (that is, ARC units or personnel ordered to active duty). 
Other Air Force entities that provide ARC capabilities to meet Air Force-wide requirements have established the capacity within their operations departments to coordinate with the ARC when implementing the involuntary mobilization process. AFSOC officials stated that because AFSOC did not, until recently, regularly use involuntary mobilizations to access the ARC, it was not considered necessary to have an organizational entity dedicated to managing involuntary mobilizations. AFSOC officials stated that the command's operations center has submitted requests to its headquarters for additional resources toward creating such organizational capacity, but the requests were not funded in fiscal years 2018 or 2019, as other requests received higher priority. According to officials, AFSOC is currently exploring possible short-term solutions. In the absence of the organizational capacity to conduct the planning, coordination, and execution of involuntary mobilizations, AFSOC will continue to be impeded in providing the notice required to access the ARC in support of requirements. The ARC does not provide AFSOC with complete information regarding which of its units could be used to support AFSOC requirements for special operations activities. The Air Force uses a model that captures and organizes Air Force-wide requirements, but the model does not include special operations requirements, and AFSOC is expected to develop its own processes for its unique requirements. According to AFSOC and ARC officials, the ARC has not developed a method for capturing and organizing special operations requirements because it has historically supported special operations activities using volunteerism, which is more flexible and requires less up-front planning. 
Consolidated information on potential unit deployments would provide units with advanced notification, facilitating deployment preparation activities and helping personnel make arrangements with civilian employers or in their personal lives. Without a method to provide consolidated information on reserve component units available for deployment, the ARC will not have the information it needs to successfully plan its deployments, or to easily identify which of its units will be available for mobilization. What GAO Recommends GAO is making three recommendations, including that the Air Force should ensure that AFSOC has the organizational capacity to effectively initiate, coordinate, and execute ARC mobilizations; and should develop a method for providing AFSOC with consolidated information regarding units available for mobilizations. DOD concurred with one of these recommendations and partially concurred with two, stating that some information is being shared and a planned initiative could improve the information flow. GAO believes this initiative, if implemented, could address the intent of its recommendations.
Background The NASA Authorization Act of 2010 directed NASA to develop SLS, to continue development of a crew vehicle, and to prepare infrastructure at Kennedy Space Center to enable processing and launch of the launch system. To fulfill this direction, NASA formally established the SLS launch vehicle program in 2011. Then, in 2012, NASA aligned the requirements for the Orion program with those of the newly created SLS and EGS programs. Figure 1 provides details about each SLS hardware element and its source as well as identifies the major portions of the Orion spacecraft. History of Program Cost and Schedule Changes In order to facilitate Congressional oversight and track program progress, NASA establishes an agency baseline commitment—the cost and schedule baselines against which the program may be measured—for all projects that have a total life cycle cost of $250 million or more. NASA refers to these projects as major projects or programs. When the NASA Administrator determines that development cost growth within a major project or program is likely to exceed the development cost estimate by 15 percent or more, or a program milestone is likely to be delayed from the baseline’s date by 6 months or more, NASA replans the project and submits a report to this committee—the Committee on Science, Space, and Technology of the House of Representatives—and the Committee on Commerce, Science, and Transportation of the Senate. Should a major project or program exceed its development cost baseline by more than 30 percent, the program must be reauthorized by the Congress and rebaselined by NASA in order for the contractor to continue work beyond a specified time frame. NASA tied the SLS and EGS program cost and schedule baselines to the uncrewed first mission—known now as Artemis-1—originally planned for November 2018. The Orion program’s cost and schedule baselines are tied to a crewed second mission—known as Artemis-2—planned for April 2023. 
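The reporting thresholds described above can be expressed as a simple decision rule. The sketch below is an illustrative restatement of that logic in Python, not NASA's actual process or any official tool:

```python
def oversight_action(baseline_cost: float, current_cost: float,
                     schedule_slip_months: float) -> str:
    """Classify a major NASA program against the oversight thresholds
    described in the report: development cost growth of 15 percent or
    more, or a milestone delay of 6 months or more, triggers a replan
    and report to Congress; growth of more than 30 percent requires
    congressional reauthorization and a rebaseline."""
    growth_pct = (current_cost - baseline_cost) / baseline_cost * 100
    if growth_pct > 30:
        return "reauthorize and rebaseline"
    if growth_pct >= 15 or schedule_slip_months >= 6:
        return "replan and report to Congress"
    return "no action required"

# Illustrative cases (costs in $ billions):
print(oversight_action(7.0, 8.0, 0))  # ~14.3% growth, no slip -> no action required
print(oversight_action(7.0, 8.1, 0))  # ~15.7% growth -> replan and report to Congress
print(oversight_action(7.0, 9.5, 0))  # ~35.7% growth -> reauthorize and rebaseline
```

Note that the 6-month schedule test alone can trigger a replan even with no cost growth, which is what happened when Artemis-1 slipped from November 2018.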
In April 2017, we found that given the combined effects of ongoing technical challenges and limited cost and schedule reserves, it was unlikely that these three programs would achieve the originally committed November 2018 launch readiness date. Cost reserves are for costs that are expected to be incurred—for instance, to address project risks—but are not yet allocated to a specific part of the project. Schedule reserves are extra time in project schedules that can be allocated to specific activities, elements, and major subsystems to mitigate delays or address unforeseen risks. We recommended that NASA confirm whether the November 2018 launch readiness date was achievable and, if warranted, propose a new, more realistic Artemis-1 date and report to Congress on the results of its schedule analysis. NASA agreed with both recommendations and stated that it was no longer in its best interest to pursue the November 2018 launch readiness date. Subsequently, NASA approved a new Artemis-1 schedule of December 2019, with 6 months of schedule reserve available to extend the date to June 2020, and revised the costs that it expects to incur (see table 1). Cost and Schedule Status of NASA’s Human Spaceflight Programs In June 2019, we found that within 1 year of announcing a delay for the first human spaceflight mission, senior NASA officials acknowledged that the revised Artemis-1 launch date of December 2019 was unachievable and the June 2020 launch date (which takes into account schedule reserves) was unlikely. These officials estimated that there were 6 to 12 months of schedule risk associated with this later date, which means the first launch may occur as late as June 2021 if all risks are realized. As we found in June 2019, this would be a 31-month delay from the schedule originally established in the programs’ baselines. 
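The 31-month figure cited above is straightforward date arithmetic. A quick sketch (month-level dates as given in the report; the day values are placeholders) confirms it:

```python
from datetime import date

def months_between(start: date, end: date) -> int:
    """Whole months between two dates, counting year/month fields only."""
    return (end.year - start.year) * 12 + (end.month - start.month)

original = date(2018, 11, 1)   # baseline Artemis-1 launch readiness date
replanned = date(2019, 12, 1)  # replanned date, with reserve extending to June 2020
latest = date(2021, 6, 1)      # June 2020 plus the 6 to 12 months of schedule risk

print(months_between(original, replanned))  # 13-month slip in the replanned schedule
print(months_between(original, latest))     # 31 months if all risks are realized
```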
Officials attributed the additional schedule delay to continued production challenges with the SLS core stage and the Orion crew and service modules. NASA officials also stated that the 6 to 12 months of risk to the launch date accounts for the possibilities that SLS and Orion testing and final cross-program integration and testing at Kennedy Space Center may result in further delays. As we noted in our report, these 6 to 12 months of schedule risk do not include the effects, if any, of the federal government shutdown that occurred in December 2018 and January 2019. In commenting on our June 2019 report, NASA stated that its Lunar 2024 planning activities would include an Artemis-1 schedule assessment. However, in July 2019, NASA reassigned its senior leaders responsible for human spaceflight programs. The NASA Administrator stated in August 2019 that, as a result, the agency does not plan to finalize schedule plans for Artemis-1 until new leadership is in place at the agency. Additional details follow on the status of each program, including cost, schedule, and technical challenges. SLS. As we found in June 2019, ongoing development issues with the SLS core stage—which includes four main engines and the software necessary to command and control the vehicle—contributed to the SLS program not being able to meet the June 2020 launch date. Officials from the SLS program and Boeing, the contractor responsible for building the core stage, provided several reasons for the delays. These reasons include underestimating the complexity of manufacturing and assembling the core stage engine section—where the RS-25 engines are mated to the core stage—activities that have taken far longer than expected. Since our June 2019 report, based on our review of the program’s most recent status reports, NASA has reported progress across many parts of the SLS program. For example, NASA has delivered the four RS-25 engines to Michoud Assembly Facility. 
NASA has also completed qualification testing of all components of the boosters and reports that there is schedule margin remaining for the booster deliverables. In addition, NASA reports that Boeing has made continued progress and expects that the core stage will be complete and ready for testing in December 2019. Completion of the core stage will represent a significant milestone for the program. In June 2019, we found that the SLS program has been underreporting its development cost growth since the December 2017 replan. This underreporting is because of a decision to shift some costs to future missions while not adjusting the baseline costs downward to reflect this shift. The SLS development cost baseline established in August 2014 for Artemis-1 includes cost estimates for the main vehicle elements—stages, liquid engines, boosters—and other areas. According to program officials, because of the December 2017 replan process, NASA decided that costs included as part of the SLS Artemis-1 baseline cost estimate would be more appropriately accounted for as costs for future flights. Thus, NASA decided not to include those costs, approximately $782 million, as part of the revised SLS Artemis-1 cost estimate. However, NASA did not lower the $7 billion SLS development cost baseline to account for this significant change in assumptions and shifting of costs to future flights. This decision presents challenges in accurately reporting SLS cost growth over time. NASA’s decision not to adjust the cost baseline downward to reflect the reduced mission scope obscures cost growth for Artemis-1. In June 2019, we found that NASA’s cost estimate as of fourth quarter fiscal year 2018 for the SLS program indicated development cost growth had increased by $1 billion, or 14.7 percent. However, our analysis showed that development cost growth actually increased by $1.8 billion or 29.0 percent, when the development baseline is lowered to account for the reduced mission scope. 
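GAO's adjusted-baseline arithmetic can be reproduced from the rounded figures in the report. The sketch below uses the report's approximate values (a $7 billion baseline, 14.7 percent reported growth, and $782 million shifted to future flights), so the results match the report's $1.8 billion and 29.0 percent only within rounding:

```python
baseline = 7.0       # SLS development cost baseline, $ billions (report: about $7 billion)
shifted = 0.782      # costs NASA moved to future missions without lowering the baseline
reported_pct = 14.7  # development cost growth NASA reported against the unadjusted baseline

# Dollar growth implied by NASA's reported percentage.
growth = baseline * reported_pct / 100  # ~$1.0 billion

# GAO's view: the shifted costs are still Artemis-1 growth, and the
# baseline should be lowered to reflect the reduced mission scope.
adjusted_growth = growth + shifted      # ~$1.8 billion
adjusted_baseline = baseline - shifted  # ~$6.2 billion
adjusted_pct = adjusted_growth / adjusted_baseline * 100

print(f"reported growth: ${growth:.1f}B ({reported_pct}%)")
print(f"adjusted growth: ${adjusted_growth:.1f}B ({adjusted_pct:.1f}%)")  # ~29%
```

The comparison makes the mechanics of the finding concrete: holding the baseline steady while shrinking the scope of the estimate halves the apparent growth percentage, which is why the adjusted figure crosses the 15 percent replan threshold and approaches the 30 percent reauthorization threshold.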
Essentially, NASA is holding the baseline costs steady while reducing the scope of work included in current cost estimates (see figure 2). As NASA determines its new schedule for the first mission, this cost growth is likely to increase, as additional time in the schedule leads to additional costs. In our June 2019 report, we recommended that the SLS program calculate its development cost growth using a baseline that is appropriately adjusted for scope and costs NASA has determined are not associated with the first flight, and determine whether development cost growth has increased by 30 percent or more. NASA agreed with the recommendation, and NASA officials stated that they plan to implement it when new leadership is in place for the human space exploration programs. Looking ahead, based on our review of the program's most recent status reports, completing core stage manufacturing and integration and green run testing will be the critical path—the path of longest duration through the sequence of activities in the schedule—for the SLS program. During green run testing, NASA will fuel the completed core stage with liquid hydrogen and liquid oxygen and fire the integrated four main engines for about 500 seconds. The green run test carries risk because several things are being done for the first time beyond this initial fueling. For example, it is the first time NASA will fire the four main engines together, test the integrated engine and core stage auxiliary power units in flight-like conditions, and use the SLS software in an integrated flight vehicle. In addition, NASA will conduct the test on the Artemis-1 flight vehicle hardware, which means the program would have to repair any damage from the test before flight. Orion. While the Orion program's schedule performance is measured only to the Artemis-2 mission, we found in June 2019 that the program was not on schedule to support the June 2020 launch date for the first mission.
This was due to delays with the European Service Module and component issues with the avionics systems for the crew module, including issues discovered during testing. These specific problems were resolved by the time of our report but had already contributed to the program's inability to meet the June 2020 launch date. Since we last reported, as of August 2019, the Orion program has completed significant events, including finishing the crew module and the service module prior to integration and conducting a test to demonstrate the ability to abort a mission should a life-threatening failure occur during launch. The program is tracking no earlier than October 2020 for an Artemis-1 launch date, but that date does not reflect the ongoing agency-wide schedule assessment noted above. In June 2019, we found that the Orion program has reported development cost growth but is not measuring that growth using a complete cost estimate. In summer 2018, the Orion program reported development cost growth of $379 million, or 5.6 percent above its $6.768 billion development cost estimate. Program officials explained that the major drivers of this cost growth were the slip of the Artemis-1 launch date, which reflected delays in the delivery of the service module; Orion contractor underperformance; and a NASA-directed scope increase. However, during our review, Orion program officials originally stated that this cost estimate assumes an Artemis-2 launch date of September 2022, which is 7 months earlier than the program's agency baseline commitment date of April 2023 that forms the basis for commitments among NASA, the Congress, and the Office of Management and Budget. Subsequently, during the review, program officials told us that the program's cost projections fund one of those 7 months. In either case, NASA's current cost estimate for the Orion program is not complete because it does not account for costs that NASA would incur through April 2023.
As of September 2019, the program was targeting October 2022 for the Artemis-2 launch. In June 2019, we recommended that the Orion program update its cost estimate to reflect its committed Artemis-2 baseline date of April 2023. In its response, NASA partially agreed with our recommendation. NASA stated that providing the estimate to the forecasted launch date—September 2022—rather than to the committed baseline date of April 2023 is the most appropriate approach. However, by developing cost estimates only to the program's goals and not relative to the established baseline, the Orion program is not providing NASA or the Congress the means of measuring progress against the baseline. We continue to believe that NASA should fully implement this recommendation. Looking ahead, based on our review of the program's most recent status reports, an emerging issue may further delay the schedule for the first mission: the risk of damage to the Orion capsule during travel to and from integrated testing at Plum Brook Station in Ohio. The program office is studying whether it will be able to safely transport the integrated crew and service modules via the Super Guppy airplane as planned or whether it will have to use an alternate airplane. We will continue to monitor this effort. Beyond Artemis-1, the Orion program must also complete development efforts for future missions. For example, the Artemis-2 crew module will need environmental control and life support systems, system updates from Artemis-1, and updated software to run these new elements. EGS. At the time of our June 2019 report, the EGS program was expecting to have facilities and software ready by the planned June 2020 launch date. We found that the program had overcome many challenging development hurdles that led to previous schedule delays.
These hurdles included completing the Mobile Launcher—a platform that carries the rocket to the launch pad and includes a number of connection lines that provide SLS and Orion with power, communications, coolant, fuel, and stabilization prior to launch—and moving it into the Vehicle Assembly Building for multi-element verification and validation. Since our June 2019 report, the program is now targeting an Artemis-1 launch date of August 2020. According to NASA officials, the delay is primarily driven by challenges encountered installing ground support equipment on the Mobile Launcher and developing software, and does not reflect the ongoing agency-wide schedule assessment. The program has operated within the costs established for the June 2020 launch date, $3.2 billion, but officials stated that NASA is reevaluating the program's development cost performance and will establish an updated baseline when new leadership is in place. Moving forward, based on our review of the program's most recent status reports, the program must complete the multi-element verification and validation process for the Mobile Launcher and Vehicle Assembly Building and complete its two software development efforts. Additionally, the EGS program is responsible for the final integration of the three programs. NASA officials stated that the 6 to 12 months of risk to the June 2020 launch date includes risk associated with EGS completing this integration, which includes test and checkout procedures after SLS and Orion components arrive. Officials explained that the EGS risk is based on a schedule risk analysis that considered factors such as historical pre-launch integrated test and checkout delays and the learning curve associated with a new vehicle. As previously stated, our prior work has shown that the integration and test phase often reveals unforeseen challenges leading to cost growth and schedule delays.
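A schedule risk analysis of the kind officials describe is commonly implemented as a Monte Carlo simulation over uncertain activity durations. The sketch below shows the general technique only; the activity names and (minimum, most likely, maximum) duration ranges are hypothetical placeholders, not NASA's actual schedule data.

```python
# Simplified Monte Carlo schedule risk analysis: sample uncertain durations
# for sequential integration activities and examine the spread of totals.
# The activities and (min, most likely, max) durations in weeks below are
# hypothetical illustrations, not NASA schedule data.
import random

activities = {
    "stacking and integration": (8, 10, 16),
    "integrated test and checkout": (12, 16, 30),  # wide range: new-vehicle learning curve
    "launch countdown preparation": (3, 4, 8),
}

def simulate_total(rng):
    # Triangular distributions are a common simple model of duration uncertainty.
    return sum(rng.triangular(lo, hi, mode) for lo, mode, hi in activities.values())

rng = random.Random(0)
totals = sorted(simulate_total(rng) for _ in range(10_000))
p50 = totals[len(totals) // 2]           # median outcome
p80 = totals[int(len(totals) * 0.8)]     # 80th-percentile (risk-adjusted) outcome
print(f"P50: {p50:.1f} weeks, P80: {p80:.1f} weeks")
```

The gap between the median and the higher-percentile outcome is one way an analysis like this expresses a "6 to 12 months of risk" band around a target date.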
Lessons that NASA Can Apply to Better Manage its Human Spaceflight Acquisitions NASA is currently embarking on an aggressive goal to return humans to the lunar surface in 2024. To achieve this goal, NASA not only needs SLS, Orion, and EGS to have completed their first two test missions, but is also developing several new systems. These new systems include a Lunar Gateway that will orbit the moon, landers that will transport astronauts from the Gateway to the lunar surface, and new space suits. Human spaceflight projects face inherent technical, design, and integration risks because they are complex, specialized, and are pushing the state of the art in space technology. Moreover, these programs can be very costly and span many years, which means they may also face changes in direction from Administrations and the Congress. Meeting the 2024 goal will also be challenging given the effort needed to better manage SLS, Orion, and EGS, coupled with the addition of the new programs, which are likely to compete for management attention and resources. Nevertheless, our past work has identified a range of actions that NASA can take to better position its human spaceflight programs for success. Today I would like to highlight three lessons from the SLS, Orion, and EGS programs that NASA can apply to improve the management of its human spaceflight programs. Enhance Contract Management and Oversight to Improve Program Outcomes. Over the past several years, we and the NASA Office of the Inspector General have identified shortcomings related to NASA’s management and oversight of its human spaceflight contracts. These shortcomings have left NASA ill-positioned to identify early warning signs of impending schedule delays and cost growth, reap the potential benefits of competition, and achieve desired results through contractor incentives. 
In July 2014, we found that NASA allowed high-value modifications to the SLS contracts to remain undefinitized for extended periods—in one instance a modification remained undefinitized for 30 months. Undefinitized contract actions such as these authorize contractors to begin work before reaching a final agreement with the government on terms and conditions. We have previously found that while undefinitized contract actions may be necessary under certain circumstances, they are considered risky in part because the government may incur unnecessary costs if requirements change before the contract action is definitized. Because lack of agreement on the terms of these modifications prolonged NASA's time frames for definitizing them, NASA was delayed in establishing the contractor cost and schedule baselines necessary to monitor performance. Specifically, we found in July 2014 that, in most cases, the SLS program did not receive complete earned value management data derived from approved baselines on these SLS contracts. Earned value, or the planned cost of completed work and work in progress, can provide accurate assessments of project progress, produce early warning signs of impending schedule delays and cost overruns, and provide unbiased estimates of anticipated costs at completion. In July 2014, we also found the SLS program could be in a favorable position to compete contracts for the exploration upper stage, the upper stage engine, and advanced boosters that it expected to use on future variants of the launch vehicle. At that time, except for the RS-25 engines, NASA's contracting approach for the SLS program did not commit the program beyond the hardware needed for the second mission, and we found that moving forward the agency would be in a position to take advantage of the evolving launch vehicle market.
We found that an updated assessment of the launch vehicle market could better position NASA to sustain competition, control costs, and better inform the Congress about the long-term affordability of the program. We recommended that, before finalizing acquisition plans for future capability variants, NASA should assess the full range of competition opportunities and provide to the Congress the agency's assessment of the extent to which development and production of future elements of the SLS could be competitively procured. NASA agreed with the recommendation, which we have identified as among those that warrant priority attention. Since we made that recommendation, NASA has awarded a sole-source contract for the upper stage engine, and agency officials told us in July 2018 that they planned to incorporate additional booster development under the existing contract. This further limits opportunities for competition in the program. Our body of work on contracting has shown that competition in contracting is a key element for achieving the best return on investment for taxpayers. We have found that promoting competition increases the potential for acquiring quality goods and services at a lower price and that noncompetitive contracts carry the risk of overspending because, among other reasons, they are negotiated without the benefit of competition to help establish pricing. In July 2016, we found that the lack of earned value management data for the SLS Boeing core stage contract persisted. Without this information, some 4.5 years after contract award, the program continued to be in a poor position to understand the extent to which technical challenges with the core stage were having schedule implications or the extent to which they may have required reaching into the program's cost reserves.
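The earned value management data discussed above rests on a few standard formulas comparing planned value (the budgeted cost of work scheduled), earned value (the budgeted cost of work actually completed), and actual cost. A minimal sketch of those calculations follows, using hypothetical figures rather than any NASA contract data.

```python
# Standard earned value management (EVM) variance formulas, applied to
# hypothetical figures -- not actual SLS or Orion contract data.

def evm_metrics(pv: float, ev: float, ac: float) -> dict:
    """Return the basic EVM indicators for one reporting period."""
    return {
        "cost_variance": ev - ac,        # negative => over cost
        "schedule_variance": ev - pv,    # negative => behind schedule
        "cpi": ev / ac,                  # cost performance index (<1 is unfavorable)
        "spi": ev / pv,                  # schedule performance index (<1 is unfavorable)
    }

# Example: $100M of work was scheduled, $80M worth was completed,
# and completing it actually cost $95M.
m = evm_metrics(pv=100.0, ev=80.0, ac=95.0)
print(m)

# An estimate at completion (EAC) extrapolates the remaining work at the
# current cost efficiency: EAC = budget at completion / CPI.
bac = 500.0
eac = bac / m["cpi"]
print(f"EAC: {eac:.2f}")
```

Without approved baselines, the planned-value term is unreliable, which is why the missing data left the program unable to produce the early warning signals these indicators are designed to give.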
In October 2018, the NASA Office of Inspector General reported that NASA does not require Boeing to report detailed information on development costs for the two core stages and the exploration upper stage, making it difficult for the agency to determine whether the contractor is meeting cost and schedule commitments for each deliverable. The NASA Office of Inspector General found that, given the cost-reporting structure, the agency is unable to determine the cost of a single core stage. Internally, Boeing tracks all individual costs but submits a combined statement of labor hours and material costs through the one contract line item for all its development activities. NASA approximates costs based on numerous monthly and quarterly reviews with the contractor to track the progress of each individual deliverable. The NASA Office of Inspector General made a number of recommendations aimed at improving reporting relative to the core stage contract. Among these was a specific recommendation to separate each deliverable into its own contract line item number for tracking performance, cost, and award fees. NASA concurred with this recommendation and is currently renegotiating the core stage contract with Boeing. In June 2019, we found that NASA's approach to incentivizing Boeing for the SLS stages and Lockheed Martin for the Orion crew spacecraft has not always achieved overall desired program outcomes. NASA paid over $200 million in award fees from 2014 through 2018 related to contractor performance on the SLS stages and Orion spacecraft contracts, but the programs continue to fall behind schedule and incur cost overruns. For example, in its December 2018 award fee letter to Boeing, in which the contractor earned over $17 million in award fees, NASA's fee determination official noted that the significant schedule delays on this contract have caused NASA to restructure the flight manifest for SLS.
For the Lockheed Martin Orion contract, the contractor earned over $29 million for the award fee period ending April 2017. NASA noted that Lockheed Martin was not able to maintain its schedule for the crew service module and that the contractor's schedule performance had decreased significantly over the previous year. In June 2019, we reported that our past work shows that when incentive contracts are properly structured, the contractor has a profit motive to keep costs low, deliver a product on time, and make decisions that help ensure the quality of the product. Our prior work also shows, however, that incentives are not always effective tools for achieving desired acquisition outcomes. We have found that, in some cases, there are significant disconnects between contractor performance and fees paid: contractors were awarded the majority of available award fees without achieving desired program results. Additionally, we have found that some agencies did not have methods, data, or performance measures to evaluate the effectiveness of award fees. As part of our June 2019 report, we recommended that NASA direct the SLS and Orion programs to reevaluate their strategies for incentivizing contractors and determine whether they could more effectively incentivize contractors to achieve the outcomes intended as part of ongoing and planned contract negotiations. NASA agreed with the intent of this recommendation and stated that the SLS and Orion program offices will reevaluate their strategies for incentivizing contract performance as part of contracting activities, including contract restructures, contract baseline adjustments, and new contract actions. We will continue to follow up on the actions the agency is taking to address this recommendation after its ongoing contract negotiations are complete. Minimize Risky Programmatic Decisions to Better Position Programs for Successful Execution.
Through our reviews of NASA's human spaceflight programs, we have found that NASA leadership has approved programmatic decisions that compound technical challenges. These decisions include approving cost and schedule baselines that do not follow best practices, establishing insufficient cost and schedule reserves, and operating under aggressive schedules. As a result, these programs have been at risk of cost growth and schedule delays since NASA approved their baselines. In July 2015, we found that NASA generally followed best practices in preparing the SLS cost and schedule baseline estimates for the limited portion of the program life cycle covered through launch readiness for the first test flight of SLS. However, we could not deem the cost estimate fully reliable because it did not fully meet the credibility best practice. Although an independent NASA office reviewed the cost estimate developed by the program, and the program made some adjustments as a result, officials did not commission a separate independent cost estimate to compare with the program's estimate and identify areas of discrepancy or difference. In addition, the program did not cross-check its cost estimate using an alternative methodology. The purpose of developing a separate independent cost estimate and cross-checking the estimate is to test the program's cost estimate for reasonableness and, ultimately, to validate it. In July 2016, we found that the Orion program's cost and schedule estimates were not reliable based on best practices for producing high-quality estimates. For example, the cost estimate lacked necessary support, and the schedule estimate did not include the level of detail required for high-quality estimates.
Therefore, we recommended that NASA perform an updated joint cost and schedule confidence level analysis, including updating cost and schedule estimates in adherence with cost and schedule estimating best practices, which we have identified as among those recommendations that warrant priority attention. NASA officials have stated that they have no plans to implement our recommendation. In commenting on the July 2016 report, NASA stated that the agency had reviewed, in detail, the Orion integrated cost/schedule and risk analysis methodology and determined the rigor to be a sufficient basis for the agency's commitments. However, without sound cost and schedule estimates, decision makers do not have a clear understanding of the cost and schedule risk inherent in the program or important information needed to make programmatic decisions. We continue to believe that NASA should fully implement our recommendation. In our 2017 High-Risk Report, we highlighted concerns that all three programs—SLS, Orion, and EGS—were operating with limited cost reserves, limiting each program's ability to address risks and unforeseen technical challenges. For example, we found in July 2016 that the Orion program was planning to maintain low levels of cost reserves until later in the program. The lack of cost reserves at that time had caused the program to defer work to address technical issues to stay within budget. Also in our 2017 High-Risk Report, we highlighted concerns regarding each program managing to an aggressive internal NASA launch readiness date. This approach creates an environment in which programs make decisions based on reduced knowledge to meet a date that is not realistic. For example, the EGS program had consolidated future schedule activities to prepare the Mobile Launcher—the vehicle used to bring SLS to the launch pad—to meet its internal goal.
The program acknowledged that consolidating activities—which included conducting verification and validation concurrent with installation activities—increased risk because of uncertainties about how systems not yet installed may affect the systems already installed. Officials added, however, that this concurrency is necessary to meet the internal schedule. Subsequently, as discussed above, NASA delayed its committed launch readiness date. Improve Transparency into Costs for Long-term Plans. As we previously reported, a key best practice for development efforts is that requirements need to be matched to resources (for example, time, money, and people) at program start. In the past, we have found that NASA programs, including the Constellation Program, did not have sufficient funding to match demanding requirements. Funding gaps can cause programs to delay or delete important activities and thereby increase risks. In addition, since May 2014, we have found there has been a lack of transparency into the long-term costs of these human spaceflight programs. As discussed above, the EGS and SLS programs do not have a cost and schedule baseline that covers activities beyond the first planned flight. In addition, as previously noted, the Orion program does not have a baseline beyond the second planned flight. As a result, NASA is now committing to spend billions of taxpayer dollars for missions that do not have a cost and schedule baseline against which to assess progress. To that end, we have made recommendations in the past on the need for NASA to baseline these programs’ costs for capabilities beyond the first mission; however, a significant amount of time has passed without NASA taking steps to fully implement these recommendations. 
Specifically, among those recommendations that we have identified as warranting priority attention, in May 2014 we recommended that, to provide Congress with the necessary insight into program affordability, ensure its ability to effectively monitor total program costs and execution, and to facilitate investment decisions, NASA should:

- Establish a separate cost and schedule baseline for work required to support the SLS for the second mission and report this information to the Congress through NASA's annual budget submission. If NASA decides to fly the SLS configuration used in the second mission beyond that mission, we recommended that it establish separate life cycle cost and schedule baseline estimates for those efforts, to include funding for operations and sustainment, and report this information annually to Congress via the agency's budget submission.

- Establish separate cost and schedule baselines for each additional capability that encompass all life cycle costs, to include operations and sustainment. This is important because NASA intends to use the increased capabilities of the SLS, Orion, and EGS well into the future.

As part of the latter recommendation, we stated that, when NASA could not fully specify costs due to lack of well-defined missions or flight manifests, the agency instead should forecast a cost estimate range—including life cycle costs—having minimum and maximum boundaries and report these baselines or ranges annually to Congress via the agency's budget submission. In its comments on our 2014 report, NASA partially concurred with these two recommendations, noting that much of what it had already done or expected to do would address them. For example, the agency stated that establishing the three programs as separate efforts with individual cost and schedule commitments met the intent of our recommendation.
NASA also stated that its plans to track and report development, operations, and sustainment costs in its budget to Congress as the capabilities evolved would also meet the intent of the recommendation. In our response, we stated that while NASA’s prior establishment of three separate programs lends some insight into expected costs and schedule at the broader program level, it does not meet the intent of the two recommendations because cost and schedule identified at that level is unlikely to provide the detail necessary to monitor the progress of each block against a baseline. Further, we stated that reporting the costs via the budget process alone will not provide information about potential costs over the long term because budget requests neither offer all the same information as life-cycle cost estimates nor serve the same purpose. Life-cycle cost estimates establish a full accounting of all program costs for planning, procurement, operations and maintenance, and disposal and provide a long-term means to measure progress over a program’s life span. We continue to believe that NASA should fully implement these recommendations. As NASA considers these lessons, it is important that the programs place a high priority on quality, for example, holding suppliers accountable to deliver high-quality parts for their products through such activities as regular supplier audits and performance evaluations of quality and delivery. As we found in June 2019, both the SLS and Orion programs have struggled at times with the quality of parts and components. For example, the Orion contractor has had a number of issues with subcontractor-supplied avionics system components failing during testing that have required time to address. NASA has highlighted concerns over the contractor’s ability to manage its subcontractors and the resulting significant cost, schedule, and technical risk impacts to the program. 
And the SLS program faced setbacks after its contractor did not verify the processes that its vendors were using to clean the fuel lines, resulting in delays to resolve residue and debris issues. Chairwoman Horn, Ranking Member Babin, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. GAO Contact and Staff Acknowledgments If you or your staff have any questions about this testimony, please contact Cristina T. Chaplain, Director, Contracting and National Security Acquisitions, at (202) 512-4841 or chaplainc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this statement include Molly Traci, Assistant Director; John Warren; Sylvia Schatz; Ryan Stott; and Chad Johnson. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study NASA is undertaking a trio of closely related programs to continue human space exploration beyond low-Earth orbit. These three programs include a launch vehicle, a crew capsule, and the associated ground systems at Kennedy Space Center. All three programs are working towards a launch readiness date of June 2020 for the first mission. NASA then plans for these systems to support future human space exploration goals, which include seeking to land two astronauts on the lunar surface. GAO has a body of work highlighting concerns over NASA's management and oversight of these programs. This statement discusses (1) the cost and schedule status of NASA's human spaceflight programs and (2) lessons that NASA can apply to improve its management of its human spaceflight programs. This statement is based on eight reports issued from 2014 to 2019 and selected updates as of September 2019. For the updates, GAO analyzed recent program status reports on program progress. What GAO Found The National Aeronautics and Space Administration's (NASA) three related human spaceflight programs are in the integration and test phase of development, a phase of the acquisition process that often reveals unforeseen challenges leading to cost growth and schedule delays. Since GAO last reported on the status of these programs in June 2019, each program has made progress. For example, the Orion program conducted a key test to demonstrate the ability to abort a mission should a life-threatening failure occur during launch. As GAO found in June 2019, however, the programs continue to face significant schedule delays. In November 2018, within one year of announcing an up to 19-month delay for the three programs—the Space Launch System (SLS) vehicle, the Orion crew spacecraft, and Exploration Ground Systems (EGS)—NASA senior leaders acknowledged the revised launch date of June 2020 is unlikely. 
In addition, any issues uncovered during integration and testing may push the date as late as June 2021. Moreover, GAO found that NASA's calculation of cost growth for the SLS program is understated by more than $750 million. GAO's past work has identified a number of lessons that NASA can apply to improve its management of its human spaceflight programs. For example, NASA should enhance contract management and oversight to improve program outcomes. NASA's past approach in this area has left it ill-positioned to identify early warning signs of impending schedule delays and cost growth or reap the benefits of competition. In addition, NASA's approach to incentivizing contractors through contract award fees did not result in desired outcomes for the SLS and Orion programs. Further, NASA should minimize risky programmatic decisions to better position programs for successful execution. This includes providing sufficient cost and schedule reserves to, among other things, address unforeseen risk. Finally, realistic cost estimates and assessments of technical risk are particularly important at the start of an acquisition program. But NASA has historically provided little insight into the future cost of these human spaceflight programs, limiting the information useful to decision makers. What GAO Recommends GAO has made 19 recommendations in these eight prior reports to strengthen NASA's acquisition management of SLS, Orion, and EGS. NASA generally agreed with GAO's recommendations and has implemented seven of them. Further action is needed to fully implement the remaining recommendations.
Background Rogue River-Siskiyou National Forest The Rogue River-Siskiyou National Forest, located mainly in southwestern Oregon and extending into northern California, encompasses nearly 1.8 million acres. The west side of the forest lies within the Klamath-Siskiyou ecoregion, which is known for its ecological diversity, with 28 coniferous tree species and numerous rare and endemic plants. The forest also contains diverse topography, with steep terrain and rugged geological features across several mountain ranges, including the Klamath Mountains, Siskiyou Mountains, Cascade Range, and Coast Range. Access to the forest is limited, due to many roadless areas and over 340,000 acres of wilderness, including the 180,000-acre Kalmiopsis Wilderness, where the Chetco Bar Fire began. Cities and communities in Oregon near the fire include Brookings and Gold Beach—along the coast of the Pacific Ocean—as well as Agness, Cave Junction, and Selma in Curry and Josephine counties. Figure 1 shows the final perimeter of the fire in southwest Oregon. The part of southwestern Oregon where the Rogue River-Siskiyou National Forest is located is a fire-adapted ecosystem, meaning that most native species and plant communities have evolved with fire, and many are adapted to or dependent on periodic wildfires. The historic fire interval in the area where the Chetco Bar Fire occurred varied, as did the historic severity of fires, according to a Forest Service ecologist. The forest experienced a number of fires over the 30 years before the Chetco Bar Fire occurred. In 1987, the Silver Fire burned nearly 100,000 acres. Fifteen years later, in 2002, the Biscuit Fire burned nearly 500,000 acres, including areas previously burned by the Silver Fire. The Chetco Bar Fire started in the areas burned by both the Silver and Biscuit Fires. 
In 2018, the year after the Chetco Bar Fire, the forest experienced another large fire, the Klondike Fire, which burned about 175,000 acres, abutting the burn scar of the Chetco Bar Fire in some places.

Frequency and Risk of Wildfires in the Western United States

The occurrence of large fires in the western United States has been increasing, while, at the same time, fire seasons have been increasing in length, according to recent assessments. Some of these assessments have found that these increases are due in part to climate change, which has contributed to increasing temperatures and droughts in the West, as well as a later onset of fire-season-ending rains. We have previously found that the cost of disasters, including wildfires, is projected to increase as extreme weather events such as droughts become more frequent and intense due to climate change.

Moreover, land use practices have increased the risk that severe and intense wildfires will affect people and communities. As we have previously described, land use practices over the past century have reduced forest and rangeland ecosystems’ resilience to fire. In particular, fire suppression—with 95 percent or more of fires suppressed for nearly a century—and timber harvesting and reforestation have contributed to abnormally dense accumulations of vegetation, and these accumulations can fuel uncharacteristically large or severe fires. In some parts of southwestern Oregon, significant vegetation has built up, according to Forest Service and other documents. As a result, southwestern Oregon, as well as other parts of the country, is under high to very high risk from fire, according to a risk assessment and Forest Service presentation. At the same time, development in and around wildland areas continues to increase, placing more people, businesses, and infrastructure at risk of being affected by fires. 
Fighting Wildfires in the United States

Because a single firefighting entity may not be able to handle all wildfires in its jurisdiction, agencies in the United States use an interagency incident management system that depends on the close cooperation and coordination of federal, state, tribal, and local fire protection agencies. The Forest Service is the predominant federal firefighting agency in terms of funding. Other federal firefighting agencies include the Bureau of Indian Affairs, BLM, Fish and Wildlife Service, and National Park Service. Federal and nonfederal firefighting entities generally share their firefighting personnel, equipment, and supplies and work together to fight fires, regardless of who has jurisdiction over the burning lands. Agreements between cooperating entities govern these firefighting efforts and contain general provisions for sharing firefighting assets and costs.

On a large wildfire, firefighting efforts generally fall into two phases—initial attack and extended attack. The initial attack phase consists of the efforts to control a fire during the first “operational period” after the fire is reported, generally 24 hours. While the majority of fires on Forest Service land are controlled and suppressed during initial attack, some fires require further firefighting efforts. Such additional efforts are referred to as extended attack.

The Forest Service and its interagency cooperators use an incident management system designed to provide appropriate leadership of firefighting efforts. There are five types of incidents, ranging in complexity from type 5 (least complex) to type 1 (most complex). The fire’s complexity determines the type of incident commander and management team assigned. For example, for a type-5 incident, the incident commander may be a local employee qualified to direct initial attack efforts on a small fire with two to six local firefighters. 
In contrast, for a type-1 incident, the incident commander is one member of a highly qualified incident management team, often with more than 500 firefighters and other personnel. There are 16 interagency type-1 incident management teams that operate nationwide and are typically deployed to fires for 14-day assignments. In addition, the Forest Service has four type-1 incident management teams under its National Incident Management Organization (NIMO). The Forest Service calls these “short” teams; each team has seven full-time members but can add more members as needed. NIMO teams generally handle complex fires, including long-duration fires, so as not to tie up critical firefighting personnel over a long time.

A single incident management team, under the direction of the agency administrator (the line officer, such as the forest supervisor or district ranger, responsible for management of the incident), is typically in charge of a fire, but the incident management system may be expanded into a unified command structure when multiple jurisdictions are involved. This structure brings together incident commanders from the relevant jurisdictions to facilitate a coordinated and integrated interagency response. In such cases, members of the unified command work together to develop a common set of incident objectives and strategies, maximize the use of firefighting assets, and enhance the individual jurisdictions’ efficiency.

Once assigned to a fire, an incident management team works with local line officers and fire management staff to determine the strategy and tactics to use in managing the fire. The strategy is the overall plan designed to control the fire; for example, to protect structures and contain the fire within a certain geographic area. Tactics are actions taken to accomplish the objectives set out in the strategy. For example, the fire may be attacked directly, with firefighters working at the fire’s edge to extinguish it. 
If direct attack is not possible, practical, or safe—because the fire is burning too intensely or on very steep slopes, for example—firefighters may choose to attack it indirectly. In such cases, firefighters typically select an area away from the fire and construct a “fireline,” where vegetation is cleared in an effort to stop the fire’s spread at that point or slow it sufficiently to allow firefighters to attack directly. Firefighters often incorporate geographic features such as roads, rocky areas, ridgelines, and rivers into firelines to increase their effectiveness. In some cases firefighters conduct burnout operations, in which they intentionally set fire to fuels between a fireline and the main fire perimeter to slow or contain a rapidly spreading fire by depriving it of fuel.

In carrying out strategies and tactics, firefighters use a variety of firefighting assets, both on the ground and in the air. Ground-based assets include firefighting crews, wildland fire engines, and machinery such as bulldozers, which firefighters use to help construct firelines. When providing personnel to fight fires, the Forest Service and other federal agencies generally rely on a “militia” strategy whereby personnel within each agency are trained to serve in firefighting roles when needed, in addition to performing their day-to-day work responsibilities. Air-based assets include helicopters and fixed-wing air tankers. Helicopters generally drop water directly on a fire, whereas air tankers generally drop fire retardant ahead of the fire, often near a fireline that has been constructed, to slow a fire’s spread. Air tankers range in size from small single-engine air tankers, which are maneuverable but carry only small amounts of retardant, to large aircraft such as converted DC-10s or Boeing 747s—referred to as “very large air tankers”—which can carry substantial amounts of retardant but whose use can be limited in mountainous terrain because of their size. 
The level of risk that decision makers and firefighters are willing to accept in any given situation depends on the experience and training of those involved. Overall, agency firefighting doctrine emphasizes safety above all other concerns; Forest Service policy, for example, states, “In conducting wildland fire suppression, responsible officials shall give first priority to the safety of firefighters, other personnel, and the public.” Firefighters and other personnel who respond to wildland fire incidents are required to complete training to help them identify risks as well as develop appropriate strategies and tactics to respond to different situations.

Key Events of the Chetco Bar Fire and Forest Service’s Response Included an Unsuccessful Initial Firefighting Attack and Rapid Spread of the Fire by Strong Winds

The Chetco Bar Fire grew slowly in the summer of 2017 before undergoing a period of rapid growth driven by strong, hot winds. In response, the Forest Service and other agencies undertook various firefighting strategies and tactics over different phases of the fire, described below. Figure 2 provides a timeline of the fire’s key events.

Initial Firefighting Attack in Remote, Steep Terrain Was Not Successful (July 12-13, 2017)

In the initial phase (July 12-13, 2017), the Chetco Bar Fire was relatively small and inaccessible. When the fire was first detected on July 12, it was estimated to be between one quarter and one half acre in size, burning in remote, steep terrain in the Kalmiopsis Wilderness in the Rogue River-Siskiyou National Forest. The fire’s initial location was several miles from the closest road access point. No properties or other “values at risk” (such as structures, other property, and natural and cultural resources that could be damaged by a wildfire) were in the immediate vicinity of the fire, according to Forest Service documents and officials. The Forest Service was notified of the Chetco Bar Fire at 2:43 p.m. 
on July 12 and, at 4:14 p.m., four Forest Service firefighters rappelled from a helicopter to assess the fire. The rappellers landed on a ridge above the fire to create a helispot (a temporary helicopter landing area) so that additional firefighters and equipment could more easily be brought to the fire. The rappellers requested and received permission from the district ranger for chainsaw use in the Kalmiopsis Wilderness to prepare the helispot, and they worked on cutting trees and clearing brush until late that evening, according to Forest Service documents and national forest officials. The rappellers estimated that the helispot was 60 percent cleared by the end of the first day, according to national forest officials. While the rappellers were working, the Forest Service helicopter returned to its base near Grants Pass, Oregon, to attach a bucket to drop water onto the fire. In the meantime, two helicopters from the Oregon Department of Forestry headed to the fire. The three helicopters dropped about 17,000 gallons of water the first day, according to Forest Service documents. Forest Service officials said these water drops were intended to slow the spread of the fire while the rappellers worked to clear the helispot. Anticipating that the helispot would be completed shortly, the Forest Service ordered two 20-person crews to assist in firefighting efforts the next day. As the rappellers set up camp for the night, incident command radioed them to say that the fire appeared to be holding at about three quarters of an acre.

The next morning, July 13, the Forest Service brought in four additional rappellers to continue working on the helispot throughout the morning and into the afternoon (see fig. 3). One of the rappellers walked the perimeter of the fire and determined that the fire had grown to about 10 acres overnight. 
While the rappellers were working, two helicopters dropped about 18,000 gallons of water that day and a single engine air tanker dropped 1,200 gallons, according to a Forest Service document. The crew bosses for the two crews that had been ordered the previous day flew over the fire in the early afternoon of July 13, according to Forest Service documents. They estimated the fire had grown to about 15 acres and observed a number of spot fires (smaller fires separate from the main fire) caused by burning material rolling down the hill. They expressed safety concerns about bringing crews into that area and also determined the helispot needed more work before a helicopter could land safely. Since the crews would need to be shuttled in by helicopter, the crew bosses decided not to bring in the requested crews, according to officials. Later that day, the incident commander requested a helicopter to remove the eight rappellers from the fire because of safety concerns and a low probability of success at containing the fire, according to the incident commander and Forest Service documents. The rappellers said that it was taking much longer to complete the helispot than initially anticipated and they did not have a good safety zone or escape route. They also noted that there was unburned vegetation on the slope between the fire and the helispot they were constructing—a dangerous situation if the fire started to spread quickly. The rappellers were removed by 5:00 p.m., at which time the helicopters also stopped dropping water. Figure 4 shows the ignition point of the Chetco Bar Fire and the fire’s growth as of July 13, 2017.

Fire Grew Slowly over Several Weeks as Firefighters Pursued Indirect Strategies (July 14-August 16, 2017)

In the second phase of the fire, Rogue River-Siskiyou National Forest officials assigned a type-3 incident management team to manage the response to the Chetco Bar Fire, following the unsuccessful initial attack. 
Forest Service documents indicated that fire behavior was moderate over the next several weeks, averaging around 150 acres of growth per day. The Chetco Bar Fire was a relatively low-priority fire during this phase, since it was far from values at risk and it remained within the Kalmiopsis Wilderness, while other fires in the region were threatening communities and resources, according to Forest Service documents and incident management team officials. Because firefighters had been unable to suppress the fire during initial attack, national forest officials said they anticipated, based on knowledge of previous fires in the area, that the Chetco Bar Fire would become a long-term incident. The type-3 incident management team completed a long-term assessment and began working to contain the fire using long-term, indirect strategies. Under the type-3 team, crews scouted potential locations to fight the fire and started building firelines some distance away, approximately 6 miles from the fire and outside of the wilderness boundary, according to a Forest Service document and an incident management team official. Several additional fire crews were assigned to work on the fire during this time, with staffing fluctuating between approximately 40 and 140 people per day.

As the type-3 team’s 2-week rotation was ending, national forest officials decided to bring in a NIMO team to assume command of the fire. Officials said they brought in a NIMO team because it consisted of type-1-qualified staff who could be staffed on the fire for longer than 2 weeks, and the team could expand or contract as needed. The NIMO team took command of the fire on July 29, with the fire estimated at 2,181 acres in size, and started updating the type-3 team’s long-term assessment and developing a long-term implementation plan. 
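The growth figures cited in this phase can be sanity-checked with simple arithmetic. The sketch below is illustrative only: the acreage estimates (about 15 acres on the afternoon of July 13 and 2,181 acres on July 29) come from this report, but the assumption of roughly steady growth is ours, not the Forest Service's.

```python
# Back-of-the-envelope check of the "around 150 acres of growth per day"
# figure for the fire's second phase. Assumes roughly steady growth over
# the period, which the report does not claim.
acres_jul13 = 15     # crew bosses' estimate, afternoon of July 13
acres_jul29 = 2181   # estimated size when the NIMO team took command
days = 16            # July 13 through July 29
avg_daily_growth = (acres_jul29 - acres_jul13) / days
print(round(avg_daily_growth))  # ~135 acres/day, consistent with "around 150"
```

The result (about 135 acres per day) is broadly consistent with the roughly 150 acres per day of growth that Forest Service documents indicated.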
The plan identified 13 trigger points, referred to as “management action points,” to help guide decision-making on protecting high values at risk if certain conditions were met. For example, the plan laid out actions to prevent the fire from crossing the Chetco River—the first trigger point identified—and actions to be taken if the fire crossed the river. The NIMO team continued the type-3 team’s efforts to construct a series of firelines away from the main fire and, according to a team summary document, completed all of the firelines by August 17. Forest Service officials told us that for these firelines to be effective, firefighters would have needed to burn the vegetation between the lines and the fire itself (known as a burnout). National forest and NIMO team officials said that the teams had not yet taken this step because they considered it an unnecessary risk as long as the fire remained north of the Chetco River. These officials said that burnout operations pose risks if the fire set by firefighters burns in a different direction than intended, and such operations can unnecessarily burn a larger area of the forest if the fire does not reach the burnout. Therefore, one national forest official said firefighters will prepare firelines but not conduct burnout operations until the incident management team determines they are needed—particularly since safety risks can be associated with conducting burnout operations. Figure 5 shows the Chetco Bar Fire’s growth from July 14 through August 16, 2017.

Fire Expanded Rapidly because of Strong Winds, and Firefighting Response Began to Escalate (August 17-August 21, 2017)

As the fire burned into August, hotter and drier weather created conditions for more active fire behavior in the third phase of the fire. Chetco Effect winds developed in mid-August 2017, causing the Chetco Bar Fire to rapidly expand and intensify (see sidebar). 
The Forest Service was aware of the potential for such winds, as fire behavior modeling and the July 2017 long-term assessment showed the potential for these winds to increase fire behavior dramatically by mid-August. The winds, combined with dry fuels and heavy vegetation, created conditions that led to extreme fire behavior.

Chetco Effect Winds

Chetco Effect winds, also known as Brookings Effect winds, are warm, dry, and strong winds flowing down the Chetco River Basin toward Brookings, Oregon (see figure below). Such winds are more broadly referred to as Foehn or downslope winds, other examples being the Santa Ana winds in southern California and the Diablo winds in northern California. Chetco Effect winds can happen any time and generally occur two to four times a year, according to the National Weather Service.

The Chetco Effect winds first occurred the evening of August 15 and morning of August 16, but the fire remained north of the Chetco River. When the winds returned the evening of August 16 and morning of August 17, the fire crossed the river and began expanding rapidly, in part because heavy vegetation on the south side of the river fueled the fire under the winds. Many officials and stakeholders said nothing could be done to moderate the fire’s behavior when the Chetco Effect winds were in effect. The fire increased in size from 8,500 acres on August 17 to 91,551 acres on August 21 (see fig. 6). As a result, the Chetco Bar Fire became a much higher priority fire, according to Forest Service documents. The NIMO team ordered additional crews on August 17, in anticipation of conducting burnout operations along 10 miles of fireline in an attempt to slow the fire, according to Forest Service documents. However, the Chetco Effect winds caused the fire to move rapidly toward and past the fireline before the Forest Service could conduct the planned burnouts. 
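To put the wind-driven expansion in perspective, the acreages cited above imply that the burned area nearly doubled each day. A minimal sketch, assuming smooth exponential growth between the two reported dates (an illustrative simplification of ours, not a claim in the report):

```python
# Average daily growth factor implied by the reported acreages during the
# Chetco Effect wind event. Assumes smooth exponential growth between the
# two dates, which is an illustrative simplification.
start_acres = 8_500   # August 17
end_acres = 91_551    # August 21
days = 4
daily_factor = (end_acres / start_acres) ** (1 / days)
print(round(daily_factor, 2))  # ~1.81: the fire nearly doubled in size each day
```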
Even though the fireline was completed prior to being overrun by the fire, national forest officials told us that the weather conditions were not favorable for burnout operations, as the winds would have blown the burnout fires back toward private timberlands and populated areas. The winds also caused embers to fly far ahead of the fire during this time, creating spot fires 1 to 2 miles or more ahead of the main flame front.

On August 18, the Chetco Bar Fire began spreading from national forest onto private timberlands and unincorporated areas containing homes. As the fire began to threaten homes and other structures, the NIMO team directed firefighters to take appropriate action to try to protect those structures, if fire behavior allowed. For example, between August 18 and 21, Forest Service documents indicated that firefighters cleared brush around several structures and homes in a small community known as Wilderness Retreat and along two Forest Service roads. On August 19, the fire burned rapidly toward Wilderness Retreat and firefighters conducted an emergency burnout, which successfully protected the community, according to a NIMO team document and national forest officials. Around this time in another area, the Chetco Bar Fire burned six primary residences and more than 20 other structures, according to state and Forest Service documents. On August 20, the fire traveled 6 miles toward Brookings in a single day, and threatened more than 3,000 homes during this phase.

As the Chetco Bar Fire burned toward Brookings, the NIMO team notified the Curry County Sheriff that residents would need to be evacuated. However, the rapid spread of the fire provided limited time to notify residents and conduct evacuations, according to a NIMO team document and national forest officials. The Curry County Sheriff’s Office issued the first evacuation notices on August 18, and additional evacuation notices were issued between August 19 and 21. 
As the fire expanded, the NIMO team ordered additional firefighting assets, increasing the ground assets assigned from 65 firefighters and 1 fire engine on August 17, 2017, to 788 firefighters and 90 fire engines by August 21. However, some assets ordered were not available because they were assigned to other fires in the region. In addition to ground assets, additional aircraft were ordered and assigned to assist the firefighting effort—such as two large and one very large air tankers, which dropped retardant on the fire on August 17 and August 18. The incident management team had requested two additional air tankers, but the requests were cancelled since aircraft were unavailable, according to a Forest Service document. Some ordered drops from air tankers also were cancelled because of poor visibility from smoke. Six helicopters were ordered during this phase, four of which were assigned to the fire, but the helicopters also were unable to fly due to smoke, according to flight communication logs and an incident management senior official. With the Chetco Bar Fire’s rapid growth, national forest officials decided to order a type-1 incident management team on August 21. Since mobilizing the team would take time, a type-2 team already in the vicinity was brought in to assist the NIMO team on August 19. The type-1 team arrived on August 23 and assumed command on August 26, according to a team document.

Firefighting Response Continued to Escalate and Fire Burned Actively but Rate of Spread Slowed (August 22-September 22, 2017)

In the fourth phase, the Chetco Bar Fire continued to burn actively through the end of August and into September 2017, but the rate of its spread generally slowed. However, high temperatures and low humidity contributed to the fire growing from 97,758 acres on August 22 to 191,067 acres on September 22 (see fig. 7). 
Evacuations continued in the early part of this phase, with the fire threatening more than 8,500 homes during parts of September, but evacuation orders began to be lifted as the risk to homes declined. During this phase, the Forest Service ordered more firefighting assets, resulting in over 1,700 firefighters in total assigned to the fire. Between September 6 and 19, the fire began expanding to the east and was divided into east and west zones, with separate incident management teams assigned to each zone. Firefighters constructed firelines to the south and west of the fire. Forest Service documents indicated the agency put in 128 miles of fireline cut by bulldozers and 52 miles of hand-cut fireline, and used 141 miles of existing roads and 25 miles of natural features as firelines. Air tankers and helicopters continued supporting firefighters, dropping over 950,000 gallons of water, 55,000 gallons of retardant, and 10,000 gallons of gel during this phase, according to Forest Service documents. However, smoke from the fire hampered air operations, with one type-1 team reporting it was unable to conduct air operations for about half of the days it was in command (August 26 through September 9). Firefighters gained substantial control of the fire during this phase, going from 0 percent containment on August 22 to 97 percent containment by September 22.

Fire Intensity Moderated because of Changing Weather, and Fire Was Ultimately Contained (September 23-November 2, 2017)

In mid- to late-September, the weather started to change, with cooler days and more moisture, which helped to moderate the fire’s behavior. By September 23, the area had received several inches of rain, which nearly contained the fire, according to an incident management team document. Firefighting assets were released as the fire was contained. The Chetco Bar Fire was declared fully contained on November 2—nearly 4 months after it was detected. 
The fire burned a total of approximately 191,197 acres, according to the Forest Service’s Burned Area Emergency Response (BAER) report (see fig. 8).

Officials and Stakeholders Raised Concerns about the Response to the Chetco Bar Fire, Such as the Aggressiveness of Firefighting and Extent of Communication

Forest Service officials and stakeholders we interviewed raised a number of concerns about the Forest Service’s response to the Chetco Bar Fire. Many of these concerns related directly to the Forest Service’s response to the fire; some related to broader agency programs that may have had an effect on fire behavior. We grouped these concerns into five categories: (1) aggressiveness of firefighting response, (2) availability of firefighting assets, (3) communication with cooperators, (4) communication with the public, and (5) timber harvest and other fuel reduction activities. The Forest Service has taken steps that may help address some of the concerns, such as those related to communication. Agency officials and stakeholders expressed differing views about some of the concerns and whether changes were necessary.

Aggressiveness of Firefighting Response

Some national forest officials and many stakeholders we interviewed said that the Forest Service was not aggressive enough in fighting the Chetco Bar Fire before the Chetco Effect winds arrived in mid-August. Several of these stakeholders said if the Forest Service had used more aggressive firefighting strategies and tactics, the agency could have prevented the fire from getting as large as it did and threatening homes. Some of these officials and stakeholders raised concerns about whether incident management teams and line officers appropriately balanced the risks of different firefighting decisions during the fire. Some said the strategies and tactics taken early on may have put hundreds of firefighters and the public at risk later in the fire. 
National forest and incident management team officials said that in attempting to suppress the Chetco Bar Fire, they adopted firefighting strategies and tactics that considered firefighter safety, the values at risk, and the probability of success. National forest officials said that when deciding how to respond to the fire, they prioritized firefighter safety and also considered the likelihood that a particular response would be successful, in accordance with 2017 Forest Service guidance. As previously discussed, in the early stages of the Chetco Bar Fire, firefighters expressed concerns about their safety and the likelihood of success of certain tactics. In addition, national forest officials noted that after the rappellers asked to be pulled out of the fire and other firefighters expressed safety concerns, line officers were hesitant to send in additional firefighters. Other officials and stakeholders said the area where the Chetco Bar Fire started is very dangerous, with some noting that it is one of the most dangerous areas in the region and possibly the country to fight fire.

Specific concerns about the aggressiveness of the Forest Service’s response included the following:

Number of firefighters. Some officials and several stakeholders raised concerns about the Forest Service not sending in more firefighters at the beginning of the Chetco Bar Fire to try to contain it before it threatened homes. In response, national forest officials said that the four rappellers that were sent on the first day were part of an 18-person crew stationed near Grants Pass, Oregon. They were the only crew members available to respond on July 12, as the remaining crew members had just returned from another fire assignment, and firefighters are generally required to take 2 days off after completing a standard 14-day fire assignment. As previously noted, safety concerns also factored into decisions to remove the rappellers and not add crews on the second day of the fire. 
Absence of smokejumpers. Some stakeholders raised concerns that the Forest Service did not send smokejumpers into the Chetco Bar Fire in its early stages, saying that smokejumpers may have been more effective at suppressing the fire when it was small. In response, national forest officials said that the rappellers who were sent to the fire were located much closer to the ignition point than the closest smokejumpers and were able to respond more quickly. These officials also said that rappellers can be more effective in rough terrain with heavy timber, since they do not need an open space to land with parachutes and can be dropped closer to the fire.

Use of helicopters. Several stakeholders raised concerns about the Forest Service stopping the use of helicopters to drop water on the fire after the rappellers were removed. According to interagency guidance and Forest Service officials, water drops are not as effective at containing a fire without crews on the ground (to build firelines, for example), and they did not want to expose helicopter crews to unnecessary risk for actions that were unlikely to be effective. In addition, officials said that the water drops were causing burning logs and other debris to roll down the hill and create spot fires. Interagency guidance discusses the importance of coordinating air and ground firefighting tactics, noting that the effectiveness of aircraft is dependent on the deployment of ground assets.

Use of indirect strategies. Several stakeholders raised concerns about incident management teams not engaging the fire more directly in the first several weeks rather than constructing fireline miles away. Some of these stakeholders described this indirect approach as a “watch and wait” or “let it burn” approach. In response, officials said that they looked for locations and opportunities to fight the fire directly, but the fire’s remote location and rugged terrain made this difficult. 
One official estimated it would have taken firefighters 2 days to hike to the fire because of the distance and trail conditions.

Number of burnout operations. Several officials raised concerns about the Forest Service not conducting burnout operations before the Chetco Effect winds arrived in mid-August. However, as previously noted, officials stated that there are risks in conducting such operations.

Limited use of chainsaws. Some national forest officials raised concerns about limited use of chainsaws in the Kalmiopsis Wilderness, saying this prevented them from making quicker progress in constructing fireline. For example, two national forest fire management officials said that in trying to clear a wilderness trail to use as a fireline, the crew used handsaws rather than chainsaws after the initial attack, which made the task more difficult and time consuming.

Limited action to protect homes. Several stakeholders raised concerns about incident management teams not doing more to protect homes, stating that firefighters and equipment in the vicinity of homes that later burned were not used to help protect those homes. In response, national forest and headquarters officials said that although the agency tries to prevent fires from reaching homes, protecting homes and other private structures is the responsibility of state and local entities. Moreover, headquarters officials noted that Forest Service firefighters are not trained or equipped to defend structures.

Forest Service officials said that since the Chetco Bar Fire, the agency has expanded tools that may help address some of these concerns for future fires. They noted that some of these tools were not widely available at the time of the Chetco Bar Fire but are becoming more common. In particular, the Forest Service has an evolving risk management assistance program aimed at improving decision-making on fires by developing a strategic evaluation process. 
This program includes risk-management assistance teams that can be deployed to fires to assist with key decisions and exercises to help incident management teams and line officers analyze different firefighting options, according to program documents. For example, the Forest Service developed a tradeoff analysis tool through which decision makers assess different firefighting options and rate them according to how well they address firefighter safety, public safety, and values at risk. During the 2018 Klondike Fire, national forest officials said they brought in a risk-management team to facilitate analysis of firefighting options and included cooperators in the discussions. Officials said these discussions helped everyone understand the risks and tradeoffs of various firefighting options, adding transparency to the process.

Availability of Firefighting Assets

Several officials and stakeholders raised concerns about the number of firefighting assets assigned to the Chetco Bar Fire. According to Forest Service documents and officials, firefighting assets were stretched thin fighting other fires in the region, and there were a number of times throughout the Chetco Bar Fire when assets, such as management teams, crews, and helicopters, were requested but were unavailable (see table 1). For example, an incident management team that was heading to the Chetco Bar Fire was diverted to the Eagle Creek Fire, which was threatening homes and other structures near Portland, Oregon. Further, some officials said limited availability of certain firefighting assets with specific capabilities, such as infrared drones that can “see” through smoke or cloud cover, hindered their ability to fight the fire when visibility was limited.
Some officials also emphasized the importance of having more long-term fire analysts assigned to national forests and incident management teams to help develop and interpret fire behavior models and long-term assessments that, in turn, could help protect people and values at risk. However, other officials said that having additional assets likely would not have made a significant difference in the response to the Chetco Bar Fire because of the difficult terrain where the fire started and because of the Chetco Effect winds. Beyond their specific concerns with the Chetco Bar Fire, some stakeholders also observed that the Forest Service would likely benefit from having additional firefighting assets in the future, as the frequency and intensity of fires are likely to increase.

Forest Service officials acknowledged that there were not enough firefighting assets in 2017, given the number of large fires that year. As a result, they said they had to make difficult decisions regarding prioritizing assets, with fires threatening life and property receiving higher priority. Forest Service officials said that the agency is working to increase the number of some types of firefighting assets. For example, headquarters officials said that the agency was in the process of developing a drone program. In addition, officials said that the agency is working on increasing the availability of some assets, such as air tankers and helicopters, through the use of different contracting authorities.

Communication with Cooperators

Several officials and stakeholders raised concerns about communication among the various cooperators before and during the Chetco Bar Fire. In particular, some said that differences in firefighting approaches—due in part to cooperators’ differing missions, responsibilities, and priorities—had not been fully clarified in advance, leading some cooperators to express frustration with the Forest Service’s response to the fire.
For example, according to some officials and stakeholders, the Oregon Department of Forestry and Coos Forest Protective Association generally place more emphasis on protecting timberlands than the Forest Service, and this sometimes leads to differences in the agencies’ preferred approaches to responding to fires. For instance, when determining where to construct a fireline, Forest Service officials may identify a location aimed at keeping a fire from reaching homes, whereas cooperators from the Oregon Department of Forestry or Coos Forest Protective Association may prefer a location that also protects timberlands.

In addition, some stakeholders said that the frequent rotation of incident management teams—generally about once every 2 weeks—made it difficult for local cooperators to coordinate with those teams. One official noted that rotation of teams can make it difficult to build trust and maintain good communication with cooperators and the public. However, Forest Service headquarters officials said that the agency has studied the structure and use of incident management teams in the past, and the agency has not identified a better approach.

Several officials and some stakeholders noted lessons learned from the Chetco Bar Fire. For example, they cited the need to do more pre-season fire planning, such as meeting with cooperators before the fire season begins to discuss coordination among agencies and planning how they might respond to fires in certain situations. Some also noted the need to improve communication and transparency with cooperators during fires, such as through the use of risk-management assistance teams previously discussed. Officials and stakeholders said that communication among cooperators in the region has improved since the Chetco Bar Fire, helping to develop a shared understanding of the potential firefighting response in different locations and under different conditions.
Communication with the Public

Many officials and several stakeholders said the Forest Service did not provide sufficient or timely information to the public about the danger from the Chetco Bar Fire and what the agency was doing to fight it. In particular, several officials raised concerns about the Forest Service waiting to hold its first public meeting until over a month after the fire was detected. Several officials and some stakeholders said that in the absence of sufficient information, misinformation and rumors—such as incorrect information on evacuations in certain areas—spread, leading to frustration, anger, and fear on the part of the public. Officials and stakeholders said another lesson learned was the importance of communicating accurate and timely information through various means, including public meetings and social media.

Officials and stakeholders told us that the Rogue River-Siskiyou National Forest is taking steps to help ensure that it communicates more effectively during fires. For example, national forest officials said that since the Chetco Bar Fire, they have increased their level of communication with local communities. Officials also said they are now more proactive in monitoring social media and ensuring they post correct information on fires, among other things. As a result, officials and stakeholders said that public perception of the 2018 Klondike Fire was much more positive than of the Chetco Bar Fire, even though both fires burned more than 175,000 acres.

Timber Harvest and Other Fuel Reduction Activities

Some stakeholders raised concerns about past levels of timber harvest and other fuel reduction activities, saying that unharvested dead trees fueled the Chetco Bar Fire and made firefighting efforts more dangerous by leaving snags (standing dead trees) that could injure or kill firefighters.
Following wildfires, the Forest Service may consider whether to leave burned trees and allow the burned area to recover naturally or to harvest some of those trees—called salvage harvesting—with the intention of generating funds to help pay for the recovery of natural resources or infrastructure, such as trails or roads, among other purposes. Considerable scientific uncertainty exists about whether and how quickly harvested areas recover compared with unharvested areas. Disagreement also exists about the extent to which salvage harvesting generates funding, considering the cost of planning, preparing, and administering sales of salvaged trees.

Following the Chetco Bar Fire, the Forest Service determined that 13,626 acres of the burned area were potentially available for salvage harvesting. These areas had 50 to 100 percent tree mortality and were in areas of the Rogue River-Siskiyou National Forest where timber harvesting aligned with existing management objectives, according to an official. The Forest Service narrowed the area that it proposed putting up for salvage harvesting to 4,090 acres, removing areas that lacked economically viable timber, were inaccessible to logging equipment, were in roadless areas, or had sensitive wildlife habitat, among other factors. In total, the Forest Service offered 2,194 acres for salvage harvesting across 13 sales, according to an official. Of the 13 salvage sales offered, eight were sold, totaling 1,957 acres, and five were not sold. Of these five offers, three did not receive bids, and two were dropped by the Forest Service due to market changes or other considerations.

In contrast, several Forest Service officials and some stakeholders said that higher levels of timber harvest and fuel reduction would not have made a large difference in the Chetco Bar Fire because of the fire’s intensity and rate of spread under the Chetco Effect winds.
Several said that if there had been more timber harvest, the forest might have been replanted in ways that could have made the fire worse. Specifically, when replanting is done following timber harvest, trees may be planted more densely and uniformly than would occur if vegetation were allowed to grow back naturally, according to a Forest Service ecologist and some stakeholders. In addition, slash (debris from logging operations) is sometimes left on the ground after timber harvest, which can fuel future fires. As a result, areas where timber has been harvested may burn more severely during future fires, according to some officials and stakeholders.

Rogue River-Siskiyou National Forest officials said the forest has been carrying out many fuel reduction activities and has exceeded its fuel reduction target every year from fiscal year 2014 through fiscal year 2019 (see appendix I for a map of past timber harvests and other fuel reduction activities). As part of its fuel reduction efforts, the forest is creating some larger breaks in vegetation by connecting areas where fuel reduction activities have taken place, according to officials. Further, national forest officials are maintaining some firelines that were built during previous fires, including the Chetco Bar Fire, to aid in their response to future fires. Agency officials said these efforts are part of a broader move toward spatial fire planning, where areas at risk and effective places to contain wildfires are identified before fires start.

Chetco Bar Fire Had Various Effects on Homes and Infrastructure, Public Health, Local Businesses and Workers, and Natural and Cultural Resources

Forest Service officials and stakeholders we interviewed and reports and other documents we reviewed identified a variety of effects the Chetco Bar Fire had on local communities and resources.
We grouped these effects into four categories: (1) homes and infrastructure, (2) public health, (3) local businesses and workers, and (4) natural and cultural resources. Most of the identified effects were negative, although some positive short- and long-term effects were identified. For example, the Chetco Bar Fire damaged habitat for many wildlife species, but some species that prefer burned landscapes likely benefitted from the fire, according to officials.

Effects on Homes and Infrastructure

The Chetco Bar Fire destroyed six homes and damaged one home, according to Forest Service and state documents. The fire also threatened over 8,500 homes, causing more than 5,000 residents to be evacuated over the course of the fire, according to Forest Service documents. In addition, Forest Service and state documents stated that the fire destroyed more than 20 other structures and damaged at least eight more, such as garages and other outbuildings.

After a severe wildfire, soil erosion can increase and cause adverse effects. As fires burn, they destroy plant material, such as roots and leaves, that help prevent erosion during severe rainstorms. Plant roots help stabilize the soil, and leaves slow runoff by allowing water to seep into the soil. In some severe fires, burning vegetation creates a gas that penetrates the soil. As the soil cools, this gas condenses and forms a waxy coating that causes the soil to repel water. Rainwater and melted snow can then flow across these surfaces and cause erosion. Erosion can reduce water quality and damage roads. In addition, because burned soil does not absorb as much water as unburned soil, seeds have a harder time germinating, and surviving plants find it more difficult to obtain moisture.

Post-fire erosion damaged roads and some of the 63 miles of trails within the fire perimeter. Further, a campground within the national forest was partially damaged and closed to the public while being repaired.
Erosion following the Chetco Bar Fire also washed approximately 40,000 cubic yards of sediment into the Port of Brookings Harbor. A port official said that dredging the harbor is estimated to cost $4 million. The official noted that the commission governing the port was pursuing grants, such as disaster grants from the Federal Emergency Management Agency, to help with dredging costs but was unsure whether total costs could be covered. Local officials said that post-fire erosion could also negatively affect drinking water infrastructure, since the Chetco Bar Fire burned about 80 percent of Brookings’ watershed. Brookings received a grant to evaluate the fire’s effect on the city’s water system, according to a local official. The city hired a consultant, who reported in June 2018 that the quality of the water was generally excellent and that no significant water quality effects from the fire had been observed.

Effects on Public Health

People with existing lung disease may not be able to breathe as deeply or vigorously as they normally would during exposure to high levels of particulate matter. Healthy people may also experience these effects.

Exposure to wildfire smoke can increase susceptibility to respiratory infections and aggravate existing respiratory diseases, such as asthma and chronic bronchitis (see sidebar). Most healthy individuals recover quickly from smoke exposure and will not experience long-term health effects, according to an Environmental Protection Agency document; however, the effects of smoke exposure are more sudden and serious for sensitive groups, including children, older adults, and people with existing heart or lung disease. Local health officials and a national forest official also raised concerns about the potential long-term effects of exposure to wildfire smoke, but little data exist on such effects.
The Forest Service reported that four towns in the vicinity of the Chetco Bar Fire experienced, on average, about 9 days of unhealthy or worse air quality, although the severity and duration of wildfire effects on air quality varied by town (see fig. 10). Of these towns, Brookings had the most days—three—measured as “hazardous,” the worst category. The four towns also experienced about 5 days, on average, that were measured as being unhealthy for sensitive groups.

Many residents also experienced mental and emotional effects from the Chetco Bar Fire, according to local health officials and some stakeholders. A local health official said that some residents experienced post-traumatic stress disorder after the fire, with some residents becoming hypervigilant of smoke and sirens. Some stakeholders noted that the 2018 Klondike Fire, which burned nearby, led to additional mental and emotional stresses for those affected by the Chetco Bar Fire.

Effects on Local Businesses and Workers

The Chetco Bar Fire’s effects on local businesses and workers included damage to the tourism and logging industries. Local businesses lost revenue in the short term because of decreased summer tourism during the Chetco Bar Fire, according to some documents and many stakeholders. According to estimates from the Oregon Tourism Commission, businesses—including tourism-dependent ones such as hotels and restaurants—lost over $1 million in both Curry and Jackson counties, and businesses in Josephine County lost over $160,000 during the 2017 fire season. For example, the Oregon Shakespeare Festival canceled nine outdoor performances because of wildfire smoke, resulting in losses estimated at about $600,000, according to a company document. In addition, one vineyard in Cave Junction lost an estimated $10,000 to $20,000 in revenue because of reduced tasting room sales and vacation rentals, according to an Oregon vineyard association spokesperson.
The decrease in tourism also had short-term negative effects on workers in the tourism industry. According to a report, workers in Curry County lost income, in part due to employee furloughs, because of wildfires in 2017. Another document stated that Josephine County lost an estimated 100 jobs in 2017 because of the Chetco Bar Fire.

Following the fire, the governor of Oregon created the Chetco Bar Fire Recovery Council to help the region recover from the fire. The council assessed economic damage, identified recovery needs, and identified potential state funding for those needs. For example, in November 2017, the council identified a potential need for state economic development funds to assist local businesses. However, the council reported in March 2018 that three businesses affected by the fire had received federal loans from the U.S. Small Business Administration and that there was no longer a clear need for state economic development funds.

In addition, some stakeholders we interviewed and documents we reviewed raised concerns that if summer wildfire smoke became common in southern Oregon, it could have a long-term negative effect on tourism. However, a 2019 report found that wildfire smoke had a minimal effect on people’s willingness to consider traveling to southern Oregon in the future. One local business has set up air quality monitors at a tourist attraction to inform tourists of the current air quality.

The Chetco Bar Fire burned 14,130 acres of nonfederal timberlands, according to the Forest Service’s BAER report. One privately owned lumber company was particularly hard hit, with the fire burning about 10,000 acres of its timberlands, according to company representatives. This loss was about 10 percent of the company’s timberlands and represented about 5 years of its average harvest.
Following the fire, the company salvage-harvested approximately 6,000 acres of the burned timber, which company representatives said provided some short-term economic benefits for the company and, according to one stakeholder, also temporarily increased employment for loggers and truck drivers in the area. However, the long-term effects of the fire on the company are unknown. One representative said, depending on future market conditions, the loss of timber from the Chetco Bar Fire could lead the company to lay off employees or could jeopardize its future.

Effects on Natural and Cultural Resources

Soil and Vegetation

The severity of the Chetco Bar Fire varied across the forest, which led to varied effects on soil and vegetation. As shown in figure 11, within the perimeter of the Chetco Bar Fire, burn severity ranged as follows: unburned or very low (19 percent, or 36,027 acres); low (40 percent, or 76,613 acres); moderate (34 percent, or 64,545 acres); and high (7 percent, or 14,012 acres). The severity with which soil burns during a fire affects both the potential for erosion following the fire and the severity of damage to vegetation. Areas of the Chetco Bar Fire that burned at moderate and high severity had increased potential for erosion, according to the BAER report. As previously discussed, post-fire erosion damaged roads and other infrastructure. Further, the BAER report noted that severely burned areas may have lower soil productivity and vegetation growth. However, most of the native vegetation in the area is adapted to fire and is likely to recover over time, according to the BAER report. Moreover, a Forest Service ecologist said the Chetco Bar Fire helped create a more diverse forest structure (characterized as a mosaic of different species and age classes) that benefits many plant and animal species (see fig. 12).
For example, nine sensitive plant species found in the area burned by the Chetco Bar Fire thrive in early post-fire ecosystems, according to a Forest Service document. Further, officials said rapid regrowth of vegetation, such as a moss that thrives after fires, helped reduce erosion and limit potential future damage to roads and trails. Forest Service officials and documents noted that they did not expect widespread, long-term negative effects on vegetation from the Chetco Bar Fire, but they identified two negative effects:

Invasive plants. More than a thousand individual invasive plants (such as noxious weeds) were introduced to an approximately 13,000-acre area of the national forest during the Chetco Bar Fire, mainly via firefighters’ boots and equipment. Invasive plants can, in some cases, displace native plants, compromise the quality and quantity of habitat for wildlife and fish, and increase wildfire risk. A national forest official said that it is labor intensive and costly to eradicate invasive plants because they have to be pulled out by hand. The official said the agency does not have the resources to remove all of the invasive plants brought in during the fire and is prioritizing removal of those that are the fastest growing, most disruptive, and affect the most highly valued resources. In addition, the National Forest Foundation administered a $7,000 grant to remove invasive plants on 10 of the affected acres in June and July 2019.

Redwood stands. The Rogue River-Siskiyou National Forest contains the northernmost naturally occurring coast redwood tree stands, and the Chetco Bar Fire burned about 12 percent of the total area of redwood stands within the forest, or about 60 acres, according to a Forest Service ecologist. However, most of the area burned at low severity, though parts burned at moderate or high severity.
The ecologist said redwoods are adapted to survive fire, noting that larger trees will usually resprout from dormant buds under the bark along the entire length of the trunk (see fig. 13). Smaller trees and larger trees burned at high severity can be killed at the top but are often able to resprout.

Wildlife

In the short term, the Chetco Bar Fire killed or damaged habitat for many wildlife species, although the exact effect of the fire on wildlife is unknown, according to a Forest Service official. Most wildlife species are expected to recover, but the effects on some threatened and sensitive species could be longer lasting, according to Forest Service documents and officials. For example, half of the 13 known northern spotted owls—a species that is federally listed as threatened under the Endangered Species Act—living within the perimeter of the fire were estimated to have died from the fire, according to a Forest Service biologist. In addition, this biologist said the fire’s effect on the population of a seabird called the marbled murrelet, as well as on two mammals—Pacific marten and fisher—is unknown, although it negatively affected their habitats.

National forest officials said the Chetco Bar Fire also likely benefitted some wildlife species because the mosaic landscape resulting from the fire is preferred by some wildlife, including deer, elk, migratory birds, butterflies, and woodpeckers. For example, black-backed woodpeckers thrive in partly burned areas because they eat wood-boring beetles that feed on recently burned trees.

Fish

Erosion resulting from the Chetco Bar Fire likely had short-term negative effects on fish populations, including the threatened coho salmon, according to the BAER report. Sediment in the water makes it harder for fish to breathe and can smother their eggs.
In addition, over time, increased sediment in streams and rivers can disrupt salmon migration because salmon use their sense of smell to navigate to their native stream to spawn, and sediment can mask that smell. Some stakeholders said they were concerned that the loss of shade from trees might lead to warmer river water, thereby harming salmon. However, a Forest Service biologist said that vegetation near the river has regrown since the fire and there is no indication that the temperature of the river water has increased.

The fire may provide some long-term benefits for salmon and other fish species. Specifically, erosion following the fire is likely to increase the supply of downed trees and coarse gravel in streams and rivers, which provide places for fish to lay their eggs and hide, according to a study and a Forest Service biologist.

Cultural Resources

Some cultural resources—including archaeological sites, historic structures, and areas significant to contemporary Native American tribes—were negatively affected by the Chetco Bar Fire. The Forest Service reported that 130 known and recorded Native American archaeological sites were located within the perimeter of the Chetco Bar Fire, 49 of which the agency characterized as isolated sites containing one to three stone artifacts. The effect of the Chetco Bar Fire on known and recorded sites—and on any cultural sites not previously identified—is not fully known. Following the fire, as part of its BAER report, the Forest Service assessed some of these sites, including a prehistoric Native American village site and an area culturally important to Native American tribes. This report noted a number of cultural artifacts, such as arrowheads and tools, that were discolored by the fire or were displaced or moved during or after the fire by, for example, soil disruption caused by trees falling or roots burning and collapsing.
The report also stated that additional damage could occur in the future; for example, increased erosion could further damage some cultural sites, and vegetation loss could make artifacts more visible, increasing the potential for looting and vandalism. To help mitigate some of the effects, the Forest Service planted some of the burned area with native grass seed to reestablish ground cover and reduce erosion. In addition to the fire damaging cultural resources, a Forest Service archaeologist said fire suppression activities caused some damage. For example, Native American arrowheads and tools were unearthed when a bulldozer constructed a fireline. The archaeologist said that they took precautions to minimize suppression impacts on cultural resources, for instance by avoiding using heavy equipment in areas where cultural resources were known to be located.

Agency Comments

We provided a draft of this report to the Departments of Agriculture and the Interior for review and comment. In an email dated April 17, 2020, the Forest Service, responding on behalf of the Department of Agriculture, said it generally agreed with the draft report. The Forest Service also provided a technical comment, which we incorporated. The Department of the Interior told us it had no comments on the report.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Agriculture, the Secretary of the Interior, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or fennella@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report.
GAO staff who made major contributions to this report are listed in appendix II.

Appendix I: Map of Timber Harvests and Other Fuel Reduction Activities in the Area of the Chetco Bar Fire

Figure 14 shows the timber harvests and other fuel reduction activities—such as thinning vegetation or conducting prescribed burns—done in the area of the Chetco Bar Fire from 2008 through 2017.

Appendix II: GAO Contact and Staff Acknowledgments

GAO Contact

Anne-Marie Fennell, (202) 512-3841 or fennella@gao.gov

Staff Acknowledgments

In addition to the individual named above, Jonathan Dent (Assistant Director), Lesley Rinner (Analyst-in-Charge), Elizabeth Jimenez, and Jesse Lamarre-Vincent made key contributions to this report. Philip Farah, Ellen Fried, Richard P. Johnson, John Mingus, Edward J. Rice, Sara Sullivan, and Elizabeth Wood made additional contributions.
Why GAO Did This Study

A wildfire known as the Chetco Bar Fire began in the summer of 2017 in southwest Oregon and burned more than 190,000 acres over nearly 4 months. Since the fire began in a national forest, the Department of Agriculture's Forest Service played a key role in managing the firefighting response. Because the fire also threatened other lands, state and private firefighting entities were also involved.

GAO was asked to review the Forest Service's response to and the effects of the Chetco Bar Fire. This report describes (1) key events of the Chetco Bar Fire and the Forest Service's firefighting response, (2) key concerns raised by Forest Service officials and stakeholders about the Forest Service's response, and (3) effects of the fire on local communities and resources.

GAO reviewed federal documents related to key events and the response, such as incident action plans and daily status summaries; analyzed reports on effects of the fire; and visited burned areas. GAO also interviewed Forest Service, state, and local officials involved in the response, as well as other stakeholders—such as representatives of nongovernmental organizations and community members—to discuss key concerns and effects of the fire. To identify the stakeholders, GAO reviewed documents and interviewed Forest Service officials and stakeholders, who suggested others to interview.

What GAO Found

The Chetco Bar Fire was first reported in July 2017, burning in the Rogue River-Siskiyou National Forest in Oregon. Because of the remote, steep terrain, initial Forest Service attempts to fight the fire at close range were unsuccessful. The fire grew slowly over the next month. Firefighters, directed by the Forest Service, responded in various ways, such as by constructing “firelines”—clearing vegetation—in an effort to stop the fire's spread.
In mid-August, strong, hot winds caused the fire to expand rapidly, from 8,500 acres to more than 90,000 acres over several days, threatening thousands of homes. Firefighters continued constructing firelines and dropped water and retardant on the fire to try to contain it. In September, the weather changed and cooler days and rain moderated the fire. Firefighters fully contained the fire in November (see figure).

Forest Service officials and stakeholders raised a number of key concerns about the Forest Service's response to the Chetco Bar Fire. For example, some said that if the Forest Service's response had been more aggressive, it might have kept the fire from growing and threatening homes. Forest Service officials said that in making firefighting decisions, they prioritized firefighter safety and considered the likelihood that a particular response would be successful. The agency has taken steps to improve decision-making for future wildfires, such as developing a tradeoff analysis tool to help decision makers assess firefighting options.

Forest Service officials, stakeholders, and documents identified various effects of the fire. Some of these sources cited negative effects including destruction of six homes, damage to roads and trails, and damage to habitat for the northern spotted owl. However, the fire likely improved habitat for some species, such as woodpeckers that eat beetles that feed on burned trees, according to officials.
441 G St. N.W. Washington, DC 20548 To the Commissioner of Internal Revenue In our audits of the fiscal years 2019 and 2018 financial statements of the Internal Revenue Service (IRS), we found that (1) IRS’s financial statements as of and for the fiscal years ended September 30, 2019, and 2018, are presented fairly, in all material respects, in accordance with U.S. generally accepted accounting principles; (2) although internal controls could be improved, IRS maintained, in all material respects, effective internal control over financial reporting as of September 30, 2019; and (3) there was no reportable noncompliance for fiscal year 2019 with provisions of applicable laws, regulations, contracts, and grant agreements we tested. The following sections discuss in more detail (1) our report on the financial statements and on internal control over financial reporting, which includes required supplementary information (RSI) and other information included with the financial statements; (2) our report on compliance with laws, regulations, contracts, and grant agreements; and (3) agency comments. Report on the Financial Statements and on Internal Control over Financial Reporting In accordance with our authority conferred by the Chief Financial Officers (CFO) Act of 1990, as amended by the Government Management Reform Act of 1994, we have audited IRS’s financial statements. IRS’s financial statements comprise the balance sheets as of September 30, 2019, and 2018; the related statements of net cost, changes in net position, budgetary resources, and custodial activity for the fiscal years then ended; and the related notes to the financial statements. We also have audited IRS’s internal control over financial reporting as of September 30, 2019, based on criteria established under 31 U.S.C. § 3512(c), (d), commonly known as the Federal Managers’ Financial Integrity Act (FMFIA). We conducted our audits in accordance with U.S. generally accepted government auditing standards.
We believe that the audit evidence we obtained is sufficient and appropriate to provide a basis for our audit opinions. Management’s Responsibility IRS management is responsible for (1) the preparation and fair presentation of these financial statements in accordance with U.S. generally accepted accounting principles; (2) preparing, measuring, and presenting the RSI in accordance with U.S. generally accepted accounting principles; (3) preparing and presenting other information included in documents containing the audited financial statements and auditor’s report, and ensuring the consistency of that information with the audited financial statements and the RSI; (4) maintaining effective internal control over financial reporting, including the design, implementation, and maintenance of internal control relevant to the preparation and fair presentation of financial statements that are free from material misstatement, whether due to fraud or error; (5) evaluating the effectiveness of internal control over financial reporting based on the criteria established under FMFIA; and (6) its assessment about the effectiveness of internal control over financial reporting as of September 30, 2019, included in the accompanying Management’s Report on Internal Control over Financial Reporting in appendix I. Auditor’s Responsibility Our responsibility is to express an opinion on these financial statements and an opinion on IRS’s internal control over financial reporting based on our audits. U.S. generally accepted government auditing standards require that we plan and perform the audits to obtain reasonable assurance about whether the financial statements are free from material misstatement, and whether effective internal control over financial reporting was maintained in all material respects. We are also responsible for applying certain limited procedures to RSI and other information included with the financial statements. 
An audit of financial statements involves performing procedures to obtain audit evidence about the amounts and disclosures in the financial statements. The procedures selected depend on the auditor’s judgment, including the auditor’s assessment of the risks of material misstatement of the financial statements, whether due to fraud or error. In making those risk assessments, the auditor considers internal control relevant to the entity’s preparation and fair presentation of the financial statements in order to design audit procedures that are appropriate in the circumstances. An audit of financial statements also involves evaluating the appropriateness of the accounting policies used and the reasonableness of significant accounting estimates made by management, as well as evaluating the overall presentation of the financial statements. An audit of internal control over financial reporting involves performing procedures to obtain evidence about whether a material weakness exists. The procedures selected depend on the auditor’s judgment, including the assessment of the risk that a material weakness exists. An audit of internal control over financial reporting also includes obtaining an understanding of internal control over financial reporting, and evaluating and testing the design and operating effectiveness of internal control over financial reporting based on the assessed risk. Our audit of internal control also considered IRS’s process for evaluating and reporting on internal control over financial reporting based on criteria established under FMFIA. Our audits also included performing such other procedures as we considered necessary in the circumstances. We did not evaluate all internal controls relevant to operating objectives as broadly established under FMFIA, such as those controls relevant to preparing performance information and ensuring efficient operations. We limited our internal control testing to testing controls over financial reporting. 
Our internal control testing was for the purpose of expressing an opinion on whether effective internal control over financial reporting was maintained, in all material respects. Consequently, our audit may not identify all deficiencies in internal control over financial reporting that are less severe than a material weakness. Definition and Inherent Limitations of Internal Control over Financial Reporting An entity’s internal control over financial reporting is a process effected by those charged with governance, management, and other personnel, the objectives of which are to provide reasonable assurance that (1) transactions are properly recorded, processed, and summarized to permit the preparation of financial statements in accordance with U.S. generally accepted accounting principles, and assets are safeguarded against loss from unauthorized acquisition, use, or disposition, and (2) transactions are executed in accordance with provisions of applicable laws, including those governing the use of budget authority, regulations, contracts, and grant agreements, noncompliance with which could have a material effect on the financial statements. Because of its inherent limitations, internal control over financial reporting may not prevent, or detect and correct, misstatements due to fraud or error. We also caution that projecting any evaluation of effectiveness to future periods is subject to the risk that controls may become inadequate because of changes in conditions, or that the degree of compliance with the policies or procedures may deteriorate. Opinion on Financial Statements In our opinion, IRS’s financial statements present fairly, in all material respects, IRS’s financial position as of September 30, 2019, and 2018, and its net cost of operations, changes in net position, budgetary resources, and custodial activity for the fiscal years then ended in accordance with U.S. generally accepted accounting principles. 
In accordance with federal accounting standards, IRS’s financial statements do not include an estimate of the dollar amount of taxes that are owed to the federal government but that taxpayers have not reported or that IRS has not identified through its enforcement programs, often referred to as the tax gap, nor do they include information on tax expenditures. Further detail on the tax gap and tax expenditures, as well as the associated dollar amounts, is provided in the other information included with the financial statements. Opinion on Internal Control over Financial Reporting In our opinion, although internal controls could be improved, IRS maintained, in all material respects, effective internal control over financial reporting as of September 30, 2019, based on criteria established under FMFIA. Our fiscal year 2019 audit identified continuing deficiencies concerning IRS’s internal control over unpaid assessments and continuing and new deficiencies concerning IRS’s internal control over financial reporting systems. While not considered material weaknesses, these deficiencies are collectively important enough to merit attention by those charged with governance of IRS. Therefore, we considered these issues affecting IRS’s internal controls over unpaid assessments and financial reporting systems to be significant deficiencies in internal control as of September 30, 2019. These two significant deficiencies are discussed in more detail below. We considered these significant deficiencies in determining the nature, timing, and extent of our audit procedures on IRS’s fiscal year 2019 financial statements. Although the significant deficiencies in internal control did not affect our opinion on IRS’s fiscal year 2019 financial statements, misstatements may occur in unaudited financial information reported internally and externally by IRS because of these significant deficiencies. 
In addition, because of the significant deficiencies in internal controls over unpaid assessments and financial reporting systems that existed during fiscal year 2019, IRS’s financial management systems did not comply substantially with federal financial management systems requirements as required by the Federal Financial Management Improvement Act of 1996. We will be reporting additional details concerning any new issues relating to these significant deficiencies separately to IRS management, along with recommendations for corrective actions. We also identified other deficiencies in IRS’s internal control over financial reporting that we do not consider to be material weaknesses or significant deficiencies. Nonetheless, these deficiencies warrant IRS management’s attention. We have communicated these matters to IRS management and, where appropriate, will report on them separately along with related recommendations for corrective actions. Further, as we have reported in past audits, IRS continues to face significant ongoing financial management challenges relating to safeguarding taxpayer receipts and associated information, and preventing and detecting fraudulent refunds based on identity theft. Although these challenges do not rise to the level of significant deficiencies in internal control, we believe they are sensitive matters requiring IRS management’s attention. We have made several recommendations to IRS to enhance its internal controls to mitigate these challenges. It is important that IRS continue its efforts to minimize the risks these challenges pose to taxpayers and any associated losses to the federal government. Significant Deficiency in Internal Control over Unpaid Assessments Limitations in the financial systems IRS uses to account for federal taxes receivable and other unpaid assessment balances, as well as other control deficiencies that led to errors in taxpayer accounts, continued to exist during fiscal year 2019.
As a result of these deficiencies, IRS’s systems were unable to provide the timely, reliable, and complete transaction-level financial information necessary to enable IRS to appropriately classify and report unpaid assessment balances. As in prior years, IRS used a complex and labor-intensive statistical estimation process to compensate for the effects of its system limitations and other deficiencies on a material portion of its federal taxes receivable balance to help ensure that this balance was free of material misstatement. During fiscal year 2019, IRS recorded adjustments totaling about $17 billion to correct the effects of continued errors in its underlying data that IRS identified during its manual estimation process. While using this process to determine a material portion of taxes receivable has enabled IRS to produce reliable related balances for year-end reporting, it does not provide IRS management with readily available, reliable unpaid assessment information on a daily basis throughout the year in order to effectively manage unpaid assessment balances. Further, errors in taxpayer accounts create a burden for those taxpayers whose accounts were affected. While not collectively considered a material weakness, IRS’s ongoing control deficiencies related to unpaid assessments are important enough to merit attention by those charged with governance of IRS. Therefore, these issues represent a significant deficiency in IRS’s internal control over financial reporting as of September 30, 2019. During fiscal year 2019, IRS documented the key management decisions in the design and use of the estimation process. This step should reduce the risk that IRS may perform sampling procedures inconsistent with management intent or plans. Continued management commitment and sustained efforts are necessary to build on the progress made to date and to fully address IRS’s remaining unresolved issues concerning the management and reporting of unpaid assessments.
Significant Deficiency in Internal Control over Financial Reporting Systems During our fiscal year 2019 audit, we determined that unresolved information system control deficiencies from prior audits, along with new control deficiencies pertaining to business process application controls and general controls in IRS’s information systems, collectively represent a significant deficiency in IRS’s internal control over financial reporting systems. Specifically, IRS did not correct control deficiencies we reported as of September 30, 2018, concerning (1) unnecessary access rights granted to accounts, (2) inconsistent monitoring of systems and accounts, (3) out-of-date and unsupported hardware and software, (4) change controls over tax and financial management processing on the mainframe, and (5) developing and implementing effective policies and procedures as part of IRS’s security management program. In addition, during this year’s audit, we found new control deficiencies in the following areas: (1) implementing automated financial controls of interfaces between key applications, (2) ensuring that authorized personnel reviewed key documents for external systems, (3) enforcing multifactor authentication, (4) enforcing adequate encryption to protect systems and data, and (5) ensuring that patches installed on systems were current to protect against known vulnerabilities. The potential effect of these continuing and new deficiencies on IRS’s financial reporting for fiscal year 2019 was mitigated primarily by IRS’s compensating management controls designed to detect potential misstatements on the financial statements. Nevertheless, these application and general control deficiencies increase the risk of unauthorized access to, modification of, or disclosure of sensitive financial and taxpayer data and disruption of critical operations, and are therefore important enough to merit the attention of those charged with governance of IRS.
According to IRS management, IRS has developed a plan that focuses on strengthening its information system controls. Continued and consistent management commitment and attention will be essential to addressing existing financial reporting system deficiencies. Other Matters Required Supplementary Information U.S. generally accepted accounting principles issued by the Federal Accounting Standards Advisory Board (FASAB) require that the RSI be presented to supplement the financial statements. Although the RSI is not a part of the financial statements, FASAB considers this information to be an essential part of financial reporting for placing the financial statements in appropriate operational, economic, or historical context. We have applied certain limited procedures to the RSI in accordance with U.S. generally accepted government auditing standards, which consisted of inquiries of management about the methods of preparing the RSI and comparing the information for consistency with management’s responses to the auditor’s inquiries, the financial statements, and other knowledge we obtained during the audit of the financial statements, in order to report omissions or material departures from FASAB guidelines, if any, identified by these limited procedures. We did not audit and we do not express an opinion or provide any assurance on the RSI because the limited procedures we applied do not provide sufficient evidence to express an opinion or provide any assurance. Other Information IRS’s other information contains a wide range of information, some of which is not directly related to the financial statements. This information is presented for purposes of additional analysis and is not a required part of the financial statements or the RSI. We read the other information included with the financial statements in order to identify material inconsistencies, if any, with the audited financial statements. 
Our audit was conducted for the purpose of forming an opinion on IRS’s financial statements. We did not audit and do not express an opinion or provide any assurance on the other information. Report on Compliance with Laws, Regulations, Contracts, and Grant Agreements In connection with our audits of IRS’s financial statements, we tested compliance with selected provisions of applicable laws, regulations, contracts, and grant agreements consistent with our auditor’s responsibility discussed below. We caution that noncompliance may occur and not be detected by these tests. We performed our tests of compliance in accordance with U.S. generally accepted government auditing standards. Management’s Responsibility IRS management is responsible for complying with laws, regulations, contracts, and grant agreements applicable to IRS. Auditor’s Responsibility Our responsibility is to test compliance with selected provisions of laws, regulations, contracts, and grant agreements applicable to IRS that have a direct effect on the determination of material amounts and disclosures in IRS’s financial statements, and perform certain other limited procedures. Accordingly, we did not test compliance with all laws, regulations, contracts, and grant agreements applicable to IRS. Results of Our Tests for Compliance with Laws, Regulations, Contracts, and Grant Agreements Our tests for compliance with selected provisions of applicable laws, regulations, contracts, and grant agreements disclosed no instances of noncompliance for fiscal year 2019 that would be reportable under U.S. generally accepted government auditing standards. However, the objective of our tests was not to provide an opinion on compliance with laws, regulations, contracts, and grant agreements applicable to IRS. Accordingly, we do not express such an opinion. 
Intended Purpose of Report on Compliance with Laws, Regulations, Contracts, and Grant Agreements The purpose of this report is solely to describe the scope of our testing of compliance with selected provisions of applicable laws, regulations, contracts, and grant agreements and the results of that testing, and not to provide an opinion on compliance. This report is an integral part of an audit performed in accordance with U.S. generally accepted government auditing standards in considering compliance. Accordingly, this report on compliance with laws, regulations, contracts, and grant agreements is not suitable for any other purpose. Agency Comments In commenting on a draft of this report, IRS stated that it was pleased to receive an unmodified opinion on its financial statements. IRS also commented on its continued efforts to address its financial reporting systems control deficiencies and improve its internal controls in financial reporting of unpaid assessments. The complete text of IRS’s response is reproduced in appendix II. Appendix I: Management’s Report on Internal Control over Financial Reporting Appendix II: Comments from the Internal Revenue Service
Why GAO Did This Study In accordance with the authority conferred by the Chief Financial Officers Act of 1990, as amended, GAO annually audits IRS's financial statements to determine whether (1) the financial statements are fairly presented and (2) IRS management maintained effective internal control over financial reporting. GAO also tests IRS's compliance with selected provisions of applicable laws, regulations, contracts, and grant agreements. IRS's tax collection activities are significant to overall federal receipts, and the effectiveness of its financial management is of substantial interest to Congress and the nation's taxpayers. What GAO Found In GAO's opinion, the Internal Revenue Service's (IRS) fiscal years 2019 and 2018 financial statements are fairly presented in all material respects, and although controls could be improved, IRS maintained, in all material respects, effective internal control over financial reporting as of September 30, 2019. GAO's tests of IRS's compliance with selected provisions of applicable laws, regulations, contracts, and grant agreements detected no reportable instances of noncompliance in fiscal year 2019. Limitations in the financial systems IRS uses to account for federal taxes receivable and other unpaid assessment balances, as well as other control deficiencies that led to errors in taxpayer accounts, continued to exist during fiscal year 2019. These control deficiencies affect IRS's ability to produce reliable financial statements without using significant compensating procedures. In addition, unresolved information system control deficiencies from prior audits, along with application and general control deficiencies that GAO identified in IRS's information systems in fiscal year 2019, placed IRS systems and financial and taxpayer data at risk of inappropriate and undetected use, modification, or disclosure. IRS continues to take steps to improve internal controls in these areas.
However, the remaining deficiencies are significant enough to merit the attention of those charged with governance of IRS and therefore represent continuing significant deficiencies in internal control over financial reporting related to (1) unpaid assessments and (2) financial reporting systems. Continued management attention is essential to fully addressing these significant deficiencies. What GAO Recommends Based on prior financial statement audits, GAO made numerous recommendations to IRS to address internal control deficiencies. GAO will continue to monitor and will report separately on IRS's progress in implementing prior recommendations that remain open. Consistent with past practice, GAO will also be separately reporting on the new internal control deficiencies identified in this year's audit and providing IRS recommendations for corrective actions to address them. In commenting on a draft of this report, IRS stated that it continues its efforts to improve its financial reporting systems controls and internal controls over unpaid assessments.
Background U.S.–North Macedonia Relations The United States has maintained a cooperative relationship with North Macedonia across a broad range of political, economic, cultural, military, and social issues since North Macedonia gained its independence from Yugoslavia in 1991. The United States formally recognized North Macedonia in 1994, and the countries established full diplomatic relations in 1995. Following a civil conflict between the country’s ethnic Albanian minority and the Macedonian majority in 2001, the United States and the EU mediated a resolution and supported efforts to agree to a peaceful, political solution to the crisis, known as the Ohrid Framework Agreement. Figure 1 shows North Macedonia’s location in southeastern Europe. (Figure 1 sidebar: transit corridor from Western and Central Europe to the Aegean Sea; population of 2,118,945, the 146th largest in the world (2018); gross domestic product of $31.03 billion in 2017; unemployment rate of 22.4 percent (2017 est.); ethnic groups: Macedonian, 64.2 percent; Albanian, 25.2 percent; Turkish, 3.9 percent; Romani, 2.7 percent; Serb, 1.8 percent; other, 2.2 percent (2002 est.).) In 2011, USAID and State assessed that North Macedonia’s conservative party, the Internal Macedonian Revolutionary Organization–Democratic Party for Macedonian National Unity (known as VMRO-DPMNE, or VMRO) was consolidating political power when it became the ruling party in 2006. USAID and State found that government control over North Macedonia’s judiciary, Parliament, media, civil society, and local government was increasing. In December 2012, security personnel ejected members of the Social Democratic Union of Macedonia (SDSM), the main opposition party, from the Parliament building, along with journalists who had been observing the session, after SDSM members protested VMRO’s proposed budget. SDSM boycotted Parliament for approximately 2 months after this incident but returned in March 2013, when the parties reached an agreement.
In May 2014, SDSM boycotted Parliament again, accusing VMRO of having violated the country’s electoral code in April 2014 elections, in which VMRO retained its parliamentary majority. In December 2014, USAID concluded that inadequate mechanisms for competition and political accountability represented the primary democracy and governance problems in North Macedonia. USAID noted, among other things, that the ruling party had deployed public resources and control of the media to limit competition; captured executive, legislative, and judicial institutions; and put pressure on, and excluded, civil society. North Macedonia’s 2015 Political Crisis In February 2015, the leader of SDSM began releasing phone conversations allegedly recorded by the government’s counterintelligence service that revealed widespread corruption and state capture by the ruling party, VMRO, triggering a political crisis. (See fig. 2 for a timeline of the crisis.) Street protests followed these leaks. The four main political parties invited the United States and EU to facilitate negotiations to broker a peaceful resolution to the crisis, known as the Przino Agreement, in June 2015. The parties agreed to, among other things, hold free and fair elections by the end of April 2016. After two failed attempts to hold elections in early 2016, the United States and EU convened North Macedonia’s political parties for another round of negotiations in the summer of 2016. The parties reached agreement on a number of key reforms and set the conditions for parliamentary elections by the end of 2016. These elections took place on December 11, 2016, without a clear majority winner. Although SDSM leader Zoran Zaev formed a majority coalition in February 2017, then-President Ivanov refused to give Zaev the mandate to form a new government until May 2017, following a violent storming of Parliament by hundreds of protesters in April. 
In May 2017, President Ivanov authorized SDSM to form a government with a coalition of ethnic Albanian parties. The new coalition government expressed support for North Macedonia’s accession to the EU and membership in the North Atlantic Treaty Organization (NATO). On February 12, 2019, the Republic of Macedonia formally changed its name to the Republic of North Macedonia, ending a longstanding dispute over its name with Greece, which had for years exercised its veto power in NATO to block North Macedonia’s membership (see the text box for details of North Macedonia’s NATO aspirations and name dispute with Greece). On February 6, 2019, NATO members signed an accession protocol with North Macedonia, paving the way for North Macedonia to become the 30th member of NATO. The EU states also opened the path to potential EU accession negotiations with North Macedonia in June 2019, contingent on the country’s full implementation of its agreement with Greece and its demonstrated progress in implementing EU-recommended reforms. However, the EU postponed the decision until no later than October 2019. On February 15, 2019, the U.S. government recognized North Macedonia’s name change. North Macedonia’s NATO Aspirations and Name Dispute with Greece In 2008, having determined that North Macedonia met North Atlantic Treaty Organization (NATO) membership criteria, NATO allies decided that North Macedonia would be invited to join NATO as soon as North Macedonia and Greece, a NATO member, resolved a dispute regarding North Macedonia’s name. A brief timeline of this dispute follows.
1991: The “Republic of Macedonia” declared its independence from the former Yugoslavia. Greece objected to this name, viewing “Macedonia” as representing territorial claims against Greece, which has a northern province by the same name. Because Greece has veto power in NATO, it was able to prevent the Republic of Macedonia from joining the organization.
1995: Greece and the Republic of Macedonia reached an interim accord in which Greece agreed not to block applications by the Republic of Macedonia to international organizations if made under the name “Former Yugoslav Republic of Macedonia.”
2008: At a NATO Summit in Bucharest, Greece blocked the Republic of Macedonia’s bid to join NATO.
Dec. 2011: The International Court of Justice ruled that Greece had been wrong to block the Republic of Macedonia’s bid to enter NATO in 2008, but the decision did not affect NATO’s consensus-based decision-making process.
June 12, 2018: The foreign ministers of Greece and the Republic of Macedonia signed the Prespa agreement, whereby the Republic of Macedonia would change its name to the Republic of North Macedonia, Greece would no longer object to North Macedonia’s Euro-Atlantic integration, and both countries would promise to respect existing borders.
Sept. 30, 2018: The Republic of Macedonia held a referendum on changing its name to the Republic of North Macedonia, with nearly 92 percent of votes in favor of the change. Overall turnout for the referendum was about 37 percent, as opponents of the name change boycotted the referendum.
Oct. 19, 2018: A two-thirds majority in North Macedonia’s Parliament voted in favor of the name change.
Jan. 11, 2019: North Macedonia’s Parliament approved a constitutional amendment that renamed the country to the Republic of North Macedonia.
Jan. 25, 2019: The Greek Parliament voted to approve the deal outlined in the Prespa agreement.
Feb. 6, 2019: NATO’s 29 members signed an accession protocol with North Macedonia, paving the way for the country to become the 30th member of the alliance.
Feb. 8, 2019: Greece became the first NATO member to ratify the accession protocol.
Feb. 12, 2019: The Republic of Macedonia formally changed its name to the Republic of North Macedonia.
Feb. 15, 2019: The U.S. government recognized the Prespa Agreement’s entry into force and North Macedonia’s name change.
Overview of U.S. Democracy Assistance According to State, democracy assistance seeks to advance freedom and dignity by assisting governments and citizens to establish, consolidate, and protect democratic institutions, processes, and values. These components include participatory and accountable governance, rule of law, authentic political competition, civil society, human rights, and the free flow of information. Democracy assistance falls into six program areas—Rule of Law, Good Governance, Political Competition and Consensus-Building, Civil Society, Independent Media and Free Flow of Information, and Human Rights—each with different program elements. See appendix V for descriptions of democracy program areas and program elements. The U.S. government provides democracy assistance through multiple bureaus and offices in USAID, State, and NED. For a list of these agencies’ roles and responsibilities related to democracy assistance overseas, see table 1. Agency Operational Policies for Assistance Federal laws governing agencies’ use of contracts and grants seek to promote discipline in the selection and use of procurement contracts, grant agreements, and cooperative agreements; maximize competition in making procurement contracts; and encourage competition in making grants and cooperative agreements. USAID’s operational policy, the Automated Directives System, incorporates these requirements into agency guidance. Thus, in selecting recipients of democracy assistance, agency staff are required to guarantee the integrity of the competitive award process by ensuring overall fairness and considering all eligible applications for an award. Strategic Objectives for Democracy Assistance in North Macedonia Since North Macedonia’s separation from Yugoslavia in 1991, the United States has provided democracy assistance to support North Macedonia’s Euro-Atlantic integration and the development of prosperous and democratic institutions. 
This assistance has focused on promoting rule of law, political processes, citizen engagement, and free media. In light of North Macedonia's 2015 political crisis, as well as democratic backsliding observed in the years before the crisis, USAID narrowed its assistance goals for the country to focus on more inclusive citizen engagement in civic life, political processes, and the free flow of information to support better functioning checks on executive authority. The USAID mission in North Macedonia's strategic plan for 2011 through 2015 identified three primary objectives of U.S. democracy assistance in North Macedonia:

Promote greater checks and balances in democratic processes by empowering local governments, promoting greater equilibrium among the branches of government at the national level, and promoting political accountability.

Develop a basic education system that prepares youth for a modern economy and stable democracy by improving students' basic skills, expanding workforce skills, and enhancing ethnic integration in the education sector.

Increase job-creating private-sector growth in targeted sectors by improving the country's business environment in critical areas and strengthening key private-sector capacities.

Additionally, USAID and State relied on a broader strategic framework, the integrated country strategy, when developing democracy projects in North Macedonia. This interagency, multiyear, overarching strategy outlines U.S. policy priorities and objectives for North Macedonia. Its objectives include improving North Macedonia's democratic and civil society environment to improve the country's prospects for joining NATO and for completing accession negotiations with the EU.

U.S. Agencies Obligated More Than $45 Million for Assistance for North Macedonia, but Total State Department Obligations Cannot Be Reliably Reported

U.S.
government agencies obligated more than $45 million in democracy assistance funding for North Macedonia in fiscal years 2012 through 2017, according to agency award documents and data (see table 2). This assistance was provided to support U.S. strategic objectives for North Macedonia, including promoting the rule of law, political processes, citizen engagement, and free media. USAID obligated approximately $38 million, and NED obligated approximately $4.2 million. Additionally, the Public Affairs Section of the U.S. Embassy in Skopje provided about $3.7 million in assistance. However, we are unable to report total State obligations for democracy assistance for North Macedonia because of uncertainty about the reliability of award data from State’s Bureau of International Narcotics and Law Enforcement Affairs (INL). In addition, State’s Bureau of Democracy, Human Rights, and Labor (DRL) provided democracy assistance in North Macedonia solely through regional grants and did not specify which obligated funds were provided for democracy assistance in North Macedonia. See appendixes II through IV for a full list of USAID, NED, and State awards for democracy assistance in North Macedonia in fiscal years 2012 through 2017. USAID Obligated Approximately $38 Million for Democracy Assistance Program Areas USAID provided about $38 million in democracy assistance for North Macedonia in fiscal years 2012 through 2017. As table 3 shows, the majority of USAID funding—approximately $17 million—supported projects in the civil society program area, while more than $7 million supported political competition and consensus building. Several USAID bureaus and offices provided democracy assistance in North Macedonia during that period. The Bureau for Democracy, Conflict, and Humanitarian Assistance and the Bureau for Europe and Eurasia provided such assistance through contracts, grants, and cooperative agreements. According to agency documents, USAID supported U.S. 
foreign policy in North Macedonia by promoting democracy and respect for the rule of law and human rights, through activities such as supporting civil society organizations and developing the capacity of independent media outlets in the country. USAID also promoted political competition and accountability by working with political parties and state institutions to enable an environment for free and fair elections. In addition, USAID's Office of Transition Initiatives (OTI) provided short-term assistance to groups in the country. OTI established an office in North Macedonia in September 2015 to support reform processes outlined in the Przino Agreement. According to OTI documents, OTI supports U.S. foreign policy objectives by promoting stability, peace, and democracy through fast, flexible, short-term assistance targeted to key political transition and stabilization needs. The office works with civil society organizations, media groups, and government institutions to increase access to reliable information, promote free and open civic discourse, and support democratic reforms. In North Macedonia, OTI funded initiatives such as a televised debate series that presented civil dialogue and diverse viewpoints on issues affecting citizens of North Macedonia. OTI grants have also supported digital media initiatives and civic engagement projects. USAID assistance supported initiatives in a range of democracy program areas. Table 4 shows examples of USAID projects across different program areas, some of which are related to democracy assistance.

NED Obligated Approximately $4.2 Million for Democracy Assistance Activities

NED awarded 72 grants totaling nearly $4.2 million in North Macedonia in fiscal years 2012 through 2017.
Of these, six grants, totaling almost $1.7 million, were awarded to two of NED’s core institutes—the National Democratic Institute and the Center for International Private Enterprise— while 66 grants, totaling about $2.6 million, were awarded to other organizations. In addition, NED awarded 61 grants totaling more than $17.1 million for regional programs that included North Macedonia. NED does not disaggregate cost data by individual country due to the nature of the Balkan regional programs NED supports. Thus, we are unable to report the amounts NED provided in North Macedonia through regional programs during the period of our review. After the onset of the political crisis in 2015, NED focused its democracy assistance in North Macedonia on three program areas: promoting good governance, supporting independent media, and fostering positive interethnic relations. NED grants supported a range of initiatives, including projects to improve investigative reporting on democratic reforms and rule-of-law matters, and to encourage youth leadership and activism. NED’s funding to the National Democratic Institute and the Center for International Private Enterprise supported a range of activities in North Macedonia. The institute worked with the country’s Parliament to improve its management and organization of the legislative process by, among other things, assisting Parliament in reviewing its legislative and oversight procedures. Other National Democratic Institute initiatives included encouraging participation by various groups in the democratic process, including the Roma population, women, and civil society organizations. The Center for International Private Enterprise received funding for one grant devoted to developing youth leadership. State Obligated At Least $3.7 Million for Democracy Assistance, but Some Project-Level Funding Could Not Be Determined Several State offices—U.S. 
Embassy Skopje, INL, and DRL—provided funding for democracy assistance in North Macedonia, but only the funding provided by the embassy can be reliably reported. The embassy’s Public Affairs Section provided at least $3.7 million in democracy assistance in North Macedonia in fiscal years 2012 through 2017. INL was unable to provide reliable data on obligations on its awards in North Macedonia. DRL obligated more than $2 million to support democracy assistance activities at the regional level but due to the regional nature of its projects, was unable to provide country-level breakdowns of obligations. U.S. Embassy Skopje Provided Democracy Assistance Grants to Organizations in North Macedonia In fiscal years 2012 through 2017, Embassy Skopje’s Public Affairs Section obligated approximately $3.7 million in democracy assistance grants to organizations in North Macedonia. According to State officials, the embassy works with the Coordinator of U.S. Assistance for Europe and Eurasia to allocate democracy assistance and helps align assistance activities with the U.S. strategic goals for North Macedonia. The embassy’s Public Affairs Section also provides democracy assistance through other means, including media training programs, youth engagement projects, speaker programs, and the Democracy Commission Small Grants Program. The embassy granted $1.8 million for 91 grants through the Democracy Commission Small Grants Program in fiscal years 2012 through 2017. According to the embassy, grants through this program, which cannot exceed $24,000, support nongovernmental organizations’ efforts to promote the rule of law, independent media, interethnic community building, the empowerment of women and youth, human rights, and the institutionalization of open and pluralistic democratic political processes. Examples of awards for Democracy Commission grant–funded activities include the following: Women’s Rights Center ($22,900). 
This award funded a program to strengthen the capacities of organizations that are working with women victims of domestic violence.

Civil Lyceum Project ($17,830). This project aimed to mobilize youth in Skopje to become more involved in the civil society sector and to help create young leaders who understand the value of civic engagement and advance democratic values.

Way Out ($7,858). This award funded the maintenance and development of the online version of a student magazine.

The remainder of the embassy's Public Affairs Section awards for assistance in North Macedonia supported activities such as youth engagement projects, speakers, and media training programs, which included short-term trips for journalists from North Macedonia to receive training in the United States.

INL Project-Level Funding Data Are Unreliable, but INL Reported Bulk Obligations for Democracy Assistance in North Macedonia

INL provided democracy assistance to organizations in North Macedonia in fiscal years 2012 through 2017. INL was unable to provide reliable data on project-level obligations; however, it reported bulk obligations for democracy assistance projects that supported efforts to reform North Macedonia's criminal justice system to meet rule-of-law benchmarks for Euro-Atlantic integration. INL's assistance in North Macedonia focused on three primary areas: developing the country's criminal justice system, developing legal professionals' skills, and professionalizing the police. According to agency officials, this assistance is intended to strengthen North Macedonia's justice sector and independent institutions. Specific INL activities included assisting with revisions to the criminal procedure code to promote a more adversarial justice system, providing technical advisors and equipment to the Special Prosecutor's Office, and promoting accountable policing efforts by providing training to local police on crime scene management.
In December 2017, we reported that INL funding data for democracy assistance projects were unreliable and we recommended that State identify and address factors that affect the reliability of its democracy assistance data. State concurred with this recommendation. As of July 2019, INL reported continued efforts to improve data quality and reliability, including ensuring that current and future transactions would maintain coding integrity. However, officials stated that, because of missing codes or miscoded items, they were unable to provide reliable data on obligations for INL awards for democracy assistance projects in North Macedonia for fiscal years 2012 through 2017. Although we determined that data for specific INL democracy awards were unreliable, INL reported providing bilateral assistance of approximately $14.2 million in North Macedonia in fiscal years 2012 through 2017, including $6.9 million for democracy assistance. However, we did not independently verify that INL provided this amount of bilateral assistance. DRL Funded Regional Democracy Assistance Awards That Included North Macedonia DRL funded four awards that benefited North Macedonia in fiscal years 2012 through 2017. However, DRL awarded this assistance at the regional level and does not track country-level obligations for North Macedonia. One regional award with obligations of roughly $300,000 supported a project focusing on Roma populations in Bulgaria, North Macedonia, Romania, and Serbia. A second regional award provided more than $2 million for a project promoting the rule of law in the Balkans. The two remaining DRL awards provided $25,000 to organizations supporting local civil society organizations working to promote human rights. 
USAID Generally Followed Operational Policy in Selecting Recipients of Democracy Assistance in North Macedonia Our review of 13 USAID grants and cooperative agreements for democracy assistance—representing roughly half of USAID obligations in North Macedonia in fiscal years 2012 through 2017—found that in selecting recipients, the agency generally followed operational policies intended to ensure a fair and transparent selection process. (See table 5 for a list of the awards in our sample.) We found that staff at the USAID mission in North Macedonia generally evaluated applicants against the merit review criteria stated in public notices. We also found that USAID considered and recorded the strengths and weaknesses of applicants in selection committee memorandums for 10 of the 13 awards in our sample. For three awards originating from the same public notice, we were unable to determine, on the basis of available documentation, whether USAID considered the strengths and weaknesses of all applicants. Finally, we found that USAID documented the review procedures it used to assess applicants in selection committee memorandums. USAID Considered Published Merit Review Criteria in Selecting Recipients of Assistance USAID’s selection committee considered merit review criteria that were consistent with those included in the agency’s public notices for 10 of the 13 awards for democracy assistance in North Macedonia that we reviewed. USAID’s process for selecting recipients of assistance for competitive awards requires announcing opportunities, reviewing applications, and making award decisions on the basis of published merit review criteria. USAID announces a grant opportunity by developing a notice of funding opportunity. Merit review criteria are developed by the USAID staff and reflect the agency’s strategic priorities for democracy assistance. 
After interested parties have submitted applications, a selection committee, also known as a technical evaluation committee, is appointed to review applications. All 13 awards in our sample included merit review criteria in public notices during the concept paper phase of awards, while 10 of the awards included merit review criteria for the full application phase. Many of the awards required selection committees to consider some of the same merit review criteria in assessing applicants. Examples of commonly applied criteria include the following:

Technical approach. Reviewers are to assess the extent to which an applicant's proposed activity is clear, logical, and technically sound and meets the objectives of the funding outlined in the public notice.

Management plan and key personnel. Reviewers are to assess the extent to which an applicant considered staffing, roles and responsibilities, and other management issues for their proposed activity.

Organizational capacity and past performance. Reviewers are to assess the extent to which the applicant demonstrated the technical and managerial resources and expertise to achieve their program objectives. Reviewers are also to assess the extent to which the applicant demonstrated technical and managerial resources and expertise in past programs and performed satisfactorily in similar programs executed in recent years.

We found that in reviewing the 13 awards in our sample, USAID generally applied the criteria published for each award. Six of the 13 awards in our sample were two-phased awards, for which the mission required potential applicants to first submit an executive summary or concept paper for their proposed activity. For these awards, the mission published separate merit review criteria for concept papers and full applications, and selection committees assessed each type of submission against the relevant set of criteria.
The selection committee memorandums for three awards showed that these merit review criteria were consistent with the criteria outlined in the public notices for each award. Specifically, in the first phase of the award process, staff at the USAID mission in North Macedonia applied the published criteria for concept papers in reviewing the submitted papers and selected those that best met the criteria. In the second phase for three awards, USAID solicited applications from the selected applicants and applied the published criteria for full applications in reviewing the submitted applications. In the case of three awards that originated from the same public notice, however, the notice did not include the merit review criteria the selection committee would use to evaluate full applications. For the remaining seven one-phased awards in our sample, the selection committee memorandums showed that USAID applied the criteria published in the award solicitations in reviewing the applications that were submitted, consistent with USAID's operational policies.

USAID Generally Assessed Strengths and Weaknesses of Applicants for Democracy Awards

We found that USAID officials generally assessed applicants' strengths and weaknesses when reviewing applications for awards for democracy assistance in North Macedonia. USAID operational policy requires selection committees to evaluate the strengths and weaknesses of each applicant for an award relative to the merit review criteria. The committee then prepares a written selection memorandum recording its assessments, which is then sent to the agreement officer. For the 13 awards in our sample, selection committee memorandums show that officials generally considered and recorded their assessments of applicants' strengths and weaknesses against the criteria outlined in the public notices.
For example, in considering the applicants for one award in our sample, the selection committee assessed the strengths and weaknesses of applicants’ technical approaches by looking at the logical connection between their activities and stated objectives, their plans for community outreach, and their awareness of potential problems that might arise over the course of their projects. The committee also assessed applicants’ strengths and weaknesses with regard to management plans and key personnel by considering, among other things, applicants’ plans to train staff, their knowledge of the stakeholders they planned to engage, and the relevant experience of the organizations’ leaders. In addition, the committee assessed applicants’ strengths and weaknesses with regard to organizational capacity and past performance, primarily by examining whether applicants had successfully managed projects of similar magnitude, scope, and sensitivity in recent years. For this award, the selection committee provided an overall score for each criterion based on the numerical scoring outlined in the award’s public notice and ultimately recommended the top-scoring applicant to the agreement officer. For three of the six two-phased awards we reviewed, selection committee officials considered and recorded their assessments of applicants’ concept papers as well as the full applications they received. For three two-phased awards that originated from the same public notice, we could not determine, on the basis of available documentation, whether the selection committee assessed the strengths and weaknesses of applicants relative to the merit review criteria. USAID Recorded Review Procedures, Consistent with Its Operational Policy We found that USAID documented its review procedures, consistent with USAID policy. USAID operational policy requires that the selection committee include in its review documentation a discussion of its procedures for reviewing awards. 
For all 13 awards, the selection committee memorandums included a discussion of the review procedures that the committee used to assess applicants. These review procedures included actions such as the following:

The establishment of the selection committee, including its purpose and composition

A requirement for selection committee members to sign a certificate regarding nondisclosure, conflict of interest, or rules of conduct

Individual reviews of the applications by each selection committee member

A review of the rating system the committee used to assess applications

A joint meeting to discuss individual reviews and ratings of applications, resulting in consensus among selection committee members about the strengths and weaknesses of each application

For the two-phased awards in our sample, the selection committee memorandums include documentation of review procedures for both the concept paper and full application phases of awards. The selection committee memorandum for the full application phase of these awards included other actions that the selection committee took, such as the following:

A summary of the committee's procedures and results in the concept paper phase

An evaluation of the proposals from applicants who were invited to submit full applications

A discussion of the programmatic weaknesses that USAID asked applicants to address before submitting their full applications

We are sending copies of this report to the USAID Administrator, the Secretary of State, and the President of NED. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3149 or gootnickd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VII.

Appendix I: Objectives, Scope, and Methodology

Our objectives were to examine (1) U.S.
funding for democracy assistance in North Macedonia in fiscal years 2012 through 2017 and (2) the extent to which the U.S. Agency for International Development (USAID) adhered to relevant operational policies in selecting recipients of democracy assistance in North Macedonia. To identify the United States’ strategic objectives and goals for providing democracy assistance in North Macedonia, we reviewed USAID and Department of State (State) strategic documents and interviewed cognizant USAID and State officials in Washington, D.C. To examine U.S. funding for democracy assistance in North Macedonia, we analyzed award data from USAID, State, and the National Endowment for Democracy (NED) for fiscal years 2012 through 2017, the most recent 5-year period for which these data were available. To determine the data’s reliability, we interviewed agency officials and reviewed relevant documentation. We determined that USAID’s and NED’s data were sufficiently reliable for the purposes of our reporting objectives. We further determined that State’s data on the U.S. Embassy in Skopje’s Public Affairs Section awards were reliable for these purposes. However, on the basis of interviews with State officials, our review of their data, and our prior work, we determined that the data maintained by State’s Bureau of International Narcotics and Law Enforcement Affairs (INL) could not be reliably reported. We determined that data provided by State’s Bureau of Democracy, Human Rights, and Labor Affairs (DRL) were reliable; however, we could not determine what portion of DRL funding went only to North Macedonia, because DRL made regional awards during this period that benefited several Balkan countries. Therefore, we report State obligations as approximations for awards for which we had more reliable data. To identify the recipients of democracy assistance in North Macedonia and describe the process through which the U.S. 
government grants such assistance, we reviewed award data, relevant award documents, and bilateral agreements and other communications between the United States and North Macedonia regarding this assistance. We interviewed USAID, State, and NED officials in Washington, D.C., who oversee democracy assistance in North Macedonia regarding U.S. funding for such assistance. We also interviewed representatives of organizations that implement this assistance that have offices in Washington, D.C. In addition, during audit work in Skopje, North Macedonia, we interviewed USAID and State officials who manage democracy assistance. We also met with officials from the government of North Macedonia, including the Minister of Defense and members of Parliament, the State Election Commission, and the Agency for Audio and Audiovisual Services, to determine the types of activities the U.S. government supported during the period of our review. In addition, we conducted individual and group interviews with representatives of 41 implementing partners of USAID, State, and NED in Skopje who received funding during the period of our review. To assess the extent to which USAID officials followed operational policies in selecting recipients of democracy assistance, we analyzed award data and documentation for a sample of awards made between fiscal years 2012 through 2017. We excluded from our sample any contracts and other awards for which no public notice was issued, because these awards were not openly competed. We further excluded grants under contract arrangements that USAID entered into with local partners in North Macedonia, because these awards also were not openly competed. Such awards include those made by USAID’s Office of Transition Initiatives and under the Consortium for Elections and Political Process Strengthening process. 
Our sample comprised the 13 largest- value grants and cooperative agreements that USAID made for North Macedonia in fiscal years 2012 through 2017, constituting 46 percent of all USAID obligations in North Macedonia during this period. We analyzed USAID operational policies contained in the Automated Directives System (ADS) and other USAID policy documents outlining the agency’s strategic plan and assistance priorities for North Macedonia. We analyzed relevant documents for the awards in our sample, including the notices of funding opportunity and selection committee memorandums, and we assessed the extent to which these documents showed that USAID had met the requirements of its operational policy outlined in the ADS. In particular, for each award, we examined the extent to which the merit review criteria published in the notice of funding opportunity matched the criteria the selection committee used, the selection committee assessed the strengths and weaknesses of the submitted applications and recorded these assessments, and the selection committee included a discussion of its review procedures in its review documentation. Finally, we interviewed USAID officials in Washington and Skopje regarding USAID’s operational policies in fiscal years 2012 through 2017 as well as its process for selecting recipients of democracy assistance. We conducted this performance audit from May 2017 to October 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: U.S. Agency for International Development Democracy Assistance in North Macedonia Table 6 lists the U.S. 
Agency for International Development’s (USAID) awards for democracy assistance in North Macedonia in fiscal years 2012 through 2017. Appendix III: National Endowment for Democracy Assistance in North Macedonia Table 7 lists the National Endowment for Democracy’s (NED) democracy assistance awards in North Macedonia in fiscal years 2012 through 2017. Appendix IV: Department of State Democracy Assistance in North Macedonia Tables 8 and 9 list the Department of State’s (State) awards for democracy assistance to North Macedonia in fiscal years 2012 through 2017. These awards were provided by U.S. Embassy Skopje through its Public Affairs Section. Table 8 shows the grants that the embassy’s Public Affairs Section awarded through the Democracy Commission Small Grants Program, and table 9 shows other, non–Democracy Commission grants awarded by the Public Affairs Section. Appendix V: Democracy Assistance Program Areas and Program Elements Table 10 provides an overview of the program areas and program elements that fall into democracy, human rights, and governance assistance according to the Department of State (State). U.S. foreign assistance is categorized through a system called the Standardized Program Structure and Definitions, which comprises broadly agreed-on definitions for foreign assistance programs and provides a common language to describe programs. According to this system, democracy assistance includes the following six program areas. Appendix VI: Comments from the U.S. Agency for International Development Appendix VII: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Rob Ball (Assistant Director), Cheryl Goodman (Assistant Director), Rachel Dunsmoor (Analyst-in- Charge), Parul Aggarwal, R. Gifford Howland, Ashley Alley, Justin Fisher, Christopher Keblitis, and Reid Lowe made key contributions to this report.
Why GAO Did This Study

Since fiscal year 1991, the United States has provided over a billion dollars in assistance to North Macedonia. In recent years, USAID and State have expressed concern about an erosion of democracy in the country. These concerns were heightened by the onset of a political crisis in February 2015, when the then-opposition party released phone conversations revealing alleged corruption in the ruling party. This crisis prompted the four major political parties to invite the United States and the European Union to help broker an agreement. The parties later agreed to hold early parliamentary elections in December 2016. Though the opposition party formed a majority coalition, the President refused to give the opposition leader a mandate to form a new government until May 2017, after protesters violently attacked North Macedonia's Parliament. This report examines (1) U.S. government funding for democracy assistance in North Macedonia and (2) the extent to which USAID adhered to relevant policies in selecting recipients of democracy assistance in North Macedonia. GAO analyzed U.S. government data and documents and interviewed U.S. officials in Washington, D.C., and in Skopje, North Macedonia.

What GAO Found

The U.S. government provided more than $45 million for democracy assistance in North Macedonia through the U.S. Agency for International Development (USAID), National Endowment for Democracy (NED), and U.S. Department of State (State) in fiscal years 2012 through 2017. During this 5-year period—the most recent for which funding data were available—USAID obligated about $38 million to support rule of law and human rights, governance, political competition and consensus building, civil society, and an independent media and free flow of information. NED—a nongovernmental organization funded largely through appropriated funds—provided $4.2 million for activities such as training in investigative reporting and rule of law. The U.S.
embassy in Skopje obligated at least $3.7 million for rule of law and human rights, governance, and civil society. State's Bureau of International Narcotics and Law Enforcement Affairs (INL) and Bureau of Democracy, Human Rights, and Labor (DRL) also provided funding for democracy initiatives. However, GAO is unable to report State's total obligations, because INL's data were unreliable and because DRL, due to the regional nature of its projects, does not track country-level obligations for North Macedonia. Legend: USAID = U.S. Agency for International Development, NED = National Endowment for Democracy, State = U.S. Department of State. Note: Only obligations from the Public Affairs Section of the U.S. Embassy in Skopje are shown for State. State's other funding data were either unreliable or not tracked at the country level. GAO's review of 13 USAID democracy assistance awards, representing roughly half of USAID obligations in fiscal years 2012 through 2017, found that the agency generally complied with operational policy intended to ensure a fair and transparent selection process. USAID policy requires officials to consider merit review criteria specified in public notices and to assess applicants against these criteria. GAO found that the merit review criteria USAID included in public notices were generally consistent with the criteria that selection committees used to evaluate applicants. GAO also found that selection committees generally discussed the relative strengths and weaknesses of award applications and recorded these discussions in selection memorandums, consistent with USAID policy. What GAO Recommends In prior work, GAO recommended that State identify and address factors affecting the reliability of INL's democracy assistance data. State concurred and, in July 2019, reported that INL was continuing efforts to improve data reliability. GAO will continue to monitor State's efforts to ensure this recommendation is fully implemented.
Background Reducing transportation-related fatalities and serious injuries has consistently been DOT’s top priority. Traffic fatalities and serious injuries may result from unsafe driver behaviors, such as speeding and alcohol- or drug-impaired driving, or from the design or condition of the road and its accompanying infrastructure. Within DOT, both NHTSA and FHWA are charged with reducing fatalities and serious injuries on the nation’s highways and, respectively, provide grant funding to states to mitigate the behavioral and infrastructure-related causes of vehicular crashes. NHTSA provided over $600 million in fiscal year 2018 to state highway safety offices through the Highway Safety Grants Program for activities designed to improve traffic safety by modifying driver behavior. For example, states may use NHTSA grant funding for efforts to increase seatbelt use, or to reduce impaired driving. FHWA provided about $2.6 billion in fiscal year 2018 to state departments of transportation through the Highway Safety Improvement Program (HSIP) for projects to improve safety on all public roads. HSIP funds can be used for infrastructure projects, such as rumble strips, and other projects such as road safety audits, safety planning, and improving safety data. States are allowed to transfer up to 50 percent of their HSIP safety apportionment made available each fiscal year to the other core FHWA highway programs. For example, from 2013 through 2018, 24 states transferred HSIP safety funding totaling over $1 billion to other core programs and three states transferred approximately $600 million into their HSIP safety program from other core programs. Over the last decade, the federal government has taken steps to move toward a performance-based framework for traffic safety funding. Historically, most federal surface transportation funds were distributed through formulas that often had no relationship to outcomes or grantees’ performance. 
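The 50 percent transfer cap described above amounts to a simple arithmetic check. The sketch below is illustrative only, assuming a flat cap on the annual apportionment; the function name and interface are hypothetical and the actual transfer rules involve additional program details not modeled here.

```python
def hsip_transfer_allowed(apportionment: float, transfer: float,
                          cap_fraction: float = 0.50) -> bool:
    """Check a proposed HSIP transfer against the 50 percent cap.

    Hypothetical helper for illustration; the 50 percent figure is the
    statutory cap described in the report, but this is not an FHWA tool.
    """
    if apportionment < 0 or transfer < 0:
        raise ValueError("amounts must be non-negative")
    return transfer <= cap_fraction * apportionment
```

For example, a state with a $100 million apportionment could transfer at most $50 million to other core programs in that fiscal year.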
In 2008, we recommended that Congress consider integrating performance-based principles into surface transportation programs such as NHTSA’s Highway Safety Grants Program and FHWA’s HSIP to improve performance and accountability in states’ use of federal funds. In particular, we noted that tracking specific outcomes that are clearly linked to program goals can provide a strong foundation for holding grant recipients responsible for achieving federal objectives and measuring overall program performance. The Moving Ahead for Progress in the 21st Century Act, enacted in 2012, formally required the Secretary of the Department of Transportation to, among other things, establish performance measures for states to use to assess fatalities and serious injuries to ensure further accountability for federal traffic safety funding provided to states. See table 1 for a complete list of mandatory performance measures. States are also required to establish targets annually for each of the performance measures and measure progress toward these targets. NHTSA first required states to develop targets for their performance measures as part of their planning for fiscal year 2014, and FHWA first required states to establish targets for their performance measures set in 2017 for calendar year 2018. Starting with these targets, state highway safety offices and departments of transportation were required by both NHTSA and FHWA to set identical targets for the three common performance measures in both frameworks. Both NHTSA’s and FHWA’s frameworks provide flexibility to states in how they may establish targets and emphasize using data to develop realistic and achievable targets rather than aspirational ones that reflect a long-term vision for future performance. 
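The flexibility in target setting described above can be sketched as a comparison of a state's target against its 5-year historical average. This is a minimal illustration, assuming a 2012 through 2016 baseline like the one the report uses; the function name and labels are hypothetical, not part of NHTSA's or FHWA's frameworks.

```python
from statistics import mean

def classify_target(target: float, history: list[float]) -> str:
    """Label a fatality target relative to a state's 5-year average.

    Illustrative only: mirrors the report's grouping of targets as
    increasing, decreasing, or unchanged against a historical baseline.
    """
    if len(history) != 5:
        raise ValueError("expected five years of history")
    baseline = mean(history)
    if target > baseline:
        return "increasing"
    if target < baseline:
        return "decreasing"
    return "unchanged"
```

A state averaging 350 fatalities over 2012 through 2016 that sets a 2017 target of 360 would be classified as setting an increasing target, even though both choices are permitted under the frameworks.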
Because the frameworks do not require a specific reduction in fatalities or serious injuries, states may set targets that are higher or lower than their historical averages depending on state-specific factors, such as population increases or economic conditions. As a result, targets may reflect either an anticipated increase or decrease in fatalities or serious injuries. NHTSA and FHWA require states to submit annual plans and reports to establish targets and describe their use of federal funds to improve safety and the results they have achieved relative to their targets. (See table 2.) NHTSA requires that states submit an annual Highway Safety Plan to, among other things, set targets, identify projects they will implement in the upcoming fiscal year, and describe how they will use funds from the Highway Safety Grants Program. States are also required to submit an Annual Report to NHTSA that includes an assessment of the state’s progress in achieving safety performance targets in the previous fiscal year. States are required to submit an HSIP report to FHWA that describes, among other things, how they have used federal HSIP funding for highway safety improvement projects during the prior reporting period as well as performance targets for the upcoming calendar year. In addition to the annual requirements, FHWA requires a Strategic Highway Safety Plan from states every 5 years that identifies a state’s key safety needs and long-term goals, and guides investment decisions to reduce fatalities and serious injuries. NHTSA and FHWA rely on states and localities to collect and report fatality and serious injury data used in the performance framework. In addition to providing information through annual plans and reports, states report traffic fatalities to NHTSA’s FARS database, which tracks all fatal traffic crashes nationwide. When a fatal crash occurs, a state or local police officer completes a crash report form unique to each state. 
These forms can include a variety of data fields, such as the time of the crash, weather conditions, and the number of persons killed or injured. FARS analysts—state employees who are trained by NHTSA’s data validation and training contractors—use the data in crash report forms to compile a record of the fatal crash. However, NHTSA’s collection and validation of these data may take up to 24 months following the end of a calendar year before the data are finalized. FARS also contains serious injury data associated with fatal crashes, though neither NHTSA nor FHWA maintains a database of all serious injuries. Rather, the agencies rely on states and localities to collect and store records of serious injuries resulting from traffic crashes and report this information to them each year. Based on data the states and localities provide, NHTSA estimates the number of total injuries resulting from crashes to track overall national trends. States’ Overall Achievement of Fatality and Serious Injury Targets Is Unclear due to Incomplete Reporting and Data Limitations States Did Not Achieve Most of Their NHTSA Fatality Targets from 2014 through 2017, and NHTSA and States Do Not Fully Report Progress and Communicate Results From 2014 through 2017, states did not achieve about two-thirds of the targets they set for the required fatality performance measures, according to our analysis of state-reported NHTSA data. In addition, for a majority of the fatality performance measures required by NHTSA, these data show that the number of targets states achieved generally decreased from 2014 through 2017. (See table 3.) Over this same time, fatalities increased nationwide by 13 percent, from about 33,000 in 2014 to over 37,000 in 2017. NHTSA officials said that fewer states achieved their targets over this time because fatalities increased nationwide over the same period due to increases in vehicle miles traveled and corresponding exposure to driving-related risks. 
Officials from the 10 states we selected said that achieving targets often depends on factors outside of their control, such as demographic and economic factors, as well as changes to state laws. Demographic factors. Officials from eight of the 10 selected states said that demographic factors such as increases or decreases in population affect traffic safety. For example, officials from one state said that when companies expanded in the state, the population increased rapidly and the economy improved and led to more driving. Officials from another state noted that the increasing population in the state’s urban areas has increased the number of pedestrian fatalities. Economic factors. Officials from seven of the 10 selected states noted that economic factors such as low unemployment can affect traffic safety. For example, officials in one state said that fatalities decreased during the 2009 recession, but when the economy began to improve and more people were employed, fatalities increased. These officials noted that the number of people driving is also affected by gas prices because when prices increase, people drive less. Changes to state laws. Officials from eight of the 10 selected states said that changes in state laws can affect whether a state meets its targets. For example, officials from one state said fatalities increased beginning in 2012 when the state legislature passed a law allowing the operation of a motorcycle without a helmet, and continued to increase through 2017 when the state legislature increased the speed limit on some roads from 70 to 75 miles per hour. These officials also noted that they expect fatalities in their state to further increase as a result of the recent legalization of the recreational use of marijuana. However, the extent to which states achieve targets does not necessarily reflect whether the number of fatalities has increased or decreased over time. 
First, states that achieved fatality targets did not necessarily experience reduced traffic fatalities. For example, for the 2017 targets, state-reported NHTSA data show that 10 of 52 states achieved their target for the pedestrian fatalities performance measure, but five of these 10 states also experienced an increase in pedestrian fatalities compared to their 2012 through 2016 historical average. These data also show that the remaining 42 states did not achieve their pedestrian fatality target. Second, some states have experienced a decrease in traffic fatalities while not achieving their targets. For example, state-reported NHTSA data show that 31 states did not achieve their targets for the speeding-related fatalities performance measure. However, these same data show that 11 of these 31 states decreased the total number of these fatalities over their 2017 target period compared to their 2012 to 2016 average. Further, states that established targets that represented an increase in fatalities from historical averages (increasing targets) were more likely to achieve them than states that established targets that represented a decrease or no change in fatalities compared to their historical averages (decreasing targets), according to state-reported NHTSA data. Specifically, in 2017, for all of the required fatality performance measures, these data show that states that set increasing fatality targets relative to their historical 2012 to 2016 average achieved them at a higher rate than states that set targets that represented a decrease or no change to the number of fatalities. (See fig. 1.) For example, for the total fatality performance measure, eight states set increasing targets relative to their historical 2012 to 2016 average, while 44 states set decreasing or unchanged targets relative to their averages. 
However, these data show that six of the eight states with increasing targets for the total fatalities performance measure achieved them, while only three of the 44 states with decreasing or unchanged targets achieved theirs. In response to statute, NHTSA requires states to assess and report progress in achieving targets in the following year’s Highway Safety Plan and the NHTSA Annual Reports each year. Such an approach is consistent with federal standards for internal control, which state that agencies should communicate quality information, including about activities and achievements. According to NHTSA officials, state evaluations of their progress in these plans and reports are designed to be an interim assessment of a state’s progress. For example, because fatality data can take up to 2 years to be recorded by states in FARS and validated by NHTSA, final FARS data are not available when states are required to report on the achievement of the prior fiscal year’s targets in their Highway Safety Plans. Therefore, NHTSA encourages states to use state data to conduct this assessment or provide a qualitative analysis of the progress made in achieving these targets when FARS data are not available. Upon review of these reports, NHTSA publishes them on its website. While NHTSA has established requirements for states to provide assessments of their progress on achieving the prior year targets in their Highway Safety Plans and Annual Reports, we found that many states have not done so. For example, in the 2019 Highway Safety Plans submitted to NHTSA in July 2018, a third of states (19 of 52) did not provide an assessment of the progress they had made in achieving the fatality targets established in their 2018 Highway Safety Plans. 
Similarly, in the 2018 Annual Reports, submitted to NHTSA in December 2018, half of states (26 of 52) did not provide an assessment of whether they had made progress toward achieving the fatality targets established in their 2018 Highway Safety Plans. Instead, many of these states assessed progress for an earlier year or performance period. NHTSA officials acknowledged that some states are not clear on which target years to assess in their Highway Safety Plans and Annual Reports. NHTSA officials stated that they work closely with states to review the contents of the Highway Safety Plans and Annual Reports. To do so, NHTSA has developed guides to help its staff review Highway Safety Plans and Annual Reports to ensure states meet requirements to provide assessments of their progress. NHTSA officials stated they expect most states to comply with the requirements to assess progress in future Annual Reports and Highway Safety Plans because states will be more familiar with the reporting requirements. However, NHTSA has had similar requirements for states to provide in-progress assessments in these documents for a number of years. For example, the requirement to report on progress achieving highway safety performance measure targets identified in the Highway Safety Plans in the Annual Report was introduced in 2013. Similarly, NHTSA’s regulations have also required states to include an assessment of their progress in meeting state performance targets in their Highway Safety Plans since 2013. Without additional clarification from NHTSA to states on which target years to assess in their Highway Safety Plans and Annual Reports, NHTSA and other stakeholders may lack a timely understanding of the progress states have made in achieving their targets. NHTSA could provide such clarification through outreach to states, or by providing guidance on NHTSA’s website. 
Beyond the required interim state assessments of progress contained in the Annual Reports and Highway Safety Plans, NHTSA does not communicate to the public and other stakeholders about whether states eventually achieve their fatality targets. Federal standards for internal control state that agencies should communicate quality information, including about activities and achievements, so that external parties–such as Congress and other stakeholders–can help realize agency goals and objectives. NHTSA officials said that they have reported on states’ achievement of fatality targets in the past. For example, NHTSA previously reported to Congress in 2017 on states’ achievement of the fatality targets established in the 2014 and 2015 Highway Safety Plans in response to a statutory requirement. However, NHTSA did not provide this report to other stakeholders, and it has not subsequently reported to Congress or the general public on whether states achieved targets. NHTSA officials told us they did not have any plans to develop a similar report in the future because the requirement to report to Congress was repealed in January 2019. NHTSA was directed by statute in January 2019 to provide information on its website on state performance relative to the targets in the Highway Safety Plan. The statute broadly directs NHTSA to report on state performance and does not specifically direct NHTSA to communicate whether states eventually achieve their performance targets. NHTSA officials told us that this effort was in its initial stages and NHTSA is still in the process of determining how to meet the statutory requirement. By improving external communication of states’ achievement of fatality targets, NHTSA could give stakeholders better insight into the results states and NHTSA have achieved in their efforts to reduce fatalities and hold states more accountable for their use of federal safety funds. 
NHTSA could provide such information to all stakeholders through its planned website or by developing an alternative mechanism to convey this information. States’ Achievement of Serious Injury Targets Is Unclear, and Consistent Data Will Not Be Available for Some Time We were not able to determine the extent to which states achieved NHTSA serious injury targets from 2014 through 2017 because states’ definitions of “serious injury” have changed over time. As a result, state serious injury data used to set targets and analyze results may not be comparable year to year over this time period. NHTSA officials noted that changes to serious injury definitions can affect the total number of serious injuries recorded by the states. Similarly, officials from the Association of Transportation Safety Information Professionals told us that based on their experience, when there is a change to how serious injury data are defined or collected by states, total serious injury numbers in that state may change by up to 15 percent the following year. In some cases, changes to serious injury totals may be more extensive. For example, in 2016, one state changed its definition as part of implementing a new database to store crash records. After this change, the number of serious injuries nearly doubled from the previous year. NHTSA and FHWA have taken steps to standardize how states define and report serious injury data. In 2016, both FHWA and NHTSA set out requirements for all states to use a specific definition of serious injury by April 15, 2019, establishing a single national standard definition that will be used under both NHTSA’s and FHWA’s performance management framework. This standard includes requirements for states to integrate this definition into their practices for collecting and recording serious injury data. 
According to NHTSA and FHWA, this standard will ensure consistent, coordinated, and comparable data at the state and national levels and will assist stakeholders in addressing highway safety challenges. Moreover, according to officials from the Association of Transportation Safety Information Professionals, adoption of this standard will be an improvement upon the previous approaches used by states to define serious injuries. However, it will take time for states to adopt this standard and collect consistent data under the new national standard for serious injuries to use in the NHTSA’s and FHWA’s performance management frameworks. First, NHTSA’s and FHWA’s regulations require that states establish 5-year averages for serious injury targets; however, according to states’ most recent reporting, many states have only recently adopted NHTSA and FHWA’s national standard for defining serious injuries. Specifically, based on our review of information submitted by states in their 2018 HSIP reports, we found that 18 states had reported that they were fully compliant with the national standard as of the end of August 2018. FHWA officials told us that, based on their review of the information in the 2018 HSIP reports, they estimated that an additional 22 states planned to fully align their serious injury definition with requirements in the national standard by April 2019, and that the remaining 12 states had not indicated if they would be compliant with the national standard by that time. FHWA officials said they would conduct a compliance assessment in fall 2019 to determine whether states fully adopted the national standard. Second, data collected under previous, differing definitions cannot be retroactively converted to equivalent data under the definition established by the national standard, and thus it will take time to develop a consistently defined set of serious injury data. 
Specifically, for those states that have adopted the new standard in the last year, it may be 4 to 5 years until a 5-year average of serious injury data under the new standard can be reported, while the transition period may be longer for those states that have yet to adopt the standard. For example, the American Association of State Highway and Transportation Officials noted that if a state was not currently using the national standard, it would take a lengthy and resource-intensive effort to adopt the standard, including changing reporting processes, guidance, and training. State officials we interviewed also said the costs of updating software and paper forms to collect and store serious injury information, and of training state officials to collect serious injury data using the national standard, could further delay implementation. NHTSA and FHWA have taken steps to assist states with the transition to the new national standard for serious injuries. For example, in preparation for issuing the regulations, NHTSA and FHWA published state-specific guidance to help states adopt an interim standard before the national standard took effect in 2019. According to NHTSA and FHWA officials, this guidance, which aligned states’ existing definitions with a scale for injury severity, helped states provide more consistent serious injury statistics prior to implementing the new national standard in the FHWA rulemaking. While this interim standard helps improve consistency of the definition of serious injury within a state, it does not standardize the specific definition across all states as does the new national standard. In addition, NHTSA and FHWA developed an outreach program and training to help states adapt to the new requirement prior to implementation in 2019. 
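The transition timeline described above is simple to state in arithmetic terms: a state cannot report a 5-year average built entirely on the new standard until five calendar years of data have been collected under it. The helper below is a hypothetical sketch of that arithmetic, not an agency calculation.

```python
def first_full_window_year(adoption_year: int, window: int = 5) -> int:
    """Last calendar year of the first complete 5-year window of serious
    injury data collected entirely under the new national standard.

    Illustrative only: a state fully adopting the standard in 2019 would
    not complete such a window until the end of 2023.
    """
    return adoption_year + window - 1
```

This is consistent with the report's observation that recent adopters face a 4- to 5-year wait, and that the wait is longer for states that have yet to adopt the standard.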
While the transition occurs and until states have collected 5 years of data under the new national standard for serious injuries, NHTSA and FHWA plan to take different approaches to assessing states’ progress toward serious injury targets and communicating the results of their assessments. NHTSA officials told us that they would wait to assess progress until the states had adopted a consistent set of data under the national standard for serious injuries. NHTSA officials also noted that they did not assess whether states achieved their serious injury targets in NHTSA’s 2015 and 2017 reports to Congress, because of limitations with the data that the new standard seeks to mitigate. However, even once the transition to the new national standard for serious injuries is complete, NHTSA, as with state fatality targets, does not have a formal mechanism for communicating whether states eventually achieve their serious injury targets. Communication of states’ achievement of both fatality and serious injury targets could help NHTSA hold states more accountable for their use of federal funds. In contrast, as directed by statute and regulations, FHWA plans to evaluate whether each state has met or made “significant progress” toward meeting both the fatality and serious injury-related targets by improving upon the state’s historical 5-year baseline for four of the five required performance measures. As directed by statute and FHWA’s regulations, states that FHWA determines either have not met their 2018 targets or not made significant progress are required to develop an implementation plan to describe how they will achieve targets in future years. Further, these states must use a portion of their fiscal year 2021 HSIP funding exclusively for HSIP projects and may not transfer this portion of their HSIP funding to other core highway programs. 
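The "significant progress" test described above can be sketched as a count of improved measures against a required threshold. This is a hedged simplification under the assumptions stated in the comments: FHWA's actual determination also credits states that meet their targets and applies further rules not modeled here, and all names are illustrative.

```python
def made_significant_progress(results: dict, baselines: dict,
                              required_improved: int = 4) -> bool:
    """Simplified sketch of FHWA's significant progress determination.

    A measure counts as improved when the state's latest 5-year average
    is better than (below) its historical 5-year baseline; per the report,
    four of the five required measures must improve. Hypothetical keys and
    interface; not FHWA's actual computation.
    """
    improved = sum(1 for m in baselines if results[m] < baselines[m])
    return improved >= required_improved
```

Under this sketch, a state whose 5-year averages improved on four of its five baselines would avoid the implementation plan and funding restrictions described above.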
Once FHWA’s evaluation of state progress is complete, it plans to communicate the extent to which states achieve these targets on its website, which contains information on the 5-year averages that make up the baseline, targets, and results, and tracks this information over time. FHWA officials said that, as states transition to the new national standard for serious injuries, the use of data collected under multiple definitions in a state may occur in future assessments of significant progress as states collect 5 years of data under the national standard. However, FHWA officials said that states will be able to take the limitations in the data into consideration and adjust targets each year as needed to minimize the risk that states’ results will vary significantly from their targets. An official from the Association of Transportation Safety Information Professionals said that he expects states may recalculate targets to account for changes in the data over the transition to the national standard for serious injuries, but that states have not expressed concerns about doing so. More broadly, FHWA officials also stated that modifying its approach for the transition period would require additional rulemakings by both FHWA and NHTSA, which could be a lengthy process and thus may not be completed before most states collect 5 years of data under the new standard. States Have Not Fully Incorporated Performance Measures and Targets into Traffic Safety Funding Decisions, but NHTSA and FHWA Are Taking Steps to Assist States Over Half of States Use Performance Measures and Targets to Make Funding Decisions under NHTSA’s Framework, and NHTSA Is Taking Steps to Improve Reporting Officials from a majority of the states we surveyed reported that the performance measures and targets in the NHTSA framework influenced which projects they selected to fund to improve traffic safety and reduce fatalities and serious injuries. (See fig. 2.) 
For example, officials from two states we surveyed reported that the performance measures helped them identify emerging traffic safety trends, such as higher rates of speeding; as a result, the states directed more funding to projects addressing those issues. Officials from another state noted that the performance measures have led them to develop new projects to reduce cyclist and pedestrian fatalities, in addition to their traditional projects targeting impaired driving or seat belt use. In addition, other state officials responded that setting targets influenced their project selection by requiring staff to identify and fund projects that would have a positive effect on the targets established. When NHTSA developed the performance measures for states, it noted that, in addition to helping states monitor and evaluate their progress, performance measures can be used to allocate resources towards the most pressing safety issues. Officials from 19 states we surveyed said that the performance measures in the NHTSA framework did not influence their project selection. Similarly, officials from 23 states said the targets did not influence their project selection. Officials we surveyed cited a variety of reasons for why they did not use this performance information to select projects. For example, officials from three of these states said their states already had a data-driven or performance-based approach to project selection. Officials from one state explained that the NHTSA performance measures provide them with a general overview of safety trends in the state, but that they rely on more detailed data analysis of safety trends in different localities to select projects. Officials from another state said they do not use the specific targets to select projects, because they look for ways to decrease fatalities, not to achieve a specific number of fatalities in a given year. 
Officials from another state explained that they receive limited safety funding and therefore select projects to make sure they are eligible to qualify for NHTSA grants. NHTSA officials acknowledged that the performance management framework can pose challenges for some states, but noted that they provide technical assistance and guidance to help states make the best use of their performance information. State officials reported other safety benefits from NHTSA’s performance framework in addition to improved project selection. Specifically, officials from almost three-quarters of states we surveyed said the NHTSA framework helped them to improve highway safety in their state. For example, officials from five states we surveyed reported that the framework has improved how they identify highway safety problems, such as by formalizing a data-driven approach to highway safety in their state. Officials we surveyed also noted that by requiring states to reach agreement on some NHTSA and FHWA targets, the framework helped them to increase collaboration with other highway safety stakeholders in the state. For example, officials from one state reported that the collaboration between the state department of transportation and highway safety office has increased their awareness of how physical road improvements and behavioral projects can work together to improve safety in the state. Officials from the 14 states who reported that the framework has not helped them improve safety cited various reasons, including that they used data-driven approaches prior to NHTSA’s framework and that the framework has increased their administrative burden. NHTSA officials agreed that the framework imposed some administrative burdens on states, but stated that the benefits of using a performance-based approach to manage state highway safety programs outweighed any costs for states. 
To ensure that the framework helps states to improve traffic safety, NHTSA regulations require states to include at least one performance measure (and associated target) for each program area contained in their Highway Safety Plans. These requirements are consistent with federal standards for internal control, which state that agencies should establish and operate activities to monitor the internal control system. Such monitoring activities should be built into the agency’s operations. We found that 49 states included performance measures for all the program areas in their 2019 Highway Safety Plans. For example, one state uses the number of motorcyclist fatalities and unhelmeted motorcyclist fatalities as performance measures for its motorcycle safety program area. The remaining three states included performance measures for at least 80 percent of their program areas. By requiring states to establish performance measures for their program areas, NHTSA can help ensure states have appropriate performance measures in place to evaluate whether they are achieving the objectives of their highway safety programs. NHTSA’s regulations also require states to describe the linkage between the countermeasure strategies—the safety initiatives a state plans to fund to address highway safety problems—and the performance targets in their Highway Safety Plans. Requiring states to link their funding decisions with their targets aligns with a leading practice for performance management we have previously identified: that agencies should use performance information to allocate resources. However, when we examined the sections of the 2019 Highway Safety Plans where states are prompted to provide this linkage, we found that less than a third of states (12 of 52) described all the linkages between their performance targets and the countermeasure strategies in those sections. 
NHTSA officials noted that states are directed to submit similar information in other locations throughout the plans, and that NHTSA’s review of the 2019 plans credited states with making these linkages by considering information in other sections of the plan. NHTSA has taken steps this year to improve states’ reporting and its own review of the 2020 Highway Safety Plans. For example, NHTSA officials told us that they have held in-person meetings with state highway safety officials to emphasize the need to provide linkages between their targets and countermeasures in their 2020 Highway Safety Plans. NHTSA officials said they have also held training in 2019 for staff who review these plans to ensure states adhere to reporting requirements. Specifically, during the training, NHTSA officials said they provided guidance to staff on reviewing Highway Safety Plans; this guidance prompts reviewers to check whether states link their countermeasure strategies with targets, and to provide feedback to states that have not provided these linkages. As a result of these actions, NHTSA anticipates that states will more clearly identify linkages in their 2020 plans. Some States Use Performance Measures and Targets for Funding Decisions under FHWA’s Framework, and the Agency Is Developing Guidance to Assist States While states recently began setting performance measure targets under FHWA’s framework in 2017, officials from about a third of states we surveyed reported that performance measures in FHWA’s framework influenced their decisions about which infrastructure-based safety projects to fund. (See fig. 3.) Slightly fewer respondents said the targets they set influenced their project selection. These states reported that this performance information influenced their decision making in different ways. For example, officials from one state reported funding more pedestrian and bicycle safety projects as a result of the trends indicated by the performance measures. 
Officials from another state said they have shifted to selecting projects that can be constructed quickly in order to reach their annual safety targets. Officials from about two-thirds of states we surveyed said the performance measures and performance targets did not influence their HSIP project selection. Instead, many of these state officials reported that the FHWA performance framework has not changed their project selection methodology, and that they used alternative data-driven approaches to select highway projects. For example, officials from four states reported that they used their 5-year Strategic Highway Safety Plans, which highlight traffic safety issues, to guide project selection. In other cases, state officials reported that they continued to use a data-driven approach, such as cost-benefit analysis or crash data analysis, to maximize safety benefits and select the most cost-effective highway safety projects. This approach is consistent with a recent FHWA survey of state departments of transportation, which reported that most states used their 5-year Strategic Highway Safety Plans and cost to prioritize projects. Federal guidelines, including those at FHWA, encourage the use of cost-benefit analysis for selecting infrastructure projects. We have also previously reported that such analysis can lead to better-informed transportation decisions. According to FHWA officials, performance management is not intended to supplant the use of other data-driven project selection methods, but to complement and be integrated into existing methods. To help further this integration, FHWA officials told us that they are developing a guide to better explain how states can incorporate the use of performance measures into existing methods, such as cost-benefit analysis, to select projects and achieve their safety targets. FHWA officials expect to issue this guide by January 2020. 
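One way to picture the cost-benefit approach some states described is as a simple benefit-cost ratio ranking of candidate safety projects. The sketch below is illustrative only; the project names and dollar figures are hypothetical, not actual state data or an FHWA-prescribed method.

```python
# Illustrative benefit-cost ranking for selecting highway safety projects.
# Project names and dollar figures are hypothetical, not actual state data.

def rank_by_bcr(projects):
    """Return projects sorted by benefit-cost ratio, highest first."""
    return sorted(projects, key=lambda p: p["benefit"] / p["cost"], reverse=True)

candidates = [
    {"name": "Median cable barrier",   "benefit": 4_200_000, "cost": 1_000_000},
    {"name": "Shoulder rumble strips", "benefit": 900_000,   "cost": 150_000},
    {"name": "Intersection lighting",  "benefit": 600_000,   "cost": 400_000},
]

for project in rank_by_bcr(candidates):
    bcr = project["benefit"] / project["cost"]
    print(f"{project['name']}: benefit-cost ratio = {bcr:.1f}")
```

Ranking by benefit-cost ratio rather than by total benefit is what makes this approach favor the most cost-effective projects: a low-cost treatment with a modest benefit can outrank a costlier project with a larger absolute benefit.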
Overall, a slight majority of states we surveyed (27 of 52) reported that FHWA’s performance framework assisted them in improving safety. Officials cited safety benefits beyond improved project selection, such as increased awareness of highway safety issues among state leaders and the public, and increased collaboration with other highway safety agencies within the state. State officials who did not find the framework helpful cited various reasons. For example, some state officials we surveyed said they were already using performance measures prior to FHWA’s framework. Other officials surveyed said FHWA’s performance framework was not helpful because they have a “Vision Zero” or a “Toward Zero Deaths” policy in their state. According to these officials, under such a policy, the state’s goal is to achieve zero traffic fatalities. Officials from a state with such a policy explained that setting a target that allows for any number of fatalities was not acceptable to the public or the state because it suggests that not every life is important. FHWA officials said, however, that setting annual targets can ensure states are on track to reach their long-term goals, such as reducing fatalities to zero. To encourage states to integrate the performance framework into their other safety plans, FHWA regulations require states to link their performance measure targets to the long-term goals in their 5-year Strategic Highway Safety Plans. States must provide a description in their HSIP reports of how each target supports these goals. FHWA has developed and issued a template for the HSIP report that prompts states to describe the link between their targets and their Strategic Highway Safety Plans’ goals. However, about half of the states did not describe how all of their targets support their Strategic Highway Safety Plans’ goals in their 2018 HSIP reports, and 13 of these states did not describe these linkages for any of their targets. 
In response to our analysis, FHWA officials have taken additional actions to improve states’ HSIP reporting. Specifically, FHWA officials provided training to staff and state officials that referenced our analysis that states did not describe the linkages between targets and long-term goals in their HSIP reports. During the training, FHWA officials emphasized the importance of including such information as states prepare their 2019 HSIP reports. Additionally, FHWA officials said they are updating the guide its staff uses to review HSIP reports to ensure states are describing how the targets they set support their Strategic Highway Safety Plan’s goals. Conclusions In light of the large number of fatalities that occur each year on the nation’s highways and the billions of federal dollars DOT provides annually to states to improve traffic safety, the ability to assess the outcomes of federal surface transportation safety programs and hold grant recipients accountable for results is critical. NHTSA and FHWA have made great strides over the last decade in moving to a performance-based approach for traffic safety funding to improve accountability for federal funds. The results, however, that states have achieved under these frameworks are not always clear. For example, NHTSA has required states to report on their interim progress achieving targets, but states have not had clear direction on what results to assess. In addition, NHTSA lacks a formal mechanism to communicate whether states have been achieving the targets set under their framework. Without improved communication of progress, Congress will be limited in its ability to hold NHTSA and states accountable for their use of federal funds. Moreover, improved reporting of states’ achievements under NHTSA’s framework could help provide insight into the effectiveness of the overall federal traffic safety program. 
Recommendations for Executive Action We are making two recommendations to NHTSA: The NHTSA Administrator should provide direction and clarification to states to ensure compliance with requirements to assess and report progress made in achieving fatality targets. (Recommendation 1) The NHTSA Administrator should develop and implement a mechanism that communicates to Congress and other stakeholders whether states achieve their fatality and serious injury targets. (Recommendation 2) Agency Comments We provided a draft of this report to DOT for comment. In its comments, reproduced in appendix III, DOT stated that it concurred with our recommendations. DOT also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Transportation, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Susan Fleming at (202) 512-2834 or flemings@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. Appendix I: Survey of State Highway Safety Offices on NHTSA’s Performance Management Framework The questions we asked in our survey of state Highway Safety Offices and the aggregate results of the responses to the closed-ended questions are shown below. Our survey consisted of closed- and open-ended questions. We do not provide results for the open-ended questions. We sent surveys about the National Highway Traffic Safety Administration’s (NHTSA) performance framework to 52 state highway safety offices in the 50 states, the District of Columbia, and Puerto Rico. We received responses from 50 state highway safety offices, for a 96 percent response rate. 
For more information on our survey methodology, see page 4 of this report. Q1a. NHTSA has implemented a performance management framework that requires states to set targets for highway safety performance measures and to track their progress towards meeting those targets. Generally speaking, has NHTSA’s highway safety performance framework assisted you in improving highway safety in your state? Q1b. Why has NHTSA’s highway safety performance framework assisted or not assisted you in improving highway safety in your state? (Written responses not included.) Q2a. Each year, states use Highway Safety Plan (HSP) funding and select projects to address identified highway safety problems. How much, if at all, has NHTSA’s highway safety performance framework changed your state’s current approach to selecting HSP projects? Q2b. In what ways, if any, has NHTSA’s highway safety performance framework changed your state’s current approach to selecting HSP projects? (Written responses not included.) Q3a. Thinking about your state’s current HSP program, how much, if at all, did NHTSA’s required highway safety performance measures influence which projects your state selected? Q3b. In what ways, if any, have NHTSA’s required performance measures influenced which HSP projects your state selected? (Written responses not included.) Q4a. Thinking again about your state’s current HSP program, how much, if at all, did the specific targets your state set for NHTSA’s required performance measures influence which projects your state selected? Q4b. In what ways, if any, have the specific targets your state set for NHTSA’s required performance measures influenced which HSP projects your state selected? (Written responses not included.) Appendix II: Survey of State Departments of Transportation on FHWA’s Performance Framework The questions we asked in our survey of state departments of transportation and the aggregate results of the responses to the closed- ended questions are shown below. 
Our survey consisted of closed- and open-ended questions. We do not provide results for the open-ended questions. We surveyed 52 state departments of transportation in the 50 states, the District of Columbia, and Puerto Rico about the Federal Highway Administration’s (FHWA) performance framework. We received responses from all 52 state departments of transportation, for a 100 percent response rate. For more information on our survey methodology, see page 4 of this report. Q1a. FHWA has implemented a performance management framework that requires states to set targets for highway safety performance measures and to track their progress towards meeting those targets. Generally speaking, has FHWA’s highway safety performance framework assisted you in improving highway safety in your state? Q1b. Why has FHWA’s highway safety performance framework assisted or not assisted you in improving highway safety in your state? (Written responses not included.) Q2a. Each year, states use Highway Safety Improvement Program (HSIP) funding and select projects to address identified highway safety problems. How much, if at all, has FHWA’s highway safety performance framework changed your state’s current approach to selecting HSIP projects? Q2b. In what ways, if any, has FHWA’s highway safety performance framework changed your state’s current approach to selecting HSIP projects? (Written responses not included.) Q3a. Thinking about your state’s current HSIP program, how much, if at all, did FHWA’s required highway safety performance measures influence which projects your state selected? Q3b. In what ways, if any, have FHWA’s required performance measures influenced which HSIP projects your state selected? (Written responses not included.) Q4a. Thinking again about your state’s current HSIP program, how much, if at all, did the specific targets your state set for FHWA’s required performance measures influence which projects your state selected? Q4b. 
In what ways, if any, have the specific targets your state set for FHWA’s required performance measures influenced which HSIP projects your state selected? (Written responses not included.) Appendix III: Comments from the Department of Transportation Appendix IV: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Sara Vermillion (Assistant Director); Matt Voit (Analyst-in-Charge); Carl Barden; Caitlin Cusati; Timothy Guinane; Geoffrey Hamilton; Georgeann Higgins; Catrin Jones; Jesse Mitchell; Joshua Ormond; Kelly Rubin; and Laurel Voloder made key contributions to this report.
Why GAO Did This Study Over 37,000 people were killed in traffic crashes on the nation's highways in 2017. Within the U.S. Department of Transportation (DOT), two agencies—NHTSA for behavioral factors and FHWA for highway infrastructure—provide about $3 billion annually to states for programs to improve traffic safety. To ensure that states are held accountable for these funds, NHTSA and FHWA developed performance management frameworks that require states to use performance measures and targets in tracking traffic fatalities and serious injuries. GAO was asked to review NHTSA's and FHWA's traffic safety performance management frameworks. This report examines the extent to which: (1) states have met fatality and serious injury targets, and NHTSA's and FHWA's approaches to assessing states' achievements, and (2) states have used performance measures and targets to make traffic safety funding decisions. GAO analyzed state-reported targets and NHTSA data from 2014 through 2017—the most recent data available—for all 50 states, the District of Columbia, and Puerto Rico; surveyed these states on the use of performance measures and targets; reviewed requirements in NHTSA's and FHWA's frameworks; and interviewed officials from NHTSA, FHWA, and 10 states, selected to obtain a mix of population sizes, geographic locations, and other factors. What GAO Found From 2014 through 2017, states did not achieve most of the fatality-related targets they set under the National Highway Traffic Safety Administration's (NHTSA) performance management framework (see table), and the number of serious injury targets states achieved during this period is unclear. GAO did not assess whether states achieved targets they set under the Federal Highway Administration's (FHWA) framework because the data were not yet available. State officials we interviewed said that achieving fatality targets may depend on factors outside their control, such as demographic, economic, and legislative changes. 
GAO's analysis of states' reports showed that nearly half of states did not provide the required assessment of progress to NHTSA on their most recent set of fatality targets. While NHTSA has taken steps to improve its review of these reports, officials acknowledged states are not clear on which target years to assess. Further, NHTSA lacks a mechanism to report whether states eventually achieve these targets. As a result, NHTSA and other stakeholders have limited insight into the results states have achieved from their use of federal safety funds. The extent to which states achieved serious injury targets is unclear because states have changed their definitions of serious injury over time. To ensure the consistency of these data, NHTSA and FHWA established a standard definition for reporting serious injuries, which states are in the process of adopting. In a survey that GAO administered, officials from a majority of states said that performance measures informed how they selected projects under NHTSA's framework. GAO found, however, that in the 2019 plans submitted by states to NHTSA, less than a third of states reported how performance targets and funded projects were linked. Since the submission of those plans, NHTSA has provided training and guidance to its staff to ensure future plans will more clearly identify these links. Under FHWA's framework, about one-third of states reported in GAO's survey that performance measures influenced their project selection; the remaining two-thirds reported using an alternative data-driven approach, such as cost-benefit analysis. FHWA officials said they are developing guidance to help states integrate performance measures and targets into methods that states are currently using to select highway safety projects. 
What GAO Recommends GAO recommends that NHTSA (1) provide additional direction and clarification to ensure states assess and report progress in meeting fatality targets, and (2) report on states' final achievement of targets. DOT concurred with the recommendations.
Background The chemical industry relies on the use of natural resources as inputs to make chemical products, and the industry’s outputs, in turn, can have an impact on the environment. The International Trade Administration of the Department of Commerce identifies the chemical industry as one of the largest manufacturing industries in the United States, with more than 10,000 companies producing more than 70,000 products. The term ‘sustainability’ can have many interpretations depending on the context in which it is used. Sustainability may refer to economic, environmental, or social sustainability. Achieving all three—a concept known as the “triple bottom line”—has become a goal of some businesses, including many in the chemical industry. Mitigating the potential negative health and environmental consequences of chemical production requires thoughtful design and evaluation throughout the life cycle of chemical processes and products—that is, a thorough assessment of effects resulting from stages of the life cycle such as sourcing the raw materials, processing raw materials into products, handling and disposal of by-products and industrial waste, product use, and end-of-life disposal or recycling (see fig. 1). Attempting to improve one stage of the life cycle without considering the others runs the risk of moving sustainability problems around rather than solving them. Analyzing the full life cycle of a process or product can reveal benefits as well as trade-offs or unintended consequences of different choices along the way. Legal Framework Consistent with the goals of sustainable chemistry, which include making chemicals in a purposefully more environmentally benign way, several federal requirements and directives address chemical and other risks to public health and the environment. 
For example, EPA’s ability to effectively implement its mission of protecting public health and the environment is critically dependent on credible and timely assessments of the risks posed by chemicals. Such assessments are the cornerstone of scientifically sound environmental decisions, policies, and regulations under a variety of statutes, such as the Toxic Substances Control Act (TSCA) (as amended), which provides EPA with authority to obtain information on chemicals and to regulate those that it determines pose unreasonable risks; the Safe Drinking Water Act (SDWA) (as amended), which authorizes EPA to regulate contaminants in public drinking water systems; and the Federal Food, Drug, and Cosmetic Act (as amended), which authorizes the Food and Drug Administration to oversee the safety of food, drugs, medical devices, and cosmetics. The Federal Acquisition Regulation generally requires that federal agencies advance sustainable acquisition by ensuring that 95 percent of new contract actions for the supply of products and for the acquisition of services meet certain sustainability goals. Supply, Demand, and Economics Various economic factors influence the development of sustainable products. Consumers are increasingly seeking products that help them reduce their own environmental footprints, and companies are responding by developing products made with safer chemicals and by increasing the use of recycled, biobased, and renewable materials. The supply of such products can be influenced by the costs of production, competitive advantage, and reputational effects. For example, if a more sustainable product or process helps a firm differentiate itself from competitors and creates a competitive advantage that consumers recognize and value, the firm has an incentive to offer more sustainable products. There are a number of inherent challenges in the market for sustainable products in the industry. 
For example, substantial upfront costs coupled with uncertainty about consumer demand may be a barrier to entering the market. If the benefits of taking a more sustainable approach are valued by consumers, companies may be able to recoup the higher costs by charging higher prices without reducing demand. However, if the benefits are not easily understood and measurable (e.g., long-term health benefits), or are external to consumers (e.g., broad environmental impacts), then consumers may not be willing to pay higher prices for more sustainable products. In addition to market incentives that encourage firms to produce more sustainable products, government entities can, when appropriate, take actions such as offering subsidies, award programs, or tax credits, or imposing limits, bans, and taxes. Governments may also provide environmental and health-related information to help guide the choices of consumers, workers, downstream users, and investors. For new markets and investments to be realized, sufficient information is needed on the environmental damage and health hazards that can be associated with some chemicals and the possibilities that exist to develop alternatives that overcome these challenges. Stakeholders Vary in How They Define and Assess the Sustainability of Chemical Processes and Products In February 2018, we reported that stakeholders vary in (1) how they define sustainable chemistry, (2) how they assess sustainability, and (3) which environmental and health factors they considered most important. Most companies that responded to our survey agreed that a standardized set of factors for assessing sustainability would be useful. Definitions of Sustainable Chemistry Stakeholders do not agree on a single definition of sustainable chemistry. In total, we asked 71 representatives of stakeholder organizations how they or their organization defines sustainable chemistry. 
The most common response we received was that sustainable chemistry includes minimizing the use of non-renewable resources. Other concepts that stakeholders commonly associated with sustainable chemistry included minimizing the use of toxic or hazardous chemicals, considering trade-offs between various factors during each phase of the life cycle, minimizing energy and water use, and increasing biodegradability or recyclability. Based on a review of the literature and stakeholder interviews, we identified several common themes underlying what sustainable chemistry strives to achieve, including: improve the efficiency with which natural resources—including energy, water, and materials—are used to meet human needs for chemical products while avoiding environmental harm; reduce or eliminate the use or generation of hazardous substances in the design, manufacture, and use of chemical products; protect and benefit the economy, people, and the environment; consider all life cycle stages, including manufacture, use, and disposal (see fig. 1), when evaluating the environmental impact of a product; and minimize the use of non-renewable resources. Approaches for Assessing Sustainability Stakeholders such as chemical companies, federal agencies, and others use many different approaches for assessing the sustainability of chemical processes and products. While the varying approaches provide flexibility to meet the priorities of the user, the lack of a standardized approach makes it very difficult for customers, decision makers, and others to compare the sustainability of various products to make informed decisions. Some companies and organizations design their own approaches for assessing chemical sustainability and use those approaches to make internal decisions on product design and processing, while others use metrics, chemical selection guides, or third-party certifications and assessment tools that are common to their industry. 
For example, chemical companies use several established metrics to measure their efficiency in using materials to generate products. The variety of metrics used—and variation in the underlying factors included in their calculation—hinders the ability of companies and others to compare the sustainability of chemical processes or products. In addition to common metrics, some sectors have developed guides that companies and others can use to compare the sustainability of materials used in chemical processes, including solvent selection guides and reagent guides. Solvent selection guides assess solvents based on a variety of sustainability criteria, such as environmental, health, and safety impacts; recyclability; and regulatory concerns. One pharmaceutical company reported a 50 percent decrease in the use of certain hazardous solvents after the introduction of a solvent selection guide. NGOs, federal agencies, and professional associations are also developing product certification programs and assessment tools. Certification programs set minimum criteria that products must meet to be certified, such as biodegradability, toxicity, performance, or water usage. Certifying bodies make databases of certified products publicly available and allow manufacturers to affix certification labels or logos to their products. Environmental and Health Factors Considered Most Important Companies prioritize various environmental and health factors differently when assessing sustainability, according to our survey of 27 companies. We asked respondents to indicate the relative importance their company gives to each of 13 environmental and health factors by comparing a pair of factors and selecting the factor they considered more important to optimize, even if that benefit came at the expense of the other factor. 
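The paired-comparison survey method described above can be summarized by tallying, for each factor, how many head-to-head comparisons it “won.” The sketch below illustrates only the tallying logic; the individual responses are invented for illustration, not survey data from the report.

```python
from collections import Counter

# Each tuple is one forced-choice comparison: (preferred factor, other factor).
# These responses are invented; only the tallying method is illustrated.
responses = [
    ("toxicity of the product", "energy use"),
    ("toxicity of the product", "water use"),
    ("toxicity of the product", "percentage of renewable or biobased content"),
    ("energy use", "percentage of renewable or biobased content"),
    ("water use", "percentage of renewable or biobased content"),
]

wins = Counter(preferred for preferred, _ in responses)
ranking = [factor for factor, _ in wins.most_common()]
print(ranking)  # factors ordered from most to least often preferred
```

A factor that never wins a comparison (here, the renewable-content factor) accumulates zero wins, which is how a “least important” factor emerges from this kind of tally.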
For example, a company might compare “energy use” with “water use” and determine that it was more important to their company to maximize the sustainability benefit relative to the “energy use” of a process even if it resulted in less sustainable use of water. We found that, overall, “toxicity of the product” was the most important factor for the companies surveyed and “percentage of renewable or biobased content” was the least important factor when making trade-offs (see fig. 2). However, there were sizable differences between companies and sectors regarding which factors they considered most important to optimize. For a more detailed description of our analysis, see our report Chemical Innovation: Technologies to Make Processes and Products More Sustainable. The Importance of a Standard Definition and Metrics for Sustainability The literature and the results of our interviews and survey indicate that the lack of a standard definition for sustainable chemistry, combined with the lack of standard ways of measuring or assessing sustainability, hinder the development and adoption of more sustainable chemistry technologies. It is difficult for consumers, purchasers, policymakers, and even manufacturers to compare the sustainability of one process or product with another when such processes and products are assessed using different metrics that incorporate different factors. In addition, while there were sizable differences between the companies that responded to our survey with regard to which environmental and health factors they considered most important to prioritize, most agreed that it would be useful to have a standardized set of factors for assessing sustainability across their industry sector and (to a lesser degree) across the entire industry. Technologies Can Make Chemical Processes and Products More Sustainable There are many technologies available and in development that can improve chemical sustainability at each stage of the chemical life cycle. 
Our February 2018 report focused on three categories: catalysts, solvents, and continuous processing. Because each chemical process or product has unique requirements, there is no one-size-fits-all solution to sustainability concerns. Catalysts Catalysts are used to make chemical processes run faster or use less material. One common application is the catalytic converter in an automobile, where the catalyst converts pollutant gases in the exhaust into harmless chemicals. Without catalysts, many everyday items such as medicines, fibers, fuels, and paints could not be produced in sufficient quantities to meet demand. Unfortunately, the most common catalysts—including those used in automobile catalytic converters—are rare, nonrenewable metals such as platinum and palladium. Researchers are working to replace such metals with alternatives, including abundant metals (e.g., iron and nickel) and metal-free catalysts (such as biocatalysts) where possible. For example, in 2016, Newlight Technologies won a Presidential Green Chemistry Challenge Award for developing and commercializing a biocatalyst technology that captures methane (a potent greenhouse gas) and combines it with air to create a material that matches the performance of petroleum-based plastics at a lower cost. Several companies are now using this material to make a range of products, including packaging, cell phone cases, and furniture. Solvents Solvents are key components in chemical reactions. They are used to dissolve other substances so reactions can occur, to separate and purify chemicals, and to clean the equipment used in chemical processes, among other uses. Solvents constitute a large portion of the total volume of chemicals used in industrial chemical processes. However, many conventional solvents are considered hazardous, both to the environment and to human health. 
There are a variety of alternatives that can be used in some situations, including biobased solvents, less hazardous solvents such as water or ethanol, and solvent-free or reduced-solvent technologies. For example, biobased solvents called citrus terpenes, which are extracted from citrus peel waste, can be used as flavoring agents or fragrances in cleaning products. According to a representative from Florida Chemical, citrus terpenes may offer a low-toxicity alternative to the petroleum-based products traditionally used by the hydraulic fracturing industry, which faces concerns about contamination of source water and groundwater. However, the regionality and seasonality of the citrus supply can present a challenge to production. Continuous Processing Historically, industrial chemicals have been produced mainly using an approach known as batch processing, where the starting materials are combined in a closed vessel or vat and allowed to react, then transferred to the next vat for the next stage of processing while the first vat is cleaned, and the process is repeated with the next batch. This approach can use significant amounts of solvents for cleaning the vats between batches, consume considerable energy, result in potentially long wait times, and create safety risks. An alternative to batch processing is continuous processing, which allows chemical reactions to occur as the reaction mixture is pumped through a series of pipes or tubes where reactions take place continuously. This approach can improve product yield, product quality, reaction time, and process safety while reducing waste and costs. For example, researchers developed a process for manufacturing the active ingredient in medications including Benadryl® and Tylenol® PM using microreactors that minimized waste, reduced the number of purification steps, and reduced production times compared to traditional batch processing. 
Roles of the Federal Government and Other Stakeholders in Supporting the Development and Use of More Sustainable Chemical Processes and Products The federal government and other stakeholders play a number of roles, sometimes in collaboration, to advance the development and use of more sustainable chemical processes and products. Federal programs support research on the impacts of chemicals on human and environmental health, support the development of more sustainable chemical processes and their commercialization, and aid the expansion of markets for products manufactured with more sustainable chemicals and processes. Other stakeholders play similar roles and some additional roles that contribute to the development and use of more sustainable chemical processes and products. Federal Programs Support Research on the Impacts of Chemicals on Human and Environmental Health Federal programs conduct and fund basic research on the characteristics and biological effects of chemicals, which underpins the development and use of more sustainable chemistry products and processes. Decision makers must have a scientific understanding of the potential harmful impacts of exposure to chemicals in order to effectively minimize the harmful effects of chemicals through regulations and other means, and to assess the regulated community’s compliance with them. Industry needs this information to make informed decisions about the selection, design, and use of more sustainable chemicals in their products and processes, including their impact on workers. Federal programs fund and study the impacts of chemicals on human health and the environment, develop new methodologies for testing and predicting these effects, award grants for research on chemicals and new methodologies, identify more sustainable chemical alternatives, and evaluate the risks of chemicals. (See table 1.) 
Federal Programs Support the Development and Commercialization of More Sustainable Chemistry Technologies Federal programs also seek to support the development and facilitate the commercialization of new, more sustainable chemistry processes by conducting and funding basic and applied research to develop more sustainable processes and products; providing loan guarantees, grants, and technical assistance to researchers and companies; and recognizing innovative technologies through an award program, among other programs. (See table 2.) Federal Programs Aid Market Growth for Products Made with Sustainable Chemicals and Processes Federal programs also aid market growth for products made with sustainable chemicals and processes by informing consumers about these products and by facilitating their purchase by federal offices. It can be challenging for consumers seeking out more sustainably manufactured products to identify them or verify company claims. Federal programs can help companies that manufacture more sustainable products differentiate those products from less sustainable ones in order to reach these consumers. For example, federal programs conduct evaluations of the chemical content of products, manage product certification and labeling programs, provide information to consumers and federal purchasers on the chemical content of products, and develop purchasing and sustainability plans to support agency purchase and use of more sustainable products. EPA’s Safer Choice voluntary certification and labeling program helps consumers make informed purchasing decisions and incentivizes manufacturers to select more sustainable chemical alternatives so they can differentiate their products in the market. 
Industry, Academic Institutions, States, Companies, and Other Stakeholders Support More Sustainable Chemistry Other stakeholders—such as the chemical manufacturing industry, companies and retailers, state governments, academic institutions, and NGOs—also seek to influence the development and use of more sustainable chemistry processes and products through activities such as supporting workforce development and developing tools and resources for industry. These stakeholders may work on collaborative efforts, such as sustainability initiatives and developing industry-specific standards. The chemical industry conducts and supports research into more sustainable chemistry technologies and other activities. Companies and retailers, such as Kaiser Permanente and Target, create demand for more sustainable products from their suppliers by setting sustainability criteria for purchases. Academic institutions conduct research on the impacts of chemicals and sustainable chemistry technologies and train the next generation of chemists and engineers. States seek to protect public health by regulating chemicals in products. NGOs also play a diverse range of roles such as supporting workforce development, facilitating collaboration between other stakeholders, and developing tools and resources for industry. Strategic Implications in the Field of Sustainable Chemistry Sustainable chemistry is an emerging field within the chemical sciences that has the potential to inspire new products and processes, create jobs, and enhance benefits to human health and the environment. Stakeholders offered a range of potential options to realize the full potential of these technologies. However, there are a number of challenges to implementing more sustainable chemistry technologies, including technological, business, and industry-wide and sector-specific challenges. 
Opportunities The field of sustainable chemistry has the potential to inspire new products and processes, create jobs, and enhance benefits to human health and the environment. Stakeholders noted that much more work is needed to realize its full promise and offered a range of potential options to realize the full potential of these technologies, including the following: Breakthrough technologies in sustainable chemistry and a new conceptual framework could transform how the industry thinks about performance, function, and synthesis. An industry consortium, working in partnership with a key supporter at the federal level, could help make sustainable chemistry a priority and lead to an effective national initiative or strategy. Integrating sustainable chemistry principles into educational programs could bolster a new generation of chemists, encourage innovation, and advance achievement in the field. A national initiative that considers sustainable chemistry in a systematic manner could encourage collaborations among industry, academia, and the government, similar to the National Nanotechnology Initiative. There are opportunities for the federal government to address industry-wide challenges such as developing standard tools for assessment and a robust definition of sustainable chemistry. Federal agencies can also play a role in demonstrating, piloting, and de- risking some technology development efforts. Challenges Stakeholders noted that there are a number of challenges to implementing more sustainable chemistry technologies, including (1) technological and business challenges, (2) industry-wide and sector- specific challenges, and (3) challenges with coordination between stakeholders. One example of a technological challenge is the fact that alternatives to current solvent use can sometimes pose the same inherent toxicity and volatility risks as their conventional counterparts. Alternatives can also vary in supply and quality and can be expensive. 
Less toxic solvents, such as water, may require specialized equipment, greater energy input, or elevated pressure, and they can be difficult to scale up for industrial use. Companies told us they face many business challenges in implementing sustainable chemistry technologies, including the need to prioritize product performance; weigh sustainability trade-offs between various technologies; risk disruptions to the supply chain when switching to a more sustainable option; and consider regulatory challenges, among others. Stakeholders also noted the challenge of overturning proven conventional practices and acknowledged that existing capital investments in current technologies can create barriers for new companies to enter a field full of well-established players. Our survey and interviews also found that there are several industry-wide and sector-specific challenges to implementing more sustainable chemistry technologies, such as the lack of a standard definition for sustainable chemistry and lack of agreement on standard ways of measuring or assessing it. Without a standard definition that captures the full range of activities within sustainable chemistry, it is difficult to define the universe of relevant players. Without agreement on how to measure the sustainability of chemical processes and products, companies may be hesitant to invest in innovation they cannot effectively quantify, and end users are unable to make meaningful comparisons that allow them to select appropriate chemical products and processes. There is no mechanism for coordinating a standardized set of sustainability factors across the diverse range of stakeholders at present, despite the motivation of some specific sectors to do so. Moreover, although the federal government has worked with stakeholders through its research support, technical assistance, certification programs, and other efforts, there are still gaps in understanding. 
Many stakeholders told us that without such basic information as a standardized approach for assessing the sustainability of chemical processes and products, better information on product content throughout the supply chain, and more complete data on the health and environmental impacts of chemicals throughout their life cycle, they cannot make informed decisions that compare the sustainability of various products. Sector-specific challenges exist as well. For example, pharmaceutical sector representatives told us that changing the manufacturing process for an already marketed drug triggers a new FDA review, which can result in delays and additional costs—thus discouraging innovation that could make their chemical processes more sustainable. In conclusion, according to stakeholders, transitioning toward the use of more sustainable chemistry technologies requires that industry, government, and other stakeholders work together. As they and others noted, there is a need for new processes that make more efficient use of the resources that are available, reuse products or their components during manufacturing, and account for impacts across the entire life cycle of chemical processes and products. Furthermore, they highlight the importance of disseminating environmental and health-related information to help guide the choices of consumers, chemists, workers, downstream users, and investors to facilitate further progress. They also indicated that momentum in this field will require national leadership in order to realize the full potential of sustainable chemistry technologies. Chairwoman Stevens, Ranking Member Baird, and Members of the Committee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. GAO Contact and Staff Acknowledgments If you or your staff have any questions about this testimony, please contact me at 202-512-6412 or personst@gao.gov. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony include Karen Howard (Assistant Director), Diane Raynes (Assistant Director), Katrina Pekar-Carpenter (Analyst-in-Charge), Patrick Harner, Summer Lingard-Smith, Krista Mantsch, Anika McMillon, Rebecca Parkhurst, and Ben Shouse. Other staff who made key contributions to the report cited in the testimony are identified in that report. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study Chemistry contributes to virtually every aspect of modern life, and the chemical industry supports nearly 26 percent of the gross domestic product of the United States. While these are positive contributions, chemical processes and production can have negative health and environmental consequences. Mitigating these potential consequences requires thoughtful design and evaluation of the life cycle effects of chemical processes and products. This testimony—based on a 2018 technology assessment, GAO-18-307 —discusses (1) how stakeholders define and assess the sustainability of chemical processes and products, (2) available or developing technologies to make chemical processes and products more sustainable, (3) the roles of the federal government and others in supporting the development and use of more sustainable chemical processes and products, and (4) opportunities and challenges in the field of sustainable chemistry. For the 2018 report, GAO selected for assessment three technology categories—catalysts, solvents, and continuous processing; interviewed stakeholders from various fields, such as government, industry, and academia; convened a meeting of experts on sustainable chemistry technologies and approaches; and surveyed a non-generalizable sample of chemical companies. What GAO Found Stakeholders vary in how they define and assess the sustainability of chemical processes and products; these differences hinder the development and adoption of more sustainable chemistry technologies. 
However, based on a review of the literature and stakeholder interviews, GAO identified several common themes underlying what sustainable chemistry strives to achieve, including: improve the efficiency with which natural resources are used to meet human needs for chemical products while avoiding environmental harm; reduce or eliminate the use or generation of hazardous substances; minimize the use of non-renewable resources; and consider all life cycle stages when evaluating a product (see figure). There are many technologies available and in development that can improve chemical sustainability at each stage of the chemical life cycle. GAO identified three categories of more sustainable chemistry technologies—catalysts, solvents, and continuous processing. Catalysts are used to make chemical processes run faster or use less material. Without catalysts, many everyday items such as medicines, fibers, fuels, and paints could not be produced in sufficient quantities to meet demand. However, the most common catalysts—including those used in automobile catalytic converters—are rare, nonrenewable metals such as platinum and palladium. Researchers are working to replace such metals with alternatives, including abundant metals (e.g., iron and nickel) where possible. Solvents are used to dissolve other substances so reactions can occur, to separate and purify chemicals, and to clean the equipment used in chemical processes, among other uses. Solvents constitute a large portion of the total volume of chemicals used in industrial chemical processes. However, many conventional solvents are considered hazardous. There are a variety of alternatives that can be used in some situations, including biobased solvents. An alternative to traditional batch processing is continuous processing, which allows chemical reactions to occur as the reaction mixture is pumped through a series of pipes or tubes where reactions take place continuously. 
Compared to batch processing, this approach can improve product yield, product quality, and process safety while reducing waste and costs. The federal government and other stakeholders play several roles, sometimes in collaboration, to advance the development and use of more sustainable chemistry technologies. The federal government supports research, provides technical assistance, and offers certification programs, while other stakeholders conduct research, develop industry-specific standards, support workforce development, and address chemicals of concern in consumer products, among other roles. Strategic Implications While using more sustainable options entails challenges, including technological, business, and industry-wide and sector-specific challenges, the field of sustainable chemistry has the potential to inspire new products and processes, create jobs, and enhance benefits to human health and the environment. Stakeholders identified strategic implications of sustainable chemistry and offered a range of potential options to realize the full potential of these technologies, including the following: Breakthrough technologies in sustainable chemistry and a new conceptual framework could transform how the industry thinks about performance, function, and synthesis. An industry consortium, working in partnership with a key supporter at the federal level, could help make sustainable chemistry a priority and lead to an effective national initiative or strategy. Integrating sustainable chemistry principles into educational programs could bolster a new generation of chemists, encourage innovation, and advance achievement in the field. A national initiative that considers sustainable chemistry in a systematic manner could encourage collaborations among industry, academia, and the government, similar to the National Nanotechnology Initiative. 
There are opportunities for the federal government to address industry-wide challenges such as developing standard tools for assessment and a robust definition of sustainable chemistry. Federal agencies can also play a role in demonstrating, piloting, and de-risking some technology development efforts. According to stakeholders, transitioning toward the use of more sustainable chemistry technologies will require national leadership and industry, government, and other stakeholders to work together.
Background Medicare and Medicaid FFS are federal health care programs, though there are certain distinctions between the programs’ coverage and financing. Medicare coverage policies are generally established at the national level, and the program directly pays providers for services rendered. Medicaid is a federal-state program, and states are provided flexibility to design their coverage policies. State Medicaid agencies pay providers for services rendered, and the federal government and states share in the financing of the program, with the federal government matching most state expenditures. Estimating Improper Payments in Medicare and Medicaid The Improper Payments Information Act of 2002 (IPIA), as amended, requires federal executive branch agencies to report a statistically valid estimate of the annual amount of improper payments for programs identified as susceptible to significant improper payments. To accomplish this, agencies follow guidance for estimating improper payments issued by OMB. According to the HHS-OIG, which conducts annual compliance reviews and regularly reviews the estimation methodology for both the Medicare FFS and Medicaid improper payment measurement programs, the methodology for both programs’ estimates comply with federal improper payment requirements. To estimate improper payments in Medicare and Medicaid FFS, respectively, CMS’s CERT and PERM contractors randomly sample and manually review medical record documentation associated with FFS claims for payment from providers, also known as medical reviews. The CERT and PERM programs project the improper payments identified in the sample to all FFS claims to estimate improper payment amounts and rates for the programs nationally for a given fiscal year. For Medicare, the CERT contractor conducted medical reviews on about 50,000 Medicare claims in fiscal year 2017. 
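The sampling-and-projection approach just described can be sketched in simplified form. This is an illustration only: the actual CERT and PERM methodologies use stratified statistical designs that comply with OMB guidance, and every figure below is invented.

```python
import random

random.seed(7)

# Simplified sketch of projecting sampled improper payments to a program-wide
# estimate, loosely analogous to how CERT/PERM generalize from reviewed claims.
# Invented data: a population of claims as (paid_amount, improper_amount),
# where reviews find roughly 8% of claims fully improper for simplicity.
population = [(random.uniform(50, 5000), 0.0) for _ in range(100_000)]
population = [
    (paid, paid if random.random() < 0.08 else 0.0) for paid, _ in population
]

# Claims randomly selected for medical review.
sample = random.sample(population, 5_000)
sample_paid = sum(paid for paid, _ in sample)
sample_improper = sum(imp for _, imp in sample)

# Ratio estimator: the sample's improper-dollar share, projected onto
# total program outlays.
improper_rate = sample_improper / sample_paid
total_outlays = sum(paid for paid, _ in population)
estimated_improper = improper_rate * total_outlays

print(f"Estimated improper payment rate: {improper_rate:.1%}")
print(f"Estimated improper payments: ${estimated_improper:,.0f}")
```

The same projection logic, applied within individual service categories or states, is what allows the programs to report service-specific and state-level rates in addition to the national estimate.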
For Medicaid, the PERM contractor conducted medical reviews on nearly 31,000 Medicaid claims across fiscal years 2015, 2016, and 2017 to estimate fiscal year 2017 improper payments. Although IPIA, as amended, only requires agencies to develop one improper payment estimate for each identified program, both the CERT and PERM programs also estimate national service-specific improper payment amounts and rates to identify services at high risk for improper payment. Additionally, the PERM program estimates state-level improper payment rates based on the amounts of improper payments identified through medical reviews in each state. The CERT and PERM contractors conduct medical reviews to determine whether claims were paid or denied properly in accordance with program coverage policies—including coverage policies based on statutes, regulations, other CMS coverage rules, and each state’s coverage policies in the case of Medicaid. To perform medical reviews, trained clinicians review documentation—such as progress notes, plans of care, certificates of medical necessity, and physician orders for services—to ensure that claims meet program coverage policies. In general, Medicare and Medicaid documentation requirements define the documentation needed to ensure that services are medically necessary and demonstrate compliance with program coverage policies. For example, Medicare home health services must be supported by documentation demonstrating compliance with the coverage policy that beneficiaries be homebound, among other requirements. Certain coverage policies and documentation requirements were implemented to help reduce the potential for fraud, waste, and abuse. For example, Medicare implemented a requirement that DME providers maintain documentation demonstrating proof of item delivery, to better ensure program integrity. (Figure 1 presents an example of a progress note to support the medical necessity of Medicare home health services. See App. 
III for additional examples of provider documentation). The CERT and PERM contractors classify improper payments identified through medical review by the type of payment error. Two types of errors are related to documentation—no documentation and insufficient documentation. No documentation: Improper payments in which providers fail to submit requested documentation or respond that they do not have the requested documentation. Insufficient documentation: Improper payments in which providers submit documentation that is insufficient to determine whether a claim was proper, such as when there is insufficient documentation to determine if services were medically necessary, or when a specific, required documentation element, such as a signature, is missing. In fiscal year 2017, insufficient documentation comprised the majority of estimated FFS improper payments in both Medicare and Medicaid, with 64 percent of Medicare and 57 percent of Medicaid medical review improper payments. Improper payments stemming from insufficient documentation in Medicare FFS increased substantially starting in 2009, while insufficient documentation in Medicaid has remained relatively stable since 2011 (see Fig. 2). CMS has attributed the increase in Medicare insufficient documentation since 2009 in part to changes made in CERT review criteria. Prior to 2009, CERT medical reviewers used “clinical inference” to determine that claims were proper even when specific documentation was missing if, based on other documentation and beneficiary claim histories, the reviewers could reasonably infer that the services were provided and medically necessary. Beginning with CMS’s fiscal year 2009 CERT report, in response to 2008 HHS-OIG recommendations, CMS revised the criteria for CERT medical reviews to no longer allow clinical inference and the use of claim histories as a source of review information. 
More recent policy changes that added to Medicare documentation requirements may have also contributed to the increase in insufficient documentation in Medicare FFS. CMS’s Medicare and Medicaid Contractors Make Multiple Attempts to Contact Providers to Obtain Documentation to Estimate Improper Payments Medicare’s CERT and Medicaid’s PERM contractors make multiple attempts to contact providers to request medical record documentation for medical reviews, and review all documentation until they must finalize the FFS improper payment estimate. The CERT and PERM contractors allow providers 75 days to submit documentation, though providers can generally submit late documentation up to the date each program must finalize its improper payment estimate, known as the cut-off date (See Fig. 3.). Both programs also contact providers to subsequently request additional documentation if the initial documentation submitted by the providers does not meet program requirements. Initial documentation request: The CERT and PERM contractors make initial requests for documentation by sending a letter and calling the provider. After the initial provider request, if there is no response, the contractors contact the provider at least three additional times to remind them to submit the required documentation. If there is no response, the claim is determined to be improper due to no documentation. Claims are also classified as improper due to no documentation when the provider responds but cannot produce the documentation, such as providers that do not have the beneficiary’s documentation or records for the date of service, among other reasons (see Table 1). For referred services, such as home health, DME, and laboratory services, the CERT contractor also conducts outreach to referring physicians to request documentation. 
For example, for a laboratory claim, the CERT contractor may contact the physician who ordered the laboratory test to request associated documentation, such as progress notes. Conversely, the PERM contractor told us they generally do not contact referring physicians to request documentation. Subsequent documentation request: If a provider initially submits documentation that is insufficient to support a claim, then the CERT and PERM contractors subsequently request additional documentation. In fiscal year 2017, of the 50,000 claims in the CERT sample, the contractor requested additional documentation from 22,815 providers. Providers did not submit additional documentation to sufficiently support 56 percent of the associated claims. For the 3 years that comprise the 2017 Medicaid improper payment rate, of the nearly 31,000 claims in the PERM sample, the contractor requested additional documentation for 5,448, and providers did not submit additional documentation to sufficiently support about 8 percent of the 5,448 claims. In addition to having similar outreach to providers for obtaining documentation, the CERT and PERM contractors also have processes to refer suspected fraud to the appropriate program integrity entity, to ensure the accuracy of medical reviews, and to allow providers to dispute improper payment determinations. Suspected fraud: When CERT and PERM contractors identify claims with evidence of suspected fraud, they are required to refer the claims to other program integrity entities that are responsible for investigating suspected fraud. CERT and PERM contractor officials said that in 2017, the CERT contractor referred 35 claims, and the PERM contractor did not make any referrals. Interrater reliability (IRR) reviews: As a part of their medical review processes, both the CERT and PERM contractors conduct IRR reviews, where two reviewers conduct medical reviews on the same claim and compare their medical review determinations. 
These IRR reviews help ensure the consistency of medical review determinations, and the contractors have processes for resolving differences identified through the IRR reviews. CMS staff said that they also review a sample of the CERT and PERM contractors’ payment determinations to ensure their accuracy. CERT: The contractor performs IRR reviews for at least 300 claims each month, including claims with and without improper payment determinations. PERM: The contractor conducts IRR reviews of all improper payment determinations, except improper payments due to no documentation, and 10 percent of all correctly paid claims in the sample, which together totaled about 3,600 claims for the fiscal year 2017 national improper payment rate. Disputing improper payment determinations: Both the CERT and PERM programs have processes in place for disputing the CERT or PERM contractor’s improper payment determinations. These processes involve reviewing the claim, including any newly submitted documentation, and may result in upholding or overturning the initial improper payment determination. Improper payment determinations that are overturned prior to the CERT and PERM contractors’ cut-off dates are no longer considered improper, and estimated improper payment amounts and rates are adjusted appropriately. CERT: Medicare Administrative Contractors, which process and pay claims, may dispute the CERT contractor’s improper payment determinations first with the CERT contractors and then, if desired, with CMS. Additionally, Medicare providers can appeal the CERT contractor’s improper payment determinations through the Medicare appeals process. PERM: State Medicaid officials may dispute the PERM contractor’s improper payment determinations first with the PERM contractor and then, if desired, with CMS. Providers are not directly involved in this process; instead, providers can contact the state to appeal the improper payment determination. 
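As a simplified illustration of the IRR concept, percent agreement between two reviewers scoring the same claims can be computed as follows. The claim identifiers and determinations here are invented, and CMS's actual comparison and resolution procedures are more involved than this sketch.

```python
# Illustrative sketch of an interrater reliability (IRR) check: two reviewers
# independently review the same claims, and claims where their determinations
# differ are flagged for resolution. All data below are invented.

reviewer_1 = {"claim_01": "proper", "claim_02": "improper", "claim_03": "proper",
              "claim_04": "improper", "claim_05": "proper"}
reviewer_2 = {"claim_01": "proper", "claim_02": "improper", "claim_03": "improper",
              "claim_04": "improper", "claim_05": "proper"}

def percent_agreement(a, b):
    """Share of commonly reviewed claims with the same determination."""
    shared = a.keys() & b.keys()
    matches = sum(1 for claim in shared if a[claim] == b[claim])
    return matches / len(shared)

# Claims where the two reviewers disagree would go through a resolution process.
disagreements = [c for c in reviewer_1 if reviewer_1[c] != reviewer_2[c]]
print(f"Agreement: {percent_agreement(reviewer_1, reviewer_2):.0%}")
print("Flagged for resolution:", disagreements)
```

In practice, more robust agreement statistics (such as chance-corrected measures) can also be used, but the core idea is the same: systematic disagreement between reviewers signals inconsistency in how review criteria are being applied.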
Differing Medicare and Medicaid Documentation Requirements May Result in Inconsistent Assessments of Program Risks Differences in Documentation Requirements for Medicare and Medicaid May Result in Differing Improper Payment Rates and Assessments of Program Risks We found that Medicare, relative to Medicaid, had a higher estimated FFS improper payment rate in fiscal year 2017, primarily due to insufficient documentation. According to CMS data, across all services in fiscal year 2017, the rate of insufficient documentation was 6.1 percent for Medicare and 1.3 percent for Medicaid; this gap was substantially greater than that for all other types of errors combined, which were 3.4 and 1.0 percent, respectively. For home health, DME, and laboratory services, the insufficient documentation rate was at least 27 percentage points greater for Medicare than for Medicaid, and for hospice services, the rate was 9 percentage points greater (see Fig. 4). Differences between Medicare and Medicaid coverage policies and documentation requirements likely contributed to the substantial variation in the programs’ insufficient documentation rates for the services we examined. Among the services we examined, there are four notable differences in coverage policy and documentation requirements that likely affected how the programs conducted medical reviews: face-to-face examinations; prior authorization; signature requirements; and documentation from referring physicians for referred services, as discussed below. Face-to-face examinations. In part to better ensure program integrity, the Patient Protection and Affordable Care Act established a requirement for referring physicians to conduct a face-to-face examination of beneficiaries as a condition of payment for certain Medicare and Medicaid services. States were still in the process of implementing the policies for Medicaid in fiscal year 2017. 
(Sidebar, examples of insufficient documentation in Medicare hospice services: documentation did not include narrative information that sufficiently supported that the beneficiary had a life expectancy of less than 6 months; did not include the certification date span; or did not include documentation supporting that the referring physician conducted an examination when certifying the medical necessity of the service.) Hospice providers must submit documentation of a face-to-face examination when recertifying the medical necessity of hospice services for beneficiaries who receive care beyond 6 months after their date of admission. CMS officials told us that documentation requirements for the face-to-face examination policy for home health services in particular led to an increase in insufficient documentation. When initially implemented in April 2011, home health providers had to submit separate documentation from the referring physician detailing the examination and the need for home health services. Beginning January 2015, CMS changed the requirement to allow home health providers to instead use documentation from the referring physician, such as progress notes, to support the examinations. CMS and several stakeholders attributed recent decreases in the home health improper payment rate to the amended documentation requirement (see fig. 5). The face-to-face examination requirement took effect for home health and DME services in Medicaid in 2016; however, the requirement likely did not apply to many claims subject to fiscal year 2017 PERM medical reviews. Medicaid does not have a face-to-face policy for hospice services, and most states we interviewed did not have such policies. (Sidebar, example of insufficient documentation in Medicaid home health: documentation from the home health agency did not apply to the sampled day of care associated with the claim.) Prior authorization. Medicare does not have the same broad authority as state Medicaid agencies to implement prior authorization, which can be used to review documentation and verify the need for coverage prior to services being rendered.
State Medicaid agencies we spoke with credit prior authorization with preventing improper payments before they are made. CMS has used prior authorization in Medicare for certain services through temporary demonstration projects and models, as well as one permanent program. In April 2018, we found that savings from a series of Medicare temporary demonstrations and models that began in 2012 could be as high as about $1.1 billion to $1.9 billion as of March 2017. We recommended that CMS take steps, based on its evaluations of the demonstrations, to continue prior authorization. All six of our selected states use prior authorization in Medicaid for at least one of the four services we examined. In particular, all six selected states require prior authorization for DME, and five require prior authorization for home health. Officials from several states noted that they often apply prior authorization to services at high risk for improper payments, and most told us that prior authorization screens out potential improper payments before services are rendered. We did not evaluate the effectiveness of states' use of prior authorization or review the documentation required by states for prior authorization. (See fig. 6 for an example state Medicaid prior authorization form.) Physician signatures: While both Medicare and state Medicaid agencies require signatures on provider documents to ensure their validity, Medicare has detailed standards for what constitutes a valid signature in a variety of situations. For example, illegible signatures and initials on their own are generally invalid, though they are valid when over a printed name. (Sidebar, examples of insufficient documentation in Medicare: documentation from the referring physician did not support the medical necessity for the specific type of catheter ordered; and, for laboratory services, documentation from the referring physician did not support the order or an intent to order the billed laboratory tests.)
In Medicaid, PERM contractor staff told us that state agencies generally have not set detailed standards for valid signatures, and that reviewers generally rely on their judgment to assess signature validity. (Sidebar, example of insufficient documentation in Medicare laboratory services: documentation from the referring physician did not support that the beneficiary currently has diabetes for a billed laboratory test for the management and control of diabetes.) Documentation for referred services. Medicare requires documentation from referring physicians to support the medical necessity of the referred services that we examined (home health, DME, and laboratory services), but Medicaid generally does not require such documentation. Specifically, Medicare generally requires documentation from the referring physician, such as progress notes, to support the medical necessity of referred services. CMS officials told us that Medicare requires such documentation from referring physicians to ensure that medical necessity determinations are independent of the financial incentive to provide the referred service, particularly as certain referred services are at high risk for fraud, waste, and abuse. (See sidebar for examples of insufficient documentation in Medicare home health, DME, and laboratory services.) In Medicaid, documentation requirements to support the medical necessity of referred services are primarily established by states, and states generally do not require documentation, such as progress notes from referring physicians, to support medical necessity. Further, PERM contractor staff told us that they generally do not review such documentation when conducting medical reviews of claims for referred services. Officials from CMS, the CERT contractor, and provider associations told us that Medicare's documentation requirements for referred services make it challenging for providers of referred services to submit sufficient documentation, since they depend on referring physician documentation to support medical necessity.
Some officials further stated that referring physicians may lack incentive to ensure the sufficiency of such documentation, as they do not experience financial repercussions when payments for referred services are determined to be improper. Officials told us that:

It is generally not standard administrative practice for laboratories or DME providers to obtain referring physician documentation, and referring physicians may not submit it when the referred services are subject to medical review. For example, laboratories generally render services based solely on physician orders for specific tests, and generally do not obtain the associated physician medical records.

Referring physicians may not document their medical records in a way that meets Medicare documentation requirements to support the medical necessity of referred services. Officials from a physician organization told us that physicians refer beneficiaries for a broad array of services, and face challenges documenting their medical records to comply with Medicare documentation requirements for various referred services.

We previously reported on CMS provider education efforts and recommended that CMS take steps to focus education on services at high risk for improper payments and to better educate referring physicians on documentation requirements for DME and home health services. CMS agreed with and has fully addressed our recommendation. Medicare and Medicaid pay for many of the same services, to some of the same providers, and likely face many of the same underlying program risks. However, because of differences in documentation requirements between the two programs, the same documentation for the same service can be sufficient in one program but not the other.
The substantial variation in the programs' improper payment rates raises questions about how well their documentation requirements help in determining whether services comply with program coverage policies, and accordingly help identify causes of program risks. CMS officials attributed any differences in the two programs' documentation requirements to the role played by the states in establishing such requirements under Medicaid, and told us that they have not assessed the implications of how differing requirements between the programs may lead to differing assessments of the programs' risks. This lack of assessment is inconsistent with federal internal control standards, which require agencies to identify, analyze, and respond to program risks. CMS relies on improper payment estimates to help develop strategies to reduce improper payments, such as informing Medicare's use of routine medical reviews, educational outreach to providers, and efforts to address fraud. Without a better understanding of how documentation requirements affect estimates of improper payments, CMS may not have the information it needs to effectively identify and analyze program risks and develop strategies to protect the integrity of the Medicare and Medicaid programs.

CMS Has Ongoing Efforts to Examine Insufficient Documentation in Medicare and Revise Documentation Requirements

CMS's Patients over Paperwork initiative is an ongoing effort to simplify provider processes for complying with Medicare FFS requirements, including documentation requirements. Although CMS officials said this initiative is intended to help providers meet documentation requirements in both Medicare and Medicaid, current efforts only address Medicare documentation requirements. As part of the initiative, CMS solicited comments from stakeholders through proposed rulemaking on documentation requirements that often lead to insufficient documentation, and CMS officials stated that they have met with provider associations to obtain feedback.
The initiative is generally focused on reviewing documentation requirements the agency has the authority to easily update, namely requirements that are based on CMS coverage rules, as opposed to requirements based on statute. Through this initiative, CMS has clarified and amended several Medicare documentation requirements. For example, CMS clarified Medicare documentation requirements for DME providers to support proof of item delivery. As part of another initiative to examine insufficient documentation in Medicare, the CERT contractor classified whether improper payments due to insufficient documentation identified in its fiscal year 2018 medical reviews were clerical in nature, meaning the documentation supported that the service was covered and necessary, had been rendered, and was paid correctly, but did not comply with all Medicare documentation requirements. CMS found that 3 percent of improper payments due to insufficient documentation were clerical in nature in fiscal year 2018. Such errors would not have resulted in an improper payment determination if the documentation had been corrected. For example, such clerical errors may involve missing documentation elements that may be found elsewhere within the medical records. According to CMS officials, the information gathered on clerical errors may inform efforts to simplify documentation requirements. Specifically, CMS plans to use this information to help identify requirements that may not be needed to demonstrate medical necessity or compliance with coverage policies. CMS said that it does not plan to engage in similar efforts to examine insufficient documentation errors in Medicaid because of challenges associated with variations in state Medicaid documentation requirements and the additional burden it would place on states.
Medicaid Medical Reviews May Not Provide Actionable Information for States, and Other Practices May Compromise Fraud Investigations

Medicaid Medical Reviews Do Not Provide Robust State-Specific Information; Resulting Corrective Actions May Not Address the Most Prevalent Causes of Improper Payments

On a national basis, CMS's PERM program generates statistically valid improper payment estimates for the Medicaid FFS program. At the state level, however, CMS officials told us that the PERM contractor's medical reviews do not generate statistically generalizable information about improper payments by service type and, as a result, they do not provide robust state-specific information on the corrective actions needed to address the underlying causes of improper payments. According to CMS, the number of improper payments identified through medical reviews is too small to generate robust state-specific results. In fiscal year 2017, the PERM contractor identified 918 improper payments nationwide out of nearly 31,000 claims subjected to medical reviews. More than half of all states had 10 or fewer improper payments identified through medical reviews in fiscal year 2017; together, these accounted for about 7 percent of total sample improper payments identified through medical reviews (see table 2). According to CMS officials, estimating improper payments for specific service types within each state with the same precision as the national estimate would involve substantially expanding the number of medical reviews conducted and commensurately increasing PERM program costs. CMS officials also estimated federal spending on PERM Medicaid FFS medical reviews at about $8 million each year, which does not include state costs, the federal share of those state costs, or providers' costs.
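The precision problem CMS officials describe can be illustrated with a standard margin-of-error calculation for an estimated proportion. This is a simplified sketch that treats the PERM sample as a simple random sample and ignores its actual stratified, dollar-weighted design; the figures are the national totals cited in this report, and the even 50-state split is an assumption for illustration only.

```python
import math

def margin_of_error(p, n, z=1.96):
    """95 percent margin of error for a simple random sample proportion."""
    return z * math.sqrt(p * (1 - p) / n)

# Roughly 918 improper payments among ~31,000 reviewed claims nationally.
p = 918 / 31_000
national = margin_of_error(p, 31_000)

# A typical state's share of the national sample, assuming an even split.
state = margin_of_error(p, 31_000 // 50)

print(f"national: +/-{national:.2%}, per state: +/-{state:.2%}")
# The per-state margin is about seven times wider (sqrt(50) ~ 7.1), so
# service-level estimates within a state are far less precise at
# current sample sizes.
```

Because the margin of error shrinks only with the square root of the sample size, matching national precision within each state would require roughly a fifty-fold expansion of reviews per state, which is consistent with CMS officials' point about costs.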
Of our six selected states, officials from one state said that data on service-specific improper payment rates at the state level would be useful, though officials had reservations about increasing sample sizes because of the resources involved in doing so. CMS requires state Medicaid agencies to develop corrective actions to rectify each improper payment identified. However, since the Medicaid review sample in a state typically is not large enough to be statistically generalizable by service type, the identified improper payments may not be representative of the prevalence of improper payments associated with different services within the state. Accordingly, corrective actions designed to rectify specific individual improper payments may not address the most prevalent underlying causes of improper payments. For example, state Medicaid officials in four of our six states said that most improper payments identified through PERM medical reviews are unique one-time events. Federal internal control standards require agencies to identify and analyze program risks so they can effectively respond to such risks, and OMB expects agencies to implement corrective actions that address underlying causes of improper payments. Without estimates that provide information on the most prevalent underlying causes of improper payments within a state, particularly by service type, a state Medicaid agency may not be able to develop appropriate corrective actions or prioritize activities to effectively address program risks. Corrective actions that do not address the underlying causes of improper payments are unlikely to be an effective use of state resources. Increasing PERM sample sizes is one approach that could improve the usefulness of the medical reviews for states, but other options also exist. For example, PERM findings could be augmented with data from other sources, such as findings from other CMS program integrity efforts, state auditors, and HHS-OIG reports.
States conduct their own program integrity efforts, including medical reviews, to identify improper payments, and state Medicaid officials we spoke with in four of our six selected states said that they largely rely on such efforts to identify program risks. One state's Medicaid officials said that state-led audits allow them to more effectively identify, and subsequently monitor, services that are at risk for improper payments in the state. CMS also could use data from other sources on state-specific program risks to help design states' PERM samples. These options could help CMS and the states better identify the most prevalent causes of improper payments and more effectively focus corrective actions and program integrity strategies to address program risks.

CMS Policy May Limit State Identification of Medicaid Providers Under Fraud Investigation

State Medicaid agencies may, but are not required to, determine whether providers included in the PERM sample are under fraud investigation and notify the PERM contractor. Under CMS policy, when a state notifies the PERM contractor of a provider under investigation, the contractor will end all contact with the provider to avoid compromising the fraud investigation, and the claim will be determined to be improper, due to no documentation. In fiscal year 2017, of the 328 Medicaid improper payments due to no documentation, 27 (8 percent) from five states, according to CMS, were because the provider was under fraud investigation. If a state Medicaid agency does not notify the PERM contractor about providers under fraud investigation, the PERM contractor will conduct its medical review, which involves contacting the provider to obtain documentation as a part of its normal process, and communicate with the provider about improper payment determinations.
Contacting providers that are under fraud investigation as part of PERM reviews could interfere with an ongoing investigation in ways we identified based on information from the Association of Certified Fraud Examiners and others:

The contact by the PERM contractor to request documentation, although unrelated to the fraud investigation, may give the impression that the provider is under heightened scrutiny. This could prompt the provider to change its behavior, or to destroy, falsify, or create evidence. These actions could in turn disrupt or complicate law enforcement efforts to build a criminal or civil case.

The PERM contractor's communication about improper payment determinations may prompt states to conduct educational outreach to the provider about proper billing procedures. This may inadvertently change the billing practices of a fraudulent provider for whom law enforcement is trying to establish a pattern of behavior.

We found that states may not have processes to determine whether providers included in the PERM sample are under fraud investigation. Of the six states we spoke with, officials from two states said they did not have a mechanism in place to identify providers under fraud investigation. However, based on our analysis of information from the Association of Certified Fraud Examiners and others, it is a best practice for investigative and review entities to communicate and coordinate with one another to determine whether multiple entities are reviewing the same provider, and for investigators to work discreetly without disrupting the normal course of business. Accordingly, investigators should be aware of other government entities that are in contact with providers under investigation, such as the PERM contractor, which may contact providers multiple times to request documentation and refer identified improper payments for recovery.
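The coordination check described above amounts to a simple cross-reference between the PERM sample and a state's open fraud investigations. The sketch below is a minimal illustration of that idea; the data structures and field names are hypothetical, not an actual state system or CMS interface.

```python
def providers_to_flag(perm_sample, under_investigation):
    """Return provider IDs that appear in the PERM sample and also have
    open fraud investigations, so the state can decide whether to notify
    the PERM contractor before the contractor contacts those providers."""
    sampled_providers = {claim["provider_id"] for claim in perm_sample}
    return sorted(sampled_providers & under_investigation)

# Hypothetical sampled claims and open investigations.
perm_sample = [
    {"claim_id": "C1", "provider_id": "P10"},
    {"claim_id": "C2", "provider_id": "P11"},
    {"claim_id": "C3", "provider_id": "P10"},
]
under_investigation = {"P10", "P99"}
print(providers_to_flag(perm_sample, under_investigation))  # ['P10']
```

A set intersection keeps the check cheap even for large samples, which matters because the PERM contractor may contact a sampled provider multiple times during a review cycle.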
If multiple entities are reviewing the same provider, one entity may be directed to pause or cease its activities, such as a PERM medical review, to reduce the risk of compromising an active fraud investigation. CMS has stated that it is not the agency's intention to negatively impact states' provider fraud investigations and, therefore, it has provided states the option to notify the PERM contractor of any providers under investigation to avoid compromising investigations. However, CMS does not require states to determine whether providers under PERM medical reviews are also under fraud investigation, which creates the potential that PERM reviews could interfere with ongoing investigations. State Medicaid agencies may not have incentives to notify the PERM contractor of providers under fraud investigation, as doing so will automatically result in a no documentation error, which increases states' improper payment rates. Medicaid officials from one state we spoke with said that while they check whether providers subject to PERM reviews are under investigation for fraud, they do not report these instances to the PERM contractor because the PERM contractor would find a no documentation error and the claim would be cited as improper, increasing the state's improper payment rate. Officials from another state said this policy penalizes states, in the form of higher state-level improper payment rates that may reflect poorly on states. Additionally, officials from this state were reluctant to develop corrective actions for improper payments stemming from such no documentation errors.

Conclusions

CMS and states need information about the underlying causes of improper payments to develop corrective actions that will effectively prevent or reduce future improper payments in Medicare and Medicaid FFS.
The substantial variation in Medicare and Medicaid estimated improper payment rates for the services we examined raises questions about how well the programs' documentation requirements ensure that services were rendered in accordance with program coverage policies. While our study focused on certain services with high rates of insufficient documentation, differences in documentation requirements between the programs may apply to other services as well. Without examining how the programs' differing documentation requirements affect their improper payment rates, CMS's ability to better identify and address FFS program risks and design strategies to assist providers with meeting requirements may be hindered. At the state level, PERM medical reviews do not provide robust information to individual states. CMS's requirements to address individual improper payments may lead states to take corrective actions that may not fully address underlying causes of improper payments identified through medical review, and may misdirect state efforts to reduce improper payments. Absent a more comprehensive review of existing sources of information on the underlying causes of Medicaid improper payments, CMS and states are missing an opportunity to improve their ability to address program risks. In addition, the lack of a requirement for state Medicaid agencies to determine whether providers whose claims are selected for PERM medical reviews are also under fraud investigation risks compromising ongoing investigations. Further, citing such claims as improper payments in states' estimated improper payment rates may discourage state Medicaid agencies from notifying the PERM contractor that a provider is under investigation.
Recommendations

We are making the following four recommendations to CMS:

The Administrator of CMS should institute a process to routinely assess, and take steps to ensure, as appropriate, that Medicare and Medicaid documentation requirements are necessary and effective at demonstrating compliance with coverage policies while appropriately addressing program risks. (Recommendation 1)

The Administrator of CMS should take steps to ensure that Medicaid medical reviews provide robust information about and result in corrective actions that effectively address the underlying causes of improper payments. Such steps could include adjusting the sampling approach to reflect state-specific program risks, and working with state Medicaid agencies to leverage other sources of information, such as state auditor and HHS-OIG findings. (Recommendation 2)

The Administrator of CMS should take steps to minimize the potential for PERM medical reviews to compromise fraud investigations, such as by directing states to determine whether providers selected for PERM medical reviews are also under fraud investigation and to assess whether such reviews could compromise investigations. (Recommendation 3)

The Administrator of CMS should address disincentives for state Medicaid agencies to notify the PERM contractor of providers under fraud investigation. This could include educating state officials about the benefits of reporting providers under fraud investigation, and taking actions such as revising how claims from providers under fraud investigation are accounted for in state-specific FFS improper payment rates, or the need for corrective actions in such cases. (Recommendation 4)

Agency Comments and Our Evaluation

We provided a draft of this report to HHS for comment, and its comments are reprinted in appendix I. HHS also provided us with technical comments, which we incorporated in the report as appropriate.
HHS concurred with our first recommendation that CMS institute a process to routinely assess and ensure that Medicare and Medicaid documentation requirements are necessary and effective. HHS stated that CMS’s Patients over Paperwork initiative is focused on simplifying Medicare documentation requirements and noted that for the Medicaid program, CMS will identify and share documentation best practices with state Medicaid agencies. CMS’s Patients over Paperwork initiative may help CMS streamline Medicare documentation requirements. However, we believe CMS should take steps to assess documentation requirements in both programs to better understand the variation in the programs’ requirements and their effect on estimated improper payment rates. Without an assessment of how the programs’ documentation requirements affect estimates of improper payments, CMS may not have the information it needs to ensure that Medicare and Medicaid documentation requirements are effective at demonstrating compliance and appropriately address program risks. HHS did not concur with our second recommendation that CMS ensure that Medicaid medical reviews provide robust information about and result in corrective actions that effectively address the underlying causes of improper payments. HHS noted that increasing the PERM sample size would involve increasing costs and state Medicaid agencies’ burden, and that incorporating other sources of information into the PERM sample design could jeopardize the sample’s statistical validity. HHS also commented that it already uses a variety of sources to identify and take corrective actions to address underlying causes of improper Medicaid payments. We acknowledge that increasing the sample size would increase the costs of the PERM medical review program, though the level of improper payments warrants continued action. 
Further, under the current approach, we found that CMS and state Medicaid agencies are expending time and resources developing and implementing corrective actions for identified improper payments that may not be representative of the underlying causes of improper payments in their states. It is important that corrective actions effectively and efficiently address the most prevalent causes of improper payments, and our report presents options that could improve the usefulness of the PERM's medical reviews, such as augmenting medical reviews with other sources of information during the development of corrective actions. We continue to believe that corrective actions based on more robust information would help CMS and state Medicaid agencies more effectively address Medicaid program risks. HHS concurred with our third and fourth recommendations that CMS minimize the potential for PERM medical reviews to compromise fraud investigations and address disincentives for state Medicaid agencies to notify the PERM contractor of providers under fraud investigation. In its comments, HHS described the actions it has taken and is considering taking to implement these recommendations. We are sending copies of this report to appropriate congressional committees, to the Secretary of Health and Human Services, the Administrator of CMS, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact us at (202) 512-7114, cosgrovej@gao.gov, or yocomc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.
Appendix I: Comments from the Department of Health and Human Services

Appendix II: Fiscal Year 2018 Medicare Improper Payment Data

During the period of our review, fiscal year 2017 data represented the most recent, complete data for both Medicare and Medicaid fee-for-service (FFS) estimated improper payment amounts and rates. As of March 2019, the Centers for Medicare & Medicaid Services had published the fiscal year 2018 Medicare FFS Supplemental Improper Payment Data report, but had not published the 2018 Medicaid FFS Supplemental Improper Payment Data report. For fiscal year 2018, the Centers for Medicare & Medicaid Services estimated Medicare FFS spending of $389 billion and $32 billion in improper payments. Table 3 presents updated fiscal year 2018 Medicare improper payment data for the services examined in our report.

Appendix III: Selected Examples of Medical Record Templates for Medicare and Medicaid Providers

Medicare and state Medicaid agencies have released template medical record documentation, such as certificates of medical necessity and plans of care, that providers may use to document information necessary to ensure compliance with coverage policies. This appendix presents examples of such templates. Figure 7 presents a Medicare template that referring physicians can use to certify beneficiary need for home health services. Figure 8 presents a Medicare template that referring physicians can use to certify beneficiary need for home oxygen supplies. Figure 9 presents a template from the Indiana Medicaid program that hospices may use to document beneficiary plans of care.

Appendix IV: GAO Contacts and Staff Acknowledgments

In addition to the contacts named above, Leslie V. Gordon (Assistant Director), Michael Erhardt (Analyst-in-Charge), Arushi Kumar, and Dawn Nelson made key contributions to this report. Also contributing were Sam Amrhein, Vikki Porter, and Jennifer Rudisill.
Why GAO Did This Study

In fiscal year 2017, Medicare FFS had an estimated $23.2 billion in improper payments due to insufficient documentation, while Medicaid FFS had $4.3 billion, accounting for most of the programs' estimated FFS medical review improper payments. Medicare FFS coverage policies are generally national, and the program directly pays providers, while Medicaid provides states flexibility to design coverage policies, and the federal government and states share in program financing. Among other things, GAO examined: (1) Medicare and Medicaid documentation requirements and factors that contribute to improper payments due to insufficient documentation; and (2) the extent to which Medicaid reviews provide states with actionable information. GAO reviewed Medicare and Medicaid documentation requirements and improper payment data for fiscal years 2005 through 2017, and interviewed officials from CMS, CMS contractors, and six state Medicaid programs. GAO selected the states based on, among other criteria, variation in estimated state improper payment rates, and FFS spending and enrollment.

What GAO Found

The Centers for Medicare & Medicaid Services (CMS) uses estimates of improper payments to help identify the causes and extent of Medicare and Medicaid program risks and develop strategies to protect the integrity of the programs. CMS estimates Medicare and Medicaid fee-for-service (FFS) improper payments, in part, by conducting medical reviews, that is, reviews of provider-submitted medical record documentation to determine whether the services were medically necessary and complied with coverage policies. Payments for services not sufficiently documented are considered improper payments. In recent years, CMS estimated substantially more improper payments in Medicare, relative to Medicaid, primarily due to insufficient documentation (see figure). For certain services, Medicare generally has more extensive documentation requirements than Medicaid.
For example, Medicare requires additional documentation for services that involve physician referrals, while Medicaid requirements vary by state and may rely on other mechanisms, such as requiring approval before services are provided, to ensure compliance with coverage policies. Although Medicare and Medicaid pay for similar services, the same documentation for the same service can be sufficient in one program but not the other. The substantial variation in the programs' improper payments raises questions about how well the programs' documentation requirements help identify causes of program risks. As a result, CMS may not have the information it needs to effectively address program risks and direct program integrity efforts. CMS's Medicaid medical reviews may not provide the robust state-specific information needed to identify causes of improper payments and address program risks. In fiscal year 2017, CMS medical reviews identified 10 or fewer improper payments in more than half of all states. CMS directs states to develop corrective actions specific to each identified improper payment. However, because individual improper payments may not be representative of the causes of improper payments in a state, the resulting corrective actions may not effectively address program risks and may misdirect state program integrity efforts. Augmenting medical reviews with other sources of information, such as state auditor findings, is one option to better ensure that corrective actions address program risks.

What GAO Recommends

GAO is making four recommendations to CMS, including that CMS assess and ensure the effectiveness of Medicare and Medicaid documentation requirements, and that CMS take steps to ensure Medicaid's medical reviews effectively address causes of improper payments and result in appropriate corrective actions. CMS concurred with three recommendations, but did not concur with the recommendation on Medicaid medical reviews.
GAO maintains that this recommendation is valid as discussed in this report.
gao_GAO-19-563T
Background Federal agencies conduct a variety of procurements that are reserved for small business participation through small business set-asides. These set-asides can be for small businesses in general, or they can be specific to small businesses that meet additional eligibility requirements in the Service-Disabled Veteran-Owned Small Business, Historically Underutilized Business Zone (HUBZone), 8(a) Business Development (8(a)), and WOSB programs. The WOSB program enables federal contracting officers to identify and establish a sheltered market, or set-aside, for competition among women-owned small businesses (WOSB) and economically disadvantaged women-owned small businesses (EDWOSB) in certain industries. WOSBs can receive set-asides in industries in which SBA has determined that women-owned small businesses are substantially underrepresented. To determine these industries, SBA is required to conduct a study to determine which North American Industry Classification System (NAICS) codes are eligible under the program and to report on such studies every 5 years. Additionally, businesses must be at least 51 percent owned and controlled by one or more women who are U.S. citizens to participate in the WOSB program. The owner must provide documents demonstrating that the business meets program requirements, including a document in which the owner attests to the business’s status as a WOSB or EDWOSB. According to SBA, as of early October 2018, there were 13,224 WOSBs and 4,488 EDWOSBs registered in SBA’s online certification database. SBA’s Office of Government Contracting administers the WOSB program by, among other things, promulgating regulations and conducting eligibility examinations of businesses that receive contracts under a WOSB or EDWOSB set-aside. According to SBA, as of October 2018, there were two full-time staff within the Office of Government Contracting whose primary responsibility was the WOSB program. 
Initially, the program’s statutory authority allowed WOSBs to be self-certified by the business owner or certified by an approved third-party national certifying entity as eligible for the program. Self-certification is free, but some third-party certification options require businesses to pay a fee. Each certification process requires businesses to provide signed representations attesting to their WOSB or EDWOSB eligibility. Businesses must provide documents supporting their status before submitting an offer to perform the requirements of a WOSB set-aside contract. In August 2016, SBA launched certify.sba.gov, which is an online portal that allows firms participating in the program to upload required documents and track their submission and also enables contracting officers to review firms’ eligibility documentation. According to the Federal Acquisition Regulation, contracting officers are required to verify that all required documentation is present in the online portal when selecting a business for an award. In addition, businesses must register and attest to being a WOSB in the System for Award Management, the primary database of vendors doing business with the federal government. In 2011, SBA approved four organizations to act as third-party certifiers. According to SBA data, these four third-party certifiers completed a total of about 3,400 certifications in fiscal year 2017. In 2014 we reviewed the WOSB program and found a number of deficiencies in SBA’s oversight of the four SBA-approved third-party certifiers and in SBA’s eligibility examination processes, and we made related recommendations for SBA. In addition, in 2015 and 2018 the SBA Office of Inspector General (OIG) reviewed the WOSB program and also found oversight deficiencies, including evidence of WOSB contracts set aside for ineligible firms. In both reports, the SBA OIG also made recommendations for SBA. 
Further, in July 2015, we issued GAO’s fraud risk framework, which provides a comprehensive set of key components and leading practices that serve as a guide for agency managers to use when developing efforts to combat fraud in a strategic, risk-based way. SBA Has Implemented One of the Three Changes Made by the 2015 NDAA As of early May 2019, SBA had implemented one of the three changes that the 2015 NDAA made to the WOSB program—sole-source authority. The two other changes—authorizing SBA to implement its own certification process for WOSBs and requiring SBA to eliminate the WOSB self-certification option—had not been implemented. The 2015 NDAA did not require a specific time frame for SBA to update its regulations. SBA officials have stated that the agency will not eliminate self-certification until the new certification process for the WOSB program is in place, which they expect to be completed by January 1, 2020. In September 2015, SBA published a final rule to implement sole-source authority for the WOSB program (effective October 2015). Among other things, the rule authorized contracting officers to award a contract to a WOSB or EDWOSB without competition, provided that the contracting officer’s market research cannot identify two or more WOSBs or EDWOSBs in eligible industries that can perform the requirements of the contract at a fair and reasonable price. In the final rule, SBA explained that it promulgated the sole-source rule before the WOSB certification requirements for two reasons. First, the sole-source rule could be accomplished by simply incorporating the statutory language into the regulations, whereas the WOSB certification requirements would instead require a prolonged rulemaking process. Second, SBA said that addressing all three regulatory changes at the same time would delay the implementation of sole-source authority. 
As of early May 2019, SBA had not published a proposed rule for public comment to establish a new certification process for the WOSB program. Previously, in October 2017, an SBA official stated that SBA was about 1–2 months away from publishing a proposed rule. However, in June 2018, SBA officials stated that a cost analysis would be necessary before the draft rule could be sent to the Office of Management and Budget for review. In response to the SBA OIG recommendation that SBA implement the new certification process, SBA stated that it would implement a new certification process by January 1, 2020. Further, in June 2018, SBA officials told us that they were evaluating the potential costs of a new certification program as part of their development of the new certification rule. On May 3, 2019, SBA officials explained that they expected to publish the proposed rule within a few days. In December 2015, SBA published an advance notice of proposed rulemaking to solicit public comments to assist the agency with drafting a proposed rule to implement a new WOSB certification program. In the notice, SBA stated that it intends to address the 2015 NDAA changes, including eliminating the self-certification option, through drafting regulations to implement a new certification process. The advance notice requested comments on various topics, such as how well the current certification processes were working, which of the certification options were feasible and should be pursued, whether there should be a grace period for self-certified WOSB firms to complete the new certification process, and what documentation should be required. Three third-party certifiers submitted comments in response to the advance notice of proposed rulemaking, and none supported the option of SBA acting as a WOSB certifier. 
One third-party certifier commented that such an arrangement is a conflict of interest given that SBA is also responsible for oversight of the WOSB program, and two certifiers commented that SBA lacked the required resources. The three third-party certifiers also asserted in their comments that no other federal agency should be allowed to become an authorized WOSB certifier, with one commenting that federal agencies should instead focus on providing contracting opportunities for women-owned businesses. All three certifiers also proposed ways to improve the current system of third-party certification—for example, by strengthening oversight of certifiers or expanding their number. The three certifiers also suggested that SBA move to a process that better leverages existing programs with certification requirements similar to those of the WOSB program, such as the 8(a) program. In the advance notice, SBA asked for comments on alternative certification options, such as SBA acting as a certifier or limiting WOSB program certifications to the 8(a) program and otherwise relying on state or third-party certifiers. SBA Has Not Fully Addressed Deficiencies in Oversight and Program Implementation SBA has not fully addressed deficiencies we identified in our October 2014 report, and these recommendations remain open. First, we reported that SBA did not have formal policies for reviewing the performance of its four approved third-party certifiers, including their compliance with their agreements with SBA. Further, we found that SBA had not developed formal policies and procedures for, among other things, reviewing the monthly reports that certifiers submit to SBA. As a result, we recommended that SBA establish comprehensive procedures to monitor and assess the performance of the third-party certifiers in accordance with their agreements with SBA and program regulations. 
In response to our October 2014 recommendation, in 2016 SBA conducted compliance reviews of the four SBA-approved third-party certifiers. The compliance reviews included an assessment of the third-party certifiers’ internal certification procedures and processes, an examination of a sample of applications from businesses that the certifiers deemed eligible and ineligible for certification, and an interview with management staff. SBA officials said that SBA’s review team did not identify significant deficiencies in any of the four certifiers’ processes and found that all were generally complying with their agreements. However, one compliance review report described “grave concerns” that a third-party certifier had arbitrarily established eligibility requirements that did not align with WOSB program regulations and used them to decline firms’ applications. SBA noted in the report that if the third-party certifier failed to correct this practice, SBA could terminate the agreement. As directed by SBA, the third-party certifier submitted a letter to SBA outlining actions it had taken to address this issue, among others. In January 2017, SBA’s Office of Government Contracting updated its written Standard Operating Procedures (SOP) to include policies and procedures for the WOSB program, in part to address our October 2014 recommendation. The 2017 SOP discusses what a third-party-certifier compliance review entails, how often the reviews are to be conducted, and how findings are to be reported. The 2017 SOP notes that SBA may initiate a compliance review “at any time and as frequently as the agency determines is necessary.” In March 2019, SBA provided an updated SOP, which includes more detailed information on third-party compliance reviews, such as how SBA program analysts should prepare for the review. However, the updated SOP does not provide specific time frames for how frequently the compliance reviews are to be conducted. 
In addition, in April 2018, SBA finalized a WOSB Program Desk Guide that discusses how staff should prepare for a compliance review of a third-party certifier, review certification documents, and prepare a final report. In March 2019, SBA provided GAO with an updated WOSB Program Desk Guide that contains information comparable to that in the 2018 version. Neither Desk Guide describes specific activities designed to oversee third-party certifiers on an ongoing basis. Per written agreements with SBA, third-party certifiers are required to submit monthly reports that include the number of WOSB and EDWOSB applications received, approved, and denied; identifying information for each certified business, such as the business name; concerns about fraud, waste, and abuse; and a description of any changes to the procedures the organizations used to certify businesses as WOSBs or EDWOSBs. In our October 2014 report, we noted that SBA had not followed up on issues raised in the monthly reports and had not developed written procedures for reviewing them. At that time, SBA officials said that they were unaware of the issues identified in the certifiers’ reports and that the agency was developing procedures for reviewing the monthly reports but could not estimate a completion date. In interviews for our March 2019 report, SBA officials stated that SBA still does not use the third-party certifiers’ monthly reports to regularly monitor the program. Specifically, SBA does not review the reports to identify any trends in certification deficiencies that could inform program oversight. Officials said the reports generally do not contain information that SBA considers helpful for overseeing the WOSB program, but staff sometimes use the reports to obtain firms’ contact information. 
SBA’s updated 2019 SOP includes information on reviews of third-party certifier monthly reports, but it does not contain information on how staff would analyze the reports or how these reports would inform SBA’s oversight of third-party certifiers and related compliance activities, such as eligibility examinations. On May 3, 2019, SBA officials stated that, earlier in the week, they had initiated monthly meetings with the third-party certifiers. SBA officials explained that they intended to continue holding these monthly meetings to discuss best practices and potential issues related to the approval and disapproval of firms and to improve collaboration. Although SBA has taken steps to enhance its written policies and procedures for oversight of third-party certifiers, it does not have plans to conduct further compliance reviews of the certifiers and does not intend to review certifiers’ monthly reports on a regular basis in a way that would inform its oversight activities. SBA officials said that third-party certifier oversight procedures would be updated, if necessary, after certification options have been clarified in the final WOSB certification rule. However, ongoing oversight activities, such as regular compliance reviews, could help SBA better understand the steps certifiers have taken in response to previous compliance review findings and whether those steps have been effective. In addition, leading fraud risk management practices include identifying specific tools, methods, and sources for gathering information about fraud risks, including data on trends from monitoring and detection activities, as well as involving relevant stakeholders in the risk assessment process. Without procedures to regularly monitor and oversee third-party certifiers, SBA cannot provide reasonable assurance that certifiers are complying with program requirements and cannot improve its efforts to identify ineligible firms or potential fraud. 
Further, it is unclear when SBA’s final rule will be implemented. As a result, we maintain that our previous recommendation should be addressed—that is, that the Administrator of SBA should establish and implement comprehensive procedures to monitor and assess the performance of certifiers in accordance with the requirements of the third-party certifier agreement and program regulations. SBA also has not fully addressed deficiencies we identified in our October 2014 report related to eligibility examinations. We found that SBA lacked formalized guidance for its eligibility examination processes and that the examinations identified high rates of potentially ineligible businesses. As a result, we recommended that SBA enhance its examination of businesses that register for the WOSB program to ensure that only eligible businesses obtain WOSB set-asides. Specifically, we suggested that SBA should take actions such as (1) completing the development of procedures to conduct annual eligibility examinations and implementing such procedures; (2) analyzing examination results and individual businesses found to be ineligible to better understand the cause of the high rate of ineligibility in annual reviews and determine what actions are needed to address the causes, and (3) implementing ongoing reviews of a sample of all businesses that have represented their eligibility to participate in the program. SBA has taken some steps to implement our recommendation, such as including written policies and procedures for WOSB program eligibility examinations in an SOP and a Desk Guide. However, SBA does not collect reliable information on the results of its annual eligibility examinations. According to SBA officials, SBA has conducted eligibility examinations of a sample of businesses that received WOSB program set-aside contracts each year since fiscal year 2012. 
However, SBA officials told us that the results of annual eligibility examinations—such as the number of businesses found eligible or ineligible—are generally not documented. As a result, we obtained conflicting data from SBA on the number of examinations completed and the percentage of businesses found to be ineligible in fiscal years 2012 through 2018. For example, based on previous information provided by SBA, we reported in October 2014 that in fiscal year 2012, 113 eligibility examinations were conducted and 42 percent of businesses were found to be ineligible for the WOSB program. However, during our more recent review, we received information from SBA indicating that 78 eligibility examinations were conducted and 37 percent of businesses were found ineligible in fiscal year 2012. In addition, SBA continues to have no mechanism to look across examinations for common eligibility issues to inform the WOSB program. As we noted in 2014, by not analyzing examination results broadly, the agency is missing opportunities to obtain meaningful insights into the program, such as the reasons many businesses are deemed ineligible. Further, SBA still conducts eligibility examinations only of firms that have already received a WOSB award. In our October 2014 report, we concluded that this sampling practice restricts SBA’s ability to identify potentially ineligible businesses prior to a contract award. SBA officials said that while some aspects of the sample characteristics have changed since 2012, the samples still generally consist only of firms that have been awarded a WOSB set-aside. Restricting the samples in this way limits SBA’s ability to better understand the eligibility of businesses before they apply for and are awarded contracts, as well as its ability to detect and prevent potential fraud. We recognize that SBA has made some effort to address our previous recommendation by documenting procedures for conducting annual eligibility examinations of WOSB firms. 
However, leading fraud risk management practices state that federal program managers should design control activities that focus on fraud prevention over detection and response, to the extent possible. Without maintaining reliable information on the results of eligibility examinations, developing procedures for analyzing results, and expanding the sample of businesses to be examined to include those that did not receive contracts, SBA limits the value of its eligibility examinations and its ability to reduce ineligibility among businesses registered to participate in the WOSB program. These deficiencies also limit SBA’s ability to identify potential fraud risks and develop any additional control activities needed to address these risks. As a result, the program may continue to be exposed to the risk of ineligible businesses receiving set-aside contracts. In addition, in light of these continued deficiencies, the implementation of sole-source authority without addressing the other changes made by the 2015 NDAA could increase program risk. For these reasons, we maintain that our previous recommendation that SBA enhance its WOSB eligibility examination procedures should be addressed. SBA has also not addressed previously identified issues with WOSB set-asides awarded under ineligible industry codes. In 2015 and 2018, the SBA OIG reported instances in which WOSB set-asides were awarded using NAICS codes that were not eligible under the WOSB program, and our analysis indicates that this problem persists. 
Specifically, our analysis of data from the Federal Procurement Data System–Next Generation (FPDS–NG) on all obligations to WOSB program set-asides from the third quarter of fiscal year 2011 through the third quarter of fiscal year 2018 found the following:
- 3.5 percent (or about $76 million) of WOSB program obligations were awarded under NAICS codes that were never eligible for the WOSB program;
- 10.5 percent (or about $232 million) of WOSB program obligations made under an EDWOSB NAICS code went to women-owned businesses that were not eligible to receive awards in EDWOSB-eligible industries; and
- 17 of the 47 federal agencies that obligated dollars to WOSB program set-asides during the period used inaccurate NAICS codes in at least 5 percent of their WOSB set-asides (representing about $25 million).
According to SBA officials we spoke with, WOSB program set-asides may be awarded under ineligible NAICS codes because of human error when contracting officers are inputting data in FPDS–NG or because a small business contract was misclassified as a WOSB program set-aside. Rather than review FPDS–NG data that are inputted after the contract is awarded, SBA officials said that they have discussed options for working with the General Services Administration to add controls defining eligible NAICS codes for WOSB program set-aside opportunities on FedBizOpps.gov—the website that contracting officers use to post announcements about available federal contracting opportunities. However, SBA officials said that the feasibility of this option was still being discussed and that the issue was not a high priority. Additionally, as of November 2018, the WOSB program did not have targeted outreach or training that focused on specific agencies’ use of NAICS codes, and SBA officials did not identify any targeted outreach or training provided to specific agencies to improve understanding of WOSB NAICS code requirements (or other issues related to the WOSB program). 
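The data review described above amounts to a simple pass over contract-obligation records, comparing each record's NAICS code against the set of program-eligible codes. The sketch below is illustrative only: the `ineligible_share` helper, the codes, and the dollar amounts are hypothetical stand-ins, not actual FPDS–NG data or GAO's analysis method.

```python
# Illustrative sketch of the NAICS eligibility check described above.
# Codes and dollar amounts below are hypothetical, not FPDS-NG data.

def ineligible_share(obligations, eligible_naics):
    """Given (naics_code, dollars) records, return the dollars and share
    of obligations awarded under codes not in the eligible set."""
    total = sum(dollars for _, dollars in obligations)
    bad = sum(dollars for code, dollars in obligations
              if code not in eligible_naics)
    return bad, (bad / total if total else 0.0)

# Hypothetical WOSB-eligible NAICS codes and obligation records.
eligible = {"236115", "541330"}
records = [
    ("236115", 700_000),  # eligible code
    ("541330", 200_000),  # eligible code
    ("999999", 100_000),  # never-eligible code
]

bad_dollars, share = ineligible_share(records, eligible)
print(f"${bad_dollars:,} ({share:.1%}) obligated under ineligible codes")
```

An agency-level variant of the same pass (grouping records by awarding agency before computing shares) would support the kind of targeted outreach or training the report discusses, by flagging agencies whose share of ineligible-code awards exceeds a threshold.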
On May 6, 2019, an SBA official provided information that SBA has initiated a review to determine federal agencies’ use of ineligible NAICS codes and that SBA plans to share the findings with agencies and also provide training to procurement center representatives. Congress authorized SBA to develop a contract set-aside program specifically for WOSBs and EDWOSBs to address the underrepresentation of such businesses in specific industries. In addition, federal standards for internal control state that management should design control activities to achieve objectives and respond to risks, and that management should establish and operate monitoring activities to monitor and evaluate the results. Because SBA does not review whether contracts are being awarded under the appropriate NAICS codes, it cannot provide reasonable assurance that WOSB program requirements are being met or identify agencies that may require targeted outreach or additional training on eligible NAICS codes. As a result, WOSB contracts may continue to be awarded to groups other than those intended, which can undermine the goals of and confidence in the program. Federal Contracts to WOSB Set-Asides Remain Relatively Small While federal contract obligations to all women-owned small businesses and WOSB program set-asides have increased since fiscal year 2012, WOSB program set-asides remain a small percentage. Specifically, federal dollars obligated for contracts to all women-owned small businesses increased from $18.2 billion in fiscal year 2012 to $21.4 billion in fiscal year 2017. Contracts awarded to all women-owned small businesses within WOSB-program-eligible industries also increased during this period—from about $15 billion to $18.8 billion, as shown in figure 1. However, obligations under the WOSB program represented only a small share of this increase. 
In fiscal year 2012, WOSB program contract obligations were 0.5 percent of contract obligations to all women-owned small businesses for WOSB-program-eligible goods or services (about $73.5 million), and in fiscal year 2017 this percentage had grown to 3.8 percent (about $713.3 million) (see fig. 1). In summary, the WOSB program aims to enhance federal contracting opportunities for women-owned small businesses. However, as of early May 2019, SBA had not fully implemented comprehensive procedures to monitor the performance of the WOSB program’s third-party certifiers and had not taken steps to provide reasonable assurance that only eligible businesses obtain WOSB set-aside contracts, as recommended in our 2014 report. Without ongoing monitoring and reviews of third-party certifier reports, SBA cannot ensure that certifiers are fulfilling their requirements, and it is missing opportunities to gain information that could help improve the program’s processes. Further, limitations in SBA’s procedures for conducting and analyzing eligibility examinations inhibit its ability to better understand the eligibility of businesses before they apply for and potentially receive contracts, which exposes the program to unnecessary risk of fraud. Also, since SBA does not expect to finish implementing the changes in the 2015 NDAA until January 1, 2020, these continued oversight deficiencies increase program risk. As a result, we maintain that our previous recommendations should be addressed. In addition, SBA has not addressed deficiencies related to WOSB program set-asides being awarded under ineligible industry codes. Although SBA has updated its training and outreach materials for the WOSB program to address NAICS code requirements, it has not developed a process for periodically reviewing FPDS–NG data, and has yet to provide targeted outreach or training to agencies that may be using ineligible codes. 
As a result, SBA is not aware of the extent to which individual agencies are following program requirements and which agencies may require targeted outreach or additional training. Reviewing FPDS–NG data would allow SBA to identify those agencies (and contracting offices within them) that could benefit from such training. Without taking these additional steps, SBA cannot provide reasonable assurance that WOSB program requirements are being met. As such, we made one recommendation in our March 2019 report to SBA. We recommended that SBA develop a process for periodically reviewing FPDS–NG data to determine the extent to which agencies are awarding WOSB program set-asides under ineligible NAICS codes, and take steps to address any issues identified, such as providing targeted outreach or training to agencies making awards under ineligible codes. As of May 2019, this recommendation remains open. Chairman Golden, Ranking Member Stauber, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. GAO Contact and Acknowledgments If you or your staff have any questions about this testimony, please contact William Shear, Director, Financial Markets and Community Investment at (202) 512-8678 or shearw@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Andrew Pauline (Assistant Director), Tarek Mahmassani (Analyst in Charge), and Jennifer Schwartz. Other staff who made key contributions to the report cited in the testimony were Allison Abrams, Pamela Davidson, Jonathan Harmatz, Tiffani Humble, Julia Kennon, Rebecca Shea, Jena Sinkfield, Tyler Spunaugle, and Tatiana Winger. This is a work of the U.S. government and is not subject to copyright protection in the United States. 
The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study In 2000, Congress authorized the WOSB program, allowing contracting officers to set aside procurements to women-owned small businesses in industries in which they are substantially underrepresented. To be eligible to participate in the WOSB program, firms have the option to self-certify or be certified by a third-party certifier. However, the 2015 NDAA changed the WOSB program by (1) authorizing SBA to implement sole-source authority, (2) eliminating the option for firms to self-certify as being eligible for the program, and (3) allowing SBA to implement a new certification process. This testimony is based on a report GAO issued in March 2019 (GAO-19-168). For that report, GAO examined (1) the extent to which SBA has addressed the 2015 NDAA changes, (2) SBA's efforts to address previously identified deficiencies, and (3) use of the WOSB program. GAO reviewed relevant laws, regulations, and program documents; analyzed federal contracting data from April 2011 through June 2018; and interviewed SBA officials, officials from contracting agencies selected to obtain a range of experience with the WOSB program, and the three (out of four) private third-party certifiers that agreed to meet with GAO. What GAO Found The Small Business Administration (SBA) has implemented one of the three changes to the Women-Owned Small Business (WOSB) program authorized in the National Defense Authorization Act of 2015 (2015 NDAA). In September 2015 SBA published a final rule to implement sole-source authority (to award contracts without competition), effective October 2015. As of early May 2019, SBA had not eliminated the option for program participants to self-certify that they are eligible to participate, as required by the 2015 NDAA. SBA officials stated that the agency intended to address the third change made by the 2015 NDAA (meaning implement a new certification process for the WOSB program). 
SBA has not addressed WOSB program oversight deficiencies and recommendations in GAO's 2014 report (GAO-15-54). For example, GAO recommended that SBA establish procedures to assess the performance of four third-party certifiers—private entities approved by SBA to certify the eligibility of WOSB firms. While SBA generally agreed with GAO's recommendations and conducted a compliance review of the certifiers in 2016, it has no plans to regularly monitor their performance. By not improving its oversight, SBA is limiting its ability to ensure third-party certifiers are following program requirements. Further, the implementation of sole-source authority in light of these continued oversight deficiencies can increase program risk. GAO maintains that its recommendations aimed at improving oversight should be addressed. In addition, GAO's March 2019 (GAO-19-168) report found that about 3.5 percent of contracts using a WOSB set-aside were awarded for ineligible goods or services from April 2011 through June 2018. At that time, SBA was not reviewing contracting data that could identify which agencies may need targeted training. GAO recommended that SBA review such data to help address identified issues. In early May 2019, SBA said it had initiated such efforts. While federal contract obligations to all women-owned small businesses and WOSB program set-asides have increased since fiscal year 2012, WOSB program set-asides remain a small percentage (see figure). Note: Obligations to women-owned small businesses represent contract obligations to women-owned small businesses under WOSB-program-eligible North American Industry Classification System codes. FPDS-NG obligation amounts have been adjusted for inflation. 
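As a rough arithmetic check, the set-aside shares shown in the figure can be reproduced from the rounded dollar figures cited in this statement (about $73.5 million of roughly $15 billion in fiscal year 2012, and about $713.3 million of roughly $18.8 billion in fiscal year 2017); the sketch below simply divides the rounded amounts:

```python
# Reproduce the WOSB set-aside shares from the rounded dollar figures
# cited in this statement (amounts in millions of dollars).
wosb_2012, eligible_total_2012 = 73.5, 15_000.0    # FY2012
wosb_2017, eligible_total_2017 = 713.3, 18_800.0   # FY2017

share_2012 = wosb_2012 / eligible_total_2012
share_2017 = wosb_2017 / eligible_total_2017

print(f"FY2012 share: {share_2012:.1%}")  # rounds to the 0.5% cited
print(f"FY2017 share: {share_2017:.1%}")  # rounds to the 3.8% cited
```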
What GAO Recommends GAO recommended in March 2019 that SBA develop a process for periodically reviewing the extent to which WOSB program set-asides are awarded for ineligible goods or services and use the results to address identified issues, such as through targeted outreach or training on the WOSB program. SBA agreed with the recommendation.
gao_GAO-19-547
gao_GAO-19-547_0
Background Roles and Responsibilities for State and DHS Components Several State and DHS components have roles and responsibilities in the E-2 adjudication process, as shown in table 1. Depending on which agency (State or USCIS) is conducting the E-2 adjudication, as well as the foreign national's role in relation to the E-2 business, foreign nationals are described using various terms, as shown in table 2. E-2 Eligibility Requirements Both the business and the foreign national seeking E-2 status must meet specific eligibility requirements, as shown in table 3. The E-2 eligibility requirements for nationals of treaty countries and their qualified family members (i.e., dependents) are defined in the INA, as amended, as well as in federal regulation. Foreign nationals seeking E-2 status must provide evidence and supporting documentation to State's consular officers or USCIS's immigration officers showing that they and their related business meet these requirements. E-2 Nonimmigrant Adjudication Processes There are two pathways for an individual seeking E-2 status: (1) applying for an E-2 visa through State at a post abroad, and then being inspected and admitted at a U.S. port of entry by CBP, or (2) filing with USCIS to extend or change to E-2 status if already in the United States in E-2 or other nonimmigrant status, as shown in figure 1. Prior to the expiration of the 2-year period typical for E-2 nonimmigrants, a foreign national seeking to remain in E-2 status must either petition USCIS for an E-2 extension; or depart the country, reapply for an E-2 visa with State at a U.S. embassy or consulate, and seek entry at a U.S. port of entry. However, if the E-2 visa is still valid after the foreign national has departed, he or she may present that visa to apply for admission again at a U.S. port of entry. If applying through State, consular officers are responsible for adjudicating E-2 visa applications at one of State's 220 posts.
Although all posts can adjudicate E-2 visas, approximately 140 posts adjudicated at least one E-2 visa in fiscal year 2018. State and USCIS Adjudicated About 54,000 E-2 Visa Applications or Petitions Per Year From Fiscal Years 2014 through 2018; Roles, Business Sectors, and Countries Varied Taken together, State and USCIS adjudicated more than 50,000 E-2 visa applications or petitions annually from fiscal years 2014 through 2018. State accounted for over 80 percent of these adjudications. About 90 percent of State's E-2 visa applications resulted in an issued visa, and about 83 percent of USCIS's E-2 petitions were approved. See appendix III for additional State and USCIS data on the characteristics of foreign nationals seeking E-2 status, including annual statistics, the relatively low number of E-2 nonimmigrants who remain in the United States beyond the conclusion of their authorized period of stay (i.e., overstay), and other post-adjudication outcomes. State Adjudicated About 45,000 E-2 Visas Annually, About 90 Percent of Which Were Issued The volume of State's E-2 visa adjudications increased from fiscal years 2014 through 2017, and decreased slightly in fiscal year 2018, as shown in figure 2. During this time period, State consular officers adjudicated an average of about 45,000 E-2 visas per year. Also during this time period, 44 percent of adjudications were for dependents, and a combined 53 percent were for principals, including 14 percent for investors, 20 percent for managers, and 19 percent for essential employees. From fiscal years 2014 through 2017, the average E-2 visa refusal rate—that is, the number of refused visas divided by the total number of visas adjudicated during that time period—was about 8 percent, which is generally lower than for other types of nonimmigrant visas (see sidebar). We do not present the fiscal year 2018 refusal rate in figure 3 because that rate is subject to change until the end of fiscal year 2019.
Specifically, an application adjudicated in fiscal year 2018 may require the applicant to submit additional information to demonstrate eligibility for an E-2 visa. In such cases, the application is refused under INA § 221(g). The applicant has one year after the date of refusal to overcome the refusal by, for example, providing missing or supplemental information. After one year, the applicant must reapply. As of November 2018, 8,184 of the 11,255 fiscal year 2018 refusals were under INA § 221(g). Depending on the extent to which applicants refused in fiscal year 2018 under INA § 221(g) are able to overcome their refusals, State officials stated that they expected the fiscal year 2018 refusal rate to be similar to that of prior fiscal years. In addition to analyzing State data on adjudications and refusals, we also analyzed data to identify trends in refusal rates by applicant type, refusal reasons, nationality of applicants, business sectors, and level of investment, as described below. Refusal Rates by Applicant Type. Our analysis showed that for fiscal years 2014 through 2018, average refusal rates were highest for investors (24 percent), followed by dependents (12 percent), managers (9 percent), and essential employees (6 percent). Figure 4 shows the refusal rates by fiscal year for each applicant type, and appendix III includes additional information on refusal rates for fiscal year 2018. According to State officials, refusal rates may be higher for investors because such applicants are typically the first in their company applying for an E-2 visa; if denied, then future E-2 applicants (e.g., a manager or essential employee) would need to wait until that investor is approved or find another individual or business investor to form the basis for their E-2 employment status. Refusal Reasons. Our analysis showed that approximately 10 percent of E-2 visa adjudications from fiscal years 2014 through 2017 were refused.
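The rate figures above follow a simple definition (refused visas divided by total visas adjudicated), which can be sketched as a short calculation. The helper function below is illustrative only and not part of any GAO or State tooling; apart from the 221(g) counts cited above, the numbers are examples, not report data:

```python
def refusal_rate(refused: int, adjudicated: int) -> float:
    """Refusal rate as the report defines it: refused visas
    divided by total visas adjudicated in the period."""
    if adjudicated <= 0:
        raise ValueError("adjudicated must be positive")
    return refused / adjudicated

# Illustrative only: an 8 percent average rate corresponds to,
# for example, 3,600 refusals out of 45,000 adjudications.
example_rate = refusal_rate(3_600, 45_000)  # 0.08

# Share of fiscal year 2018 refusals under INA 221(g),
# using the counts cited above (8,184 of 11,255):
ina_221g_share = 8_184 / 11_255  # roughly 0.73
```

Because many 221(g) refusals are later overcome, a point-in-time rate computed this way can overstate the eventual figure for a fiscal year that is still open.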
The majority of E-2 visa refusals for fiscal years 2014 through 2017 (75 percent) were because the applicant did not meet eligibility requirements. The next largest reason for refusal (22 percent) was INA § 221(g) for inadequate documentation. Few E-2 visa applicants are refused for other reasons, such as prior immigration violations, fraud, or terrorist activities. For example, in total, less than 4 percent of all E-2 visa adjudications during this time period were refused for other reasons, such as security or criminal-related ineligibilities, fraud or misrepresentation, and immigration violations, among others. Nationality. Our analysis showed that about 80 percent of E-2 visa adjudications from fiscal years 2014 through 2018 were for nationals from nine countries: five European countries (Germany, France, United Kingdom, Italy, and Spain), two Asian countries (Japan and South Korea), and two North American countries (Canada and Mexico). Japan was the largest country of nationality, with 29 percent, followed by Germany (10 percent), Canada (7 percent), and France (7 percent). Figure 5 shows the top ten countries by percentage of E-2 visa adjudications from fiscal years 2014 through 2018. Business Sectors. To obtain information on additional characteristics of E-2 visa principal applicants (i.e., investor, manager, and essential employee), such as their business sector and investment amounts, we reviewed a generalizable sample of 120 fiscal year 2018 E-2 visa applications. Based on our analysis, we estimate that about three-fourths of principal E-2 visa applicants were associated with 4 business sectors: manufacturing (44 percent), food services (13 percent), retail (11 percent), and professional services (10 percent). Figure 6 includes examples of the businesses we found within each of these sectors. Investment.
Based on information reported by fiscal year 2018 principal applicants in our generalizable sample of issued visas, we estimate 64 percent of applications were for principal applicants associated with investments reportedly over $10 million, as shown in figure 7. Of these, 30 of 40 applications were for those in the manufacturing sector, particularly for the automotive sector, such as large automobile manufacturers. USCIS Adjudicated an Average of About 9,400 E-2 Petitions Annually, 83 Percent of Which Were Approved From fiscal years 2014 through 2018, USCIS adjudicated an average of about 9,400 E-2 petitions per year. During this time period, USCIS adjudicated petitions to extend E-2 status for an average of about 5,900 beneficiaries per year, about 60 percent of which were for E-2 dependents (i.e., an E-2 principal's spouse or children). Also during the same time period, USCIS adjudicated petitions for an average of about 3,500 beneficiaries per year who were seeking to change to E-2 status from another nonimmigrant category. Of these, about 47 percent were E-2 principal beneficiaries (i.e., investors, managers, and essential employees). Figure 8 shows the number of petitions to extend or change to E-2 status from fiscal years 2014 through 2018. The average denial rate for E-2 petitions for fiscal years 2014 through 2018 was about 17 percent. Denial rates were higher for petitions to change status from another nonimmigrant category to E-2 (27 percent) than for petitions to extend E-2 status (11 percent), as shown in figure 9. Further, the denial rate for both extension and change of status petitions increased from fiscal years 2014 through 2017, but fell by several points in fiscal year 2018. In addition to analyzing USCIS data on adjudications and denials, we also analyzed data to identify trends in country of birth, prior status, date of last U.S. entry, reasons for denial, business sectors, and level of investment, as described below. Country of Birth.
Our analysis showed that the top countries of birth for individuals seeking to extend their E-2 status from fiscal years 2014 through 2018 were South Korea, Mexico, and Japan, and the top countries of birth for those seeking to change to E-2 status from another nonimmigrant category were South Korea, Pakistan, and Turkey, as shown in table 4. Although there are similarities with the top countries of nationality for State E-2 visas (see previous figure 5), there are some differences as well. For example, both Pakistan and Thailand are among the top countries of birth for petitioning with USCIS to extend or change to E-2 status, but are not among the top countries of nationality for State E-2 visas. Prior status. Our analysis showed that individuals seeking to change to E-2 status from another nonimmigrant category from fiscal years 2014 through 2018 were most often changing status from a tourist, business, or student visa, as shown in figure 10. For example, more than half (53 percent) of all petitions to change to E-2 status were for beneficiaries who were tourists (B-2) or business visitors (B-1). In addition, about 4 percent of beneficiaries were seeking to change status within the E-2 classification. For example, a child or spouse of an E-2 investor may later work at the company as a manager and therefore would need to petition to change from dependent to principal E-2 status as a manager. Date of last entry into the United States. On the basis of our review of a generalizable sample of petitions of E-2 principals (i.e., investors, managers, and essential employees), we estimate that one-third of principal beneficiaries had been in the United States since 2014 or earlier at the time they sought to change to or extend E-2 status in 2018, some as long as 18 years, as shown in figure 11. Such beneficiaries may have changed status from other kinds of nonimmigrant status, or may have requested to extend their E-2 status multiple times.
There is no limit on the number of times a foreign national may request to extend their E-2 status. Reason for denial. On the basis of our review of a generalizable sample of fiscal year 2018 denied petitions for E-2 principals, we estimate that the top reasons petitions were denied included (1) the enterprise was not real and operating, and (2) the investment was not substantial, as shown in table 5. Of the denied petitions in fiscal year 2018, about one-third were either withdrawn by the petitioner or abandoned, meaning that the petitioner did not respond to USCIS requests for additional evidence. Business Sectors. On the basis of our review, we estimate that the majority of E-2 principal beneficiaries were associated with 4 business sectors, as shown in figure 12: food services (38 percent), retail (18 percent), manufacturing (9 percent), and professional services (13 percent). Comparing our two generalizable samples, a smaller percentage of USCIS's E-2 principal beneficiaries were associated with manufacturing (9 percent versus 44 percent) and a larger percentage with food services (38 percent versus 13 percent) than State's E-2 principal visa applicants. Investment. We estimate that about two-thirds of the approved petitions were for principal beneficiaries associated with investments of $200,000 or less, as shown in figure 13. We found that about 30 percent of USCIS's E-2 principal beneficiaries were associated with investment amounts of $100,000 or less and 7 percent were associated with investments over $10 million. State and USCIS Have E-2 Guidance and Procedures, But Officials Identified Challenges with Respect to E-2 Adjudication State and USCIS have agency-specific guidance, procedures, and training intended to ensure E-2 applicants and petitioners, respectively, meet E-2 eligibility requirements. However, officials from both agencies identified challenges in the E-2 adjudication process.
Some of State’s posts have developed E-2 company registration programs to help streamline the E-2 adjudication process, but there are no minimum standards for these programs, which may result in different processing of companies and applicants across posts. Further, State and USCIS require that consular and immigration officers retain certain documentation for all E-2 applications and petitions; however, during our case file review of E-2 applications and petitions adjudicated in fiscal year 2018, we found that State did not consistently retain all required documents. State and USCIS Have Agency-Specific Guidance and Resources, Procedures, and Training State and USCIS have guidance and resources to help officers adjudicate E-2 applications and petitions. Both agencies have similar high-level procedures for adjudicating E-2 applications and petitions, but there are some key differences in how each agency implements these procedures based on their specific roles and responsibilities. Further, both agencies provide their staff with some training on E-2 eligibility requirements. Guidance and resources. State and USCIS have guidance and resources available to staff who adjudicate E-2 visas and petitions to help ensure that applicants and petitioners meet E-2 eligibility requirements. Although the guidance documents have some minor differences, they are based on the same eligibility requirements. For example, the main guidance documents for State and USCIS—State’s Foreign Affairs Manual (FAM) and USCIS’s national E-visa standard operating procedures—both include the same eligibility criteria and provide additional explanation on each of the eligibility requirements. State also provides supplementary resources for consular officers on its intranet, such as E-2 adjudication best practices, an adjudication guide, and case studies. State and USCIS both provide headquarters-based legal advisors and attorneys with whom officers can consult for case-specific guidance. 
For example, a State consular officer at one post we visited told us that he requested such assistance for an application from an investor whose company had a particularly complex ownership structure that made it difficult to determine if at least 50 percent of the company was owned by nationals of a treaty country. Adjudication procedures. State and USCIS high-level procedures for adjudicating E-2 applications and petitions are generally similar, but there are some key differences based on their specific roles and responsibilities. As shown in figure 14, both agencies require foreign nationals to submit an E-2 application or petition, and pay any relevant fees. Additionally, both agencies vet individuals by conducting security checks and reviewing submitted information to ensure that all E-2 eligibility requirements are met. There are four key differences in State and USCIS procedures for adjudicating E-2 visa applications and petitions: Interviews. State requires in-person interviews of most E-2 applicants. According to USCIS officials, USCIS does not conduct interviews of beneficiaries and petitioners because they do not have the resources or facilities to do so. In any case, USCIS's process for adjudicating nonimmigrant visa petitions for foreign nationals who have already been lawfully admitted into the United States in E-2 or other nonimmigrant status does not include an interview requirement. Locally Employed Staff (LES) and E-2 Visa Adjudication Consular officers and managers stated that LES play an important role in E-2 visa processing and adjudication. LES are employees hired under the local compensation plan at a U.S. post overseas. LES include foreign service nationals, U.S. citizens residing abroad, third country nationals, and eligible family members of State employees. LES can provide institutional knowledge and expertise on E-2 visa issues, as consular officers rotate posts every 2 years but LES do not rotate.
Consular managers at 4 of the 14 posts we interviewed or visited stated that their post specifically hired LES to work on E-2 visas because of their specialized knowledge and backgrounds in business or law. For example, a consular officer may consult with LES on an application to better understand the legal relationship between two companies, as some LES have a background or developed expertise in financial law. Locally Employed Staff (LES) initial processing and prescreening. In addition to consular officers, State employs local residents in its host country to help with consular services (see sidebar). For example, at some posts State’s LES prescreen visa applications before consular officers adjudicate the application. Procedures for LES varied at the posts we interviewed and visited. For example, LES at some posts provide administrative help and processing—such as scanning application documents, checking applications for completeness, and scheduling interviews. LES at other posts provide additional analytical support—such as by summarizing applications, completing eligibility checklists, and maintaining databases on previously issued E-2 visas. Regardless of the kind of help LES may provide at post, only consular officers adjudicate E-2 visa applications and make decisions on whether or not the visa is issued. The number of LES supporting E-2 visa applications at the 14 posts we visited or interviewed ranged from one part-time position to five full-time LES. Consular managers and officers at all four of the posts we visited described the role of LES in processing E-2 visas as critical (see sidebar). Although USCIS’ California Service Center has staff who assist with processing petitions, such as by organizing folders with the petition materials, immigration officers generally perform the analytical tasks themselves. Staffing model. 
Depending on E-2 visa application volume, staffing considerations, and workload arrangements, the number of consular officers adjudicating E-2 visas at the 14 posts abroad we interviewed ranged from one to six per post. Further, on the basis of our observations and interviews with consular officials at 14 posts, we found that State's posts have generally developed three different staffing models for adjudicating E-2 visa applications, as shown in table 6. Consular managers stated that the kind of model used at a post may depend on E-2 visa volume, as well as other factors. For example, a consular manager at a post we visited explained that the specialist model worked well at his post because it had a relatively low volume of E-2 adjudications each year, which meant that a single officer could focus on such visas. In contrast, a consular manager at a post we visited that was staffed with a hybrid of generalists and specialists had higher E-2 visa volume and stated that their model allowed them to balance efficiency and specialization. For USCIS, as of July 2018, a specialized office of five immigration officers at one location, USCIS's California Service Center, reviews and adjudicates all E-petitions (including E-1 and E-2). Training. State and USCIS provide training to their respective E-2 processing and adjudication staff on E-2 eligibility requirements. State's consular officers assigned to adjudicate E-2 visas receive the majority of their adjudication training at post, with a brief introduction to E-2 visas during a mandatory 6-week Foreign Service Institute training course taken prior to serving as a consular officer overseas. According to Foreign Service Institute officials, the course provides consular officers with an overview of the various visa classes they may adjudicate, but focuses on visas that all consular officers will address at post.
Because E-2 visas are not adjudicated at every post, and consular officers typically cannot specialize in only one particular classification like USCIS counterparts who have a dedicated E-2 unit, the course does not concentrate on that visa classification. Instead, State relies on the individual posts to provide training to prepare consular officers to adjudicate E-2 visas on an “as needed” basis. On the basis of our interviews and observations, we found that E-2 training programs for consular officers at post generally consist of three components. First, consular managers and senior consular officers at post provide the consular officer who will be adjudicating E-2 visa applications for the first time with an overview of the E-2 eligibility requirements along with any supplementary E-2 training resources, such as illustrative examples of challenging E-2 visa cases the post has previously adjudicated. Second, new consular officers are to observe senior consular officers adjudicate E-2 visas for 1 to 3 weeks, which helps the new officer to learn how the requirements are applied. Finally, new officers adjudicate E-2 visas under the supervision of a senior consular officer with experience adjudicating E-2 visa applications, with 100 percent of their adjudications reviewed by consular managers until management determines that the new officer is proficient. As needed, supervisors will meet with new officers to discuss specific adjudications, including whether the officer properly documented their decision. State’s E-2 training for LES is entirely at post. According to consular managers and LES, LES training generally consists of a review of eligibility requirements and supervision. First, new LES assigned to E-2 visa processing and prescreening receive an overview of the E-2 eligibility requirements from a senior LES. 
According to LES we interviewed, the overview of the eligibility requirements helps them to identify the types of documents E-2 applicants typically submit to establish E-2 eligibility. Second, new LES are observed by senior LES until management determines that the LES is proficient at processing and prescreening. As noted above, USCIS has staff dedicated to E-2 petitions, and it provides new E-2 immigration officers with training that includes the same basic components as State's, such as a review of eligibility requirements and job shadowing. First, immigration officers who will work on E-2 adjudications receive 3 weeks of classroom training during which they review the E-2 eligibility requirements. The classroom training is followed by a 1-week practicum session where USCIS immigration officers apply the classroom training to sample E-2 petitions. Specifically, immigration officers explained to us that during the practicum they are given example cases to which they are to apply their classroom training. After each officer has adjudicated the example case, they discuss how each applied the various E-2 eligibility requirements and reconcile any differences with the assistance of the immigration supervisor facilitating the training. Second, after the 4 weeks of training, USCIS immigration officers begin to adjudicate E-2 petitions under the guidance of an E-2 immigration supervisor. Third, new E-2 immigration officers have 100 percent of their cases reviewed by their supervisor until they are deemed proficient. State and USCIS Officials Identified Challenges in the E-2 Adjudication Process and State Officials Identified the Need for Additional Training State's consular officers and LES, as well as USCIS officials, stated that given the complexity of adjudicating E-2 applications and petitions, and the level of documentation and time required, the E-2 adjudication process can present challenges with respect to the analysis of the E-2 eligibility requirements.
Consular officers and LES we spoke with stated that additional training on E-2 eligibility requirements would be beneficial. USCIS officials said that while E-2 petitions can be challenging to adjudicate, additional training was not necessary. State Officials Identified Challenges and Training Needed for Adjudicating E-2 Visa Applications Consular officers we spoke with noted that E-2 visa adjudications are particularly complicated and resource-intensive, involving potentially complex business issues, and often requiring more documentation and time to adjudicate than is typically needed to adjudicate other visas. Specifically, consular officers at 10 of 14 posts we interviewed stated that E-2 visas are among the most difficult nonimmigrant visas to adjudicate because of the amount of supporting documentation that is required to demonstrate that both the business and applicant meet all eligibility requirements, as well as the time required to prescreen and adjudicate the application package. For example, E-2 application packages can include 200 pages or more of supporting documentation, including a range of detailed business and financial documents (see sidebar). Further, consular officers told us that it can take between 45 minutes and 4 hours to review a single E-2 application with its supporting documents. Consular officers explained that, in contrast, other nonimmigrant visa categories do not require the same amount of time or number of documents to adjudicate. For example, business and tourism nonimmigrant visas typically take less than 10 minutes to adjudicate and do not require that any documentation be submitted by the applicant prior to the adjudication. Consular officers at the 14 posts we visited or interviewed identified challenges with respect to the analysis of the E-2 eligibility requirements. Table 7 provides examples of some of these challenges, as identified by consular officers at the 14 posts.
Substantial investment requirement: No prescribed minimum amount of capital, although it must be substantial in proportion to the cost of the business. Sufficient to ensure the investor’s financial commitment to the successful operation of the business. Large enough to ensure the likelihood of success of the business. Determining substantial investment. Consular officers at 10 of 14 posts indicated that it can be challenging to determine substantiality of capital investment amounts. According to the FAM, there is no set amount of capital which is considered substantial; instead, various factors must be considered to ensure there is a large enough investment to support the business. Consular officers noted that it can be difficult to determine how much capital is needed to support the many types of businesses that consular officers see in E-2 applications, which can range from small restaurants to technology start-ups to large automobile manufacturers. For example, a consular officer may be presented with an application for an investor seeking an E-2 visa to open a business that the consular officer has never seen before in an E-2 visa application, such as an airport internet café that rents hourly sleeping pods to travelers on long layovers. The consular officer may be initially unfamiliar with what is considered to be a more unique type of business, and may not know immediately how much investment would be sufficient to ensure the successful operation of the business. In such cases, the officer might gather additional information from the applicant on similar businesses, which the officer could use to inform their determination as to the amount of capital that would be needed to support successful operation of the business in the United States. Real and operating business requirement: The business is a real and active commercial or entrepreneurial undertaking that produces goods (i.e. 
commodities) or services for profit, and meets applicable legal requirements for doing business in the particular jurisdiction of the United States. Determining real and operating business. Consular officers at 7 of 14 posts indicated that it can be challenging to determine whether the business is real and operating. Consular officers explained that particularly difficult issues may arise for new businesses, which may not be operational yet at the time of the interview. Consular officers stated that it can be very clear when a business is not yet operating, but that additional analysis is required for newly formed businesses that do not yet have customers or revenue but may have taken other actions to start the business. Consular officers at one post explained it is sometimes very clear that a business is not operating because, for example, the business has not yet made any contracts with clients, does not have a website advertising its services, and has no evidence of any expenses made on behalf of the business. As for newly formed businesses, consular officers at another post we visited provided a hypothetical example of a restaurant whose owner had a lease for the restaurant space, bought equipment, and hired employees, but had not opened to customers yet because it was waiting for the chef to receive an E-2 visa as an essential employee. The officers indicated that in such a hypothetical scenario in which a business's qualification as an E-2 business depends on E-2 visa issuance of a key worker, it may not be immediately clear, without further analysis, whether such a business would be considered real and operating. Manager requirement: The individual is an employee in an executive or supervisory position. Determining manager qualifications. Consular officers at 6 of 14 posts indicated that it can be challenging to determine whether a prospective manager had or will have sufficient executive or supervisory duties to meet the E-2 managerial requirement.
Consular officers provided a hypothetical example in which a consular officer may interview an applicant seeking an E-2 visa to become a manager at a restaurant, but the applicant may not have any prior management experience nor will she have any subordinates in the restaurant. Such a situation may pose challenges to the consular officer to determine if the applicant would be eligible for an E-2 visa as a manager. Officers noted that the FAM requirements did not specifically state that the applicant must have prior experience or subordinates to qualify as a manager. In such situations, consular officers said they might request additional information from the applicant about the restaurant, her skills and experience, and the nature of her managerial role in the business. Essential employee requirement: The individual is employed in a lesser capacity than a manager, but possesses special qualifications (i.e. skills and/or aptitudes) essential to the business’ successful or efficient operations in the United States. Determining essential employee qualifications. Consular officers at 6 of 14 posts indicated that it can be challenging to determine whether a prospective essential employee has special qualifications (i.e. essential skills or aptitudes). Consular officers noted that they can ask questions and obtain information about the applicant’s specialized skills, but that often further research is needed to determine if those skills are essential to the business’ operations in the United States. For example, an officer at one post we interviewed provided a hypothetical example of a pet groomer seeking an E-2 visa as an essential employee for a pet grooming service. Although one might be skeptical that pet grooming is a specialized skill and that such an employee would be considered essential, in such a situation, the officer noted that he would likely conduct further research. 
In doing so, he might determine that the applicant is a well-known expert who specializes in grooming certain breeds of exotic or show animals, and that the grooming service is planning to target that type of animal. Other requirements. Consular officers told us that some of the other E-2 eligibility requirements are not particularly challenging. For example, consular officers at all 14 posts told us that it is relatively straightforward to determine if the applicant has a clear intent to depart the United States upon termination of E-2 status because applicants typically provide an affidavit attesting to their nonimmigrant intent. Further, consular officers stated that it is easy to determine if the applicant is an eligible dependent because consular officers are familiar with local identity information (e.g., birth and marriage certificates) and there are no nationality requirements for dependents. In addition to potential challenges with respect to the analysis of the eligibility requirements, consular officers at 4 of 14 posts also identified challenges in understanding business and financial documents that are provided in support of an E-2 application. For example, at one post we visited, a consular officer explained the challenges he faced in understanding U.S. tax documentation and the differences between various types of corporations. Further, consular managers at two posts stated that officers without prior knowledge of basic business concepts can find E-2 visa adjudication challenging when they first arrive at post. A manager from a third post stated that the complexity of some E-2 visa cases requires knowledge of business and finance acquired through substantial experience or education.
More than marginal business requirement: The investment must be made in a business that has the capacity to generate more than enough income to provide a minimal living for the treaty investor or employee and family, or has the present or future capacity (generally within five years) to make a significant economic contribution. Although LES do not adjudicate visas, LES at 6 of 14 posts also indicated that they had encountered challenges with respect to the analysis of the E-2 eligibility requirements. For example, LES at one post indicated that it can be challenging to determine whether a company is more than marginal (see sidebar) because the size, type, or investment sector of each E-2 company presents unique facts and circumstances. LES at one post told us that they needed additional examples of how applicants can meet the various criteria, which would help the LES flag potential areas of concern for the consular officer. Further, LES also said they faced challenges in understanding some business and financial aspects of prescreening. For example, LES at two posts stated that determining the nationality of large companies can be difficult because they need to trace ownership back to the original parent company, and that corporate structures can be very complicated. Given the complexity of adjudicating E-2 visas, the majority of consular officers and consular managers we spoke with stated that additional training and resources would be beneficial, such as online training, conferences to share best practices, or documents clarifying eligibility requirements. Specifically, consular officers at 9 of 14 posts and consular managers at 8 of the 14 posts stated that additional E-2 training or resources would be beneficial to consular officers.
For example, a consular manager at one post noted that the additional resources provided on State’s intranet, such as the adjudication guide and case studies, have already helped to improve clarity on the eligibility requirements, but more resources and training are needed. Further, consular managers at 4 posts stated that additional training related to tax and business concepts would be useful. For example, one manager stated that additional training on how to read and analyze U.S. tax returns could be helpful to accurately evaluate a company’s overall financial health and make a determination that a business meets the requirement to be “more than marginal.” Further, LES at all 14 posts in our review also stated that additional training or resources would help them perform their responsibilities. For example, LES at one post we visited stated that additional training and resources that clarify the eligibility standards would allow them to better prepare application packages for the consular officers to adjudicate. Further, consular managers at 9 of the 14 posts in our review also stated that additional training and guidance for LES would be helpful. For example, one consular manager suggested that State develop an online training course for both E-2 adjudicating officers and LES that reviews common business documents. Another manager stated that a training or workshop would provide opportunities to LES and E-2 adjudicating officers to learn best practices from other posts that adjudicate E-2 visas. Although State provides guidance and training on adjudicating E-2 visas, consular officers, managers, and LES identified challenges in the E-2 adjudication process, such as ensuring adjudicators adequately understand supporting financial and business documents. Many of these officials indicated that given the complexity of E-2 adjudications, additional training and resources would help them in making E-2 eligibility determinations. 
State officials noted that eligibility requirements are broadly defined so as to cover various business types and investment amounts. According to the Standards for Internal Control in the Federal Government, management establishes expectations of competence for key roles to help the entity achieve its objectives, which requires that staff have the relevant knowledge, skills, and abilities needed to carry out their responsibilities. Such knowledge, skills, and abilities can be obtained by on-the-job training, formal training, and other training resources, which should be available to all staff performing such roles, regardless of their post. Providing additional E-2 training or related resources would help better ensure that all consular officers and LES prescreening and adjudicating these visas have the necessary knowledge, skills, and abilities to carry out their responsibilities effectively. Such training or other resources should cover topics that include information on E-2 eligibility requirements and how to understand business- and tax-related documents. USCIS Immigration Officers Identified Challenges in Adjudicating Petitions and Noted Ways in Which They Address Them USCIS immigration officers we spoke with described challenges with respect to the analysis of E-2 eligibility requirements, but explained that they are able to overcome these challenges with local resources. For example, USCIS immigration officers indicated that it is sometimes challenging to determine whether a prospective “essential employee” has requisite special qualifications, or a business is “more than marginal.” Specifically, immigration officers indicated that determining if an employee is considered essential depends on the relevant facts and circumstances. Further, immigration officers noted that the non-marginality eligibility requirement can be difficult to determine in some cases because the officer may have to project how successful the business will be in the future.
However, the immigration officers explained that their colocation with all of the other immigration officers who adjudicate E-2 petitions helps to mitigate the challenges because the officers can coordinate with each other to determine how USCIS has typically adjudicated such cases. Generally, the USCIS immigration officers stated that additional training or resources for E-2 adjudication were not needed. E-2 Company Registration Programs Create Processing Efficiencies at Some Posts But State Does Not Have Minimum Standards for Program Implementation As of April 2019, 7 of the top 10 E-2 adjudicating posts worldwide have implemented E-2 company registration programs. An E-2 company registration program is a process by which posts assess companies against applicable E-2 eligibility requirements. Companies that meet eligibility requirements are placed on an approved or registered companies list. Companies on the registered list do not have to be reassessed for eligibility each time one of their employees seeks an E-2 visa, which creates processing efficiencies for these posts. Consular managers stated that E-2 company registration programs are intended to give consular officers reasonable assurance that a company meets the minimum E-2 business and investment eligibility requirements, allowing the adjudicating officer to focus the majority of their effort on evaluating the applicant’s E-2 eligibility. In fact, we found that at posts with E-2 company registration programs, the consular officer may not need to collect or review any supporting documentation related to the company prior to adjudicating the visa. In contrast, E-2 adjudicating posts without an E-2 company registration program would assess both the company and the applicant against the E-2 eligibility criteria each time they review and adjudicate an E-2 visa application.
While State has identified E-2 company registration programs as a potential best practice, these programs are not mentioned in the FAM and State has not developed guidance or minimum standards for how these programs should be implemented. Instead, State has permitted posts to develop and implement their own registration programs, which has led to variation in how the programs are implemented depending on post-specific factors. Specifically, we found that posts with E-2 company registration programs varied in three ways: Registration criteria: 3 of the 7 posts with E-2 registration programs require all companies to register, while the remaining 4 posts established criteria so that only certain companies can register, such as large companies or companies with multiple E-2 visa issuances. For example, at one post, only companies with more than 500 employees in the United States are allowed to register. At posts that require all companies to register, the number of registered companies ranged from approximately 2,200 to 4,000. At posts that allow only certain companies to register, the number of registered companies ranged from about 100 to 200. Documentation requirements: Employees of E-2 registered companies seeking to obtain an E-2 visa provide different types of documentation during their E-2 adjudication, depending on the requirements of the post. For example, at two posts, applicants of registered E-2 companies must provide their resume and a company letter that outlines the applicant’s specific role within the company, and do not need to provide any other supporting documentation regarding the company or underlying investment. At these posts, consular officers review their E-2 company registration database to ensure that the company in question is registered with the post’s E-2 company registration program.
Revetting policy: 2 of the 7 posts with E-2 company registration programs vet registered companies annually, while the remaining 5 posts vet companies every 5 years. Consular managers added that if changes, such as changes in ownership, occur without the post knowing it, prospective applicants may no longer be eligible for the visa. However, according to consular managers, companies on the list are required to contact their post sooner than the 5-year or 1-year renewal period if there are any changes in the company that would impact visa eligibility for company investors or employees. Although such programs may allow posts to more efficiently adjudicate E-2 visas, the variation in these programs may result in different processing of companies and applicants across posts, as well as acceptance of varying levels of risk by posts. The more time a post allows companies before reassessing the company’s eligibility for registration, the more risk that post is assuming, as the companies may no longer meet the eligibility requirements and continue to send or keep employees in the United States on E-2 visas for which they are not eligible. According to Standards for Internal Control in the Federal Government, management should design and implement policies and procedures that enforce management’s directives to achieve the entity’s objectives and address related risks. However, State’s Bureau of Consular Affairs has not provided posts with minimum standards governing the implementation of E-2 company registration programs, and thus, it is unclear whether the variations among these programs are consistent with the agency’s requirements and objectives. Establishing minimum standards for posts that choose to implement such programs would better ensure that all posts’ E-2 visa adjudication processes are aligned with State’s policies, objectives, and risk tolerance.
Some State E-2 Application Documents Were Not Retained as Required State and USCIS require certain information and documents be retained for all E-2 applications and petitions; however, during our file review of State and USCIS E-2 adjudications, we identified that some required documents were missing from State files; USCIS was able to provide copies of all the documents required to be retained for each file we reviewed. State. State’s FAM includes requirements related to the collection of E-2 visa application information for all E-2 principals (i.e., investors, managers, and essential employees). Principal investors provide their information when they complete their application online, which is automatically uploaded to State’s consular database system. However, managers and essential employees provide some information by completing a paper form DS-156E, and the FAM requires officials to scan the forms into each applicant’s record. On the basis of our file review, we estimate that about 20 percent of fiscal year 2018 E-2 application files for managers and essential employees were missing required documentation, either in part or in full. Specifically, 14 percent of E-2 applications were missing the entire DS-156E, and 8 percent (6 of 80) were missing pages of the DS-156E. According to the Standards for Internal Control in the Federal Government, management performs ongoing monitoring of the design and operating effectiveness of the internal control system as part of the normal course of operations. Ongoing monitoring includes regular management and supervisory activities. According to State officials, the responsibility for ensuring that document retention is consistent with standards rests with posts, and consular managers are responsible for ensuring compliance. State officials noted that the Bureau of Consular Affairs does not have an ongoing monitoring process in place to ensure that posts are complying with the FAM requirement.
Developing a process to ensure that posts are retaining all required E-2 visa documentation by monitoring implementation of the requirement could better position State to be able to access applicant information, should it be needed for law enforcement, anti-fraud, or security purposes later. USCIS. According to USCIS officials, USCIS requires the I-129 petition, supporting documentation, and decision letters for refused petitions to be retained for all petitioners. As part of our review of petition files, we requested 124 randomly selected fiscal year 2018 petition files for investors, managers, and essential employees. In response, USCIS was able to provide us with all of the required elements for each of the petition files. State and USCIS View Risk of E-2 Fraud Differently and Interagency Coordination On E-2 Fraud Efforts Is Limited State Has Resources Available to Consular Officers to Help Identify Potential Fraud, but State Generally Considers E-2 Visa Fraud to Be Low Risk State has resources to help combat nonimmigrant visa fraud, including for E-2 visas. State officials said that the resources available and the steps they take if E-2 fraud is suspected are similar for all types of visa fraud. If a consular officer reviewing an E-2 visa application suspects fraud—either during prescreening or after the interview—the officer is to make a fraud referral to the post’s fraud prevention manager or to diplomatic security officials. According to State officials, not every case with potential fraud concerns will be referred for additional investigation. If a consular officer does not find the applicant to be qualified or to have overcome the presumption of immigrant intent, the officer may refuse the case without additional fraud assessments. Fraud prevention managers, who are part of State’s Bureau of Consular Affairs, investigate fraud cases and provide information on fraud trends to consular officers.
At some posts, State’s Bureau of Diplomatic Security’s ARSO-Is specialize in criminal investigations of visa fraud and coordinate with local law enforcement. Both fraud prevention managers and ARSO-Is are to conduct additional research to determine if fraud exists, such as through open source searches, interviews, and coordination with other U.S. and local government entities. State officials we spoke with stated that they take fraud in all visa categories seriously, but generally consider E-2 visa fraud to be lower risk relative to other visa categories because they believe the large amount of complex paperwork required for the visa would discourage malicious actors. For example, consular officers at 12 of the 14 posts we interviewed stated that E-2 visas were a low fraud risk. Similarly, consular managers at 10 of the 14 posts stated that E-2 visa fraud was generally not a concern at their post. State headquarters officials attributed the low fraud risk to the large amount of paperwork that is required, which includes complex financial documents and U.S. government-produced tax forms. For example, State headquarters officials indicated that, given the documentation burden for both the applicant and the company, the E-2 nonimmigrant classification may be less susceptible to fraud than other nonimmigrant classifications. According to State’s E-2 fraud data, the number of E-2 fraud referrals has decreased since fiscal year 2015, but the number of confirmed fraud cases was consistent from fiscal years 2014 through 2018, as shown in figure 15. There was an initial increase in referrals from fiscal year 2014 to 2015, which State officials attributed to consular staff more consistently making such requests through the official system of record rather than by email. From fiscal years 2015 through 2018, the number of E-2 visa fraud referrals decreased each year, from 664 in fiscal year 2015 to 280 in fiscal year 2018.
Throughout this time period, the number of confirmed fraud cases stayed about the same, ranging from 39 to 59 cases per year. Although consular officials at 12 of the 14 posts considered E-2 visas to be low fraud risk, consular officers also identified country-specific E-2 fraud trends and indicators that they monitored at their post, as appropriate, such as the type of business, the location of the business, or the nationality of the applicant. Some of the posts in our review have taken additional actions to address E-2 fraud, such as conducting additional fraud reviews and validation studies: Additional fraud review: Consular managers at one post told us that the post has devoted additional resources to ensure that all E-2 visa applications undergo an additional fraud review, given that E-2 visas can have a longer validity period than most nonimmigrant visas. At this post, all E-2 visa applications are sent to the fraud prevention manager and the ARSO-I, both of whom conduct additional research and look for fraud indicators. Validation study: Validation studies determine the extent to which foreign nationals who were issued visas later overstayed or misused their visa, and can be conducted by post officials for any visa classification. One post in our review conducted a validation study that focused on E-2 visas the post had issued to foreign nationals associated with food service companies (e.g., restaurants) to determine how many remained in business and how many E-2 visa holders continued to travel or stay in the United States after the business failed. According to this 2016 validation study, the post had concluded that almost one-quarter of food service companies in its study had failed within about three years, and nearly half of E-2 visa holders for those companies did not depart after the company had failed or continued to travel to the United States on their E-2 visa.
According to the post’s fraud team, the study showed that even prospective E-2 visa enterprises that meet the applicable requirements at the time of application can become unqualified over time, and that adjudicators should take long-term viability into account when determining the marginality of a business. The post’s fraud team also stated that other posts may wish to consider standardized follow-ups for approved E-2 enterprises and routine confirmations of vetted E-2 companies as the E-2 visa category continues to grow in popularity. USCIS Has Identified E-2 Fraud as a Priority and Is Analyzing Its Fraud Risk in a Pilot Project USCIS officials stated they consider E-2 fraud to be a significant issue and take several steps to identify fraud, including fraud referrals, fraud assessment technology, and site visits. First, according to USCIS officials, immigration officers reviewing the E-2 petition look for anomalies and other indicators of fraud and send a fraud referral for any potential fraud cases by forwarding the case to the service center’s fraud detection office. Immigration officers in the fraud detection office then are to conduct further research, such as reviewing open sources (e.g., the company website), and may request a site visit to the business. Second, USCIS uses a fraud assessment technology on all petitions to determine if an E-2 company exists and is financially viable. Specifically, the Validation Instrument for Business Enterprises (VIBE) is a technology that helps immigration officers determine if a business is operating, is financially strong and viable, has good credit, and has not been involved in past fraud. According to USCIS officials, VIBE reviews existing business-related information on an enterprise, such as an office supply store account or utility bills, to determine if it is real and operating. Finally, immigration officers may request site visits based on their review of the application or VIBE results.
During such site visits, immigration officers visit the business location to determine if the business is performing as stated in the petition and in compliance with the E-2 visa eligibility requirements. The results of the site visit are sent back to the originating location for adjudication. According to USCIS officials, if a larger conspiracy is uncovered, such as fraud involving multiple beneficiaries, the immigration officer may make a referral to U.S. Immigration and Customs Enforcement for further criminal investigation and potential prosecution; the officials added that this is very rare. USCIS immigration officers made 252 requests for site visits based on VIBE results from fiscal years 2014 through 2018 for E-2s. Of these site visits, USCIS determined there was confirmed fraud for 25 percent (63), as shown in figure 16. Of the 63 confirmed fraud cases, 42 enterprises were not located at the site provided in the petition and 14 enterprises had provided fraudulent documents or otherwise misrepresented the facts. For example, in one case, the beneficiary paid a dental laboratory to place her in a fictitious position as office manager so that she could obtain E-2 status, but the beneficiary had never worked there. In another example, an investor seeking E-2 status in May 2015 submitted a petition based on a discount store that had gone out of business in January 2013. According to USCIS officials, when fraud is confirmed, the immigration officer will deny the petition, review any pending or previously approved petitions from the petitioner, and the fraud finding will be entered into VIBE, which affects the applicant’s ability to obtain future immigration benefits, including visa application or petition approvals from the U.S. government. State consular officers can also request that USCIS conduct site visits to assist in their adjudication of E-2 visa applications, but USCIS data indicate that such requests are rare.
According to USCIS, the agency received 10 external site visit requests from State from fiscal years 2014 through 2018. Of the 10 requests, USCIS conducted site visits to seven businesses and found one incident of fraud involving a restaurant. According to State officials, site visits are considered to be resource intensive for USCIS and can take several weeks or months to complete. The officials added that if a consular officer determines that an applicant is unqualified for the visa, it would not be considered an effective use of the post’s resources to conduct additional investigations or request a U.S.-based site visit from USCIS. Based on the results of the site visits and other factors, USCIS officials stated that they have prioritized E-2 fraud, and initiated a site visit pilot program in February 2018 to better determine the extent to which fraud exists. This pilot program focuses on businesses associated with individuals who were approved for an E-2 status extension and who meet certain eligibility criteria. According to USCIS officials in July 2019, the most commonly encountered fraud or noncompliance issues thus far have involved enterprises that were not operational, not engaged in any business activities, or were not operating as stated in the petition. USCIS plans to continue the E-2 pilot into fiscal year 2020 and to share the results with State. State and USCIS Efforts to Coordinate E-2 Anti-Fraud Activities Are Limited State’s and USCIS’s respective roles in the E-2 process, along with a current lack of coordination on E-2 anti-fraud efforts, may contribute to the differences in the way the agencies view and prioritize the risks of E-2 fraud. Drawing on the results of its site visit pilot project, USCIS has said it views E-2 fraud as a significant issue and plans to prioritize efforts to combat E-2 fraud moving forward.
While State has taken some steps to examine and combat E-2 visa fraud, officials we spoke with at posts and at headquarters told us that E-2 fraud is rare and generally low risk. The E-2 validation study that one post conducted, noted earlier, also provided evidence that E-2 fraud occurred, at least in that business sector from that particular country. While it is possible that additional validation studies across different posts and business sectors would uncover fraud trends, State officials noted that validation studies are resource intensive, and that E-2 visas represent only a small fraction of the total visas they adjudicate each year. Therefore, State officials stated that such studies are likely to be focused on more common visa types, such as tourist and business visitor visas. Although some factors may explain why USCIS and State view the risk of E-2 fraud differently, both agencies encounter foreign nationals seeking the E-2 status in the United States. Officials from both agencies stated that USCIS may be more likely to uncover fraud than State because USCIS processes E-2 status extensions for individuals already in the United States. E-2 principals (i.e., investors, managers, and essential employees) would have had up to 2 years to try to run, manage, or work for their business, with the intention to depart at the conclusion of their authorized period of stay. If they failed, gave up, or ended employment, but still sought an E-2 status extension, any materially false representations made as to their eligibility could be considered fraudulent. Officials from both agencies suggested that State may be adjudicating visas for more new businesses, which may qualify at the time of initial adjudication but could ultimately fail. 
However, during our observations and file reviews, we found that USCIS also adjudicates petitions for new businesses for beneficiaries seeking to change to E-2 status, and State also adjudicates E-2 visa applications for existing businesses that have previously been associated with E-2 visa holders. Further, neither State nor USCIS collects data that track the number of new businesses seeking E-2 status for their employees. As such, we could not verify this explanation for why USCIS may be more likely to encounter fraud among individuals seeking E-2 status than State. Both State and USCIS collect information that could potentially be useful to each other’s activities to identify and address E-2 fraud, but the agencies do not have a mechanism for regular coordination on fraud. For example, as previously noted, consular officers adjudicating E-2 visas overseas learn to identify country-based fraud trends as well as trends specific to E-2 visas. USCIS immigration officers can identify similar trends, and the results of USCIS’s site visits may further identify potential fraud trends that would be useful for State consular officers. However, interagency coordination is ad hoc, generally among headquarters officials only, and relatively rare. For example, both State and USCIS officials stated that the main formal mechanism of coordination on all E-2 visa issues is a quarterly teleconference. However, such meetings were canceled 7 out of 8 times in fiscal years 2017 and 2018 because officials did not identify agenda topics to discuss, according to State and USCIS officials. Further, such meetings have not included discussions of E-2 fraud issues. State officials stated that they share country fraud summaries with USCIS. However, these fraud summaries do not focus on E-2 visas, but on fraud trends more generally.
According to A Framework for Managing Fraud Risks in Federal Programs, agencies should establish collaborative relationships with stakeholders to share information on fraud risks and emerging fraud schemes, as well as lessons learned related to fraud control activities. Managers can collaborate and communicate through a variety of means, including task forces, working groups, or communities of practice. Although State and USCIS have some informal mechanisms in place to share fraud-related information, such as emails among headquarters officials and the sharing of high-level country fraud reports, formal information sharing mechanisms have not been regularly operating. Although the two entities view the risk of E-2 visa fraud differently, both State’s and USCIS’s E-2 anti-fraud efforts would benefit from ensuring that they regularly share information on fraud risks. Doing so will help both entities to better identify emerging fraud trends, prevent foreign nationals from fraudulently obtaining E-2 status, and identify areas for potential collaboration and resource sharing. Conclusions The E-2 nonimmigrant classification helps to facilitate foreign investment in the United States, which contributes to the U.S. economy each year. State and USCIS share the responsibility for adjudicating thousands of E-2 visa applications and petitions annually for foreign nationals seeking E-2 status. Both State and USCIS officials stated that given the complexity of adjudicating E-2 applications and petitions, and the level of documentation and time required, the E-2 adjudication process can present challenges with respect to the analysis of E-2 eligibility requirements. State consular officers, managers, and LES noted that additional training and resources are needed to help them better understand the eligibility requirements and supporting financial and business documents.
Enhancing E-2 training and providing additional resources such as documents clarifying E-2 eligibility requirements would help better ensure that consular officers and LES prescreening and adjudicating these visas have the necessary knowledge, skills, and abilities to carry out their responsibilities effectively across posts worldwide. Additionally, some overseas State posts have developed E-2 company registration programs to more efficiently process and adjudicate E-2 visa applications. Although there are benefits to such programs, the variation in the standards of these programs may result in different processing of companies and applicants across posts, as well as acceptance of varying levels of risk by posts. Establishing guidance or minimum standards for posts that choose to implement such programs would better ensure that all posts' E-2 visa adjudication processes are consistent with State's policies, objectives, and risk tolerance. Further, State and USCIS require that certain information and documents be retained for all E-2 applications and petitions; however, during our file review of State and USCIS E-2 adjudications, we identified that some required documents were missing from State files. Ensuring that posts retain all required E-2 documentation would better position State to access applicant information that could later be needed for law enforcement, anti-fraud, or security purposes. Finally, although State and USCIS collect information that could potentially be useful to each other's activities to address E-2 fraud, coordination between State and USCIS on E-2 fraud has been ad hoc, generally among headquarters officials only, and relatively rare. Developing regular coordination mechanisms would help both entities to better identify emerging fraud trends and prevent foreign nationals from fraudulently obtaining E-2 status. 
Recommendations for Executive Action We are making the following five recommendations to State and USCIS: The Assistant Secretary of State for Consular Affairs should provide additional training or related resources to consular officers and locally employed staff on adjudicating E-2 visas, to cover topics that include the E-2 eligibility requirements and understanding business- and tax- related documents. (Recommendation 1) The Assistant Secretary of State for Consular Affairs should develop minimum standards for E-2 company registration programs, such as standards for how often companies are to be re-vetted. (Recommendation 2) The Assistant Secretary of State for Consular Affairs should develop and implement a process to ensure that posts maintain required E-2 visa application documentation. (Recommendation 3) The Secretary of State, in coordination with the Director of USCIS, should establish regular coordination mechanisms to share information on E-2 fraud risks. (Recommendation 4) The Director of USCIS, in coordination with the Secretary of State, should establish regular coordination mechanisms to share information on E-2 fraud risks. (Recommendation 5) Agency Comments and Our Evaluation We provided a draft of this report to State and DHS for their review and comment. State and DHS provided written comments, which are reproduced in appendices IV and V, respectively. Both State and DHS concurred with our recommendations. State and DHS also provided technical comments, which we incorporated as appropriate. State concurred with all four recommendations addressed to it in the report (recommendations 1, 2, 3, and 4), and described actions it plans to take in response. To address recommendation 1, State plans to increase the frequency and specificity of E-2 content through webinars, workshops, and guidance, and by developing subject matter experts domestically who can provide consultative services on an as-needed basis for business and tax-related documents. 
To address recommendation 2, State plans to require a minimum 5-year mandatory review of companies registered at any post using a company registration program. To address recommendation 3, State plans to reinforce its E-2 visa documentation retention policy in regular policy guidance to consular managers. To address recommendation 4, State plans to hold regular, high-level coordination meetings with USCIS to include coordination on E visa adjudication standards. DHS concurred with recommendation 5, and stated that the department plans to share the results of its site visits during quarterly coordination meetings with State. These actions, if effectively implemented, should address the intent of our recommendations. We are sending copies of the report to the Acting Secretary of Homeland Security, the Secretary of State, and appropriate congressional committees. In addition, the report will be available at no charge on GAO's website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Rebecca Gambler at (202) 512-8777 or gamblerr@gao.gov or Jason Bair at (202) 512-6881 or bairj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. Appendix I: Objectives, Scope, and Methodology This report reviews the Department of State's (State) and the Department of Homeland Security's (DHS) U.S. Citizenship and Immigration Services' (USCIS) oversight and implementation of E-2 adjudications. Specifically, this report examines (1) the outcomes and characteristics of foreign nationals who have sought or received E-2 status during fiscal years 2014 through 2018, (2) State's and USCIS's policies and procedures to ensure that individuals meet E-2 eligibility requirements, and (3) State's and USCIS's efforts to assess and address potential fraud in the E-2 adjudication process. 
To determine the outcomes and characteristics of foreign nationals who have sought or received E-2 status, we analyzed data from State’s Bureau of Consular Affairs and USCIS on E-2 visa applications and petitions adjudicated from fiscal years 2014 through 2018. For example, the data we analyzed included E-2 role (e.g., investor, manager, essential employee, and dependents), adjudication outcome (i.e., issued or refused), and nationality, among other data points. To assess the reliability of the E-2 data, we interviewed State and USCIS officials that maintain the data and checked the data for missing information, outliers, and obvious errors, among other actions. For example, we identified and removed duplicate entries in State’s data. On the basis of these steps, we determined that the data were sufficiently reliable for the purposes of our reporting objectives, including providing summary statistics on E-2 adjudications, outcomes, and the characteristics of those seeking E-2 status. To obtain additional data points, such as types of business and investment amount, we analyzed generalizable stratified random samples of E-2 visa applications and petitions adjudicated in fiscal year 2018. Specifically, we reviewed 124 E-2 petitions from USCIS and 120 State applications for E-2 investors, managers, and essential employees. The documents in our file review included, for example, State’s DS-160 online nonimmigrant visa application and DS-156E supplemental application, USCIS’s I-129 petition for nonimmigrant workers, and supporting documents, when available. To collect information from the applications and petitions, we created a data collection instrument and established standard procedures to ensure that we accurately collected the information from the original forms. 
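The data reliability checks described above (screening for missing information and removing duplicate entries) can be sketched in a few lines of Python. This is an illustrative example only, not GAO's actual procedure; the record fields (`case_id`, `outcome`, `nationality`) are hypothetical stand-ins for the data points described.

```python
# Illustrative sketch of basic data reliability checks: deduplicate records
# by a key and count missing values in required fields. Field names are
# hypothetical; real adjudication data would have many more fields.

def check_records(records, key_fields, required_fields):
    """Drop duplicate records (by key_fields) and tally missing required fields."""
    seen = set()
    deduped = []
    missing_counts = {f: 0 for f in required_fields}
    for rec in records:
        key = tuple(rec.get(f) for f in key_fields)
        if key in seen:
            continue  # duplicate entry: keep only the first occurrence
        seen.add(key)
        deduped.append(rec)
        for f in required_fields:
            if not rec.get(f):
                missing_counts[f] += 1  # flag empty or absent values
    return deduped, missing_counts
```

Such checks surface the kinds of issues noted in the text, for example the duplicate entries identified and removed from State's data.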
We chose sample sizes to achieve precision levels for a percentage estimate of plus or minus 10 percentage points for important sub-populations, such as denied petitions and role (e.g., investor, manager, and essential employee). As a result, all percentage estimates presented in this report have a precision of plus or minus 10 percentage points or fewer, unless otherwise noted. Further, we classified the types of businesses in the applications and petitions using the North American Industry Classification System by conducting a content analysis of the business description field in the applications and petitions to group related business types into larger groups, such as food service and manufacturing. In addition, we collected and analyzed data and information from USCIS and U.S. Customs and Border Protection on post-adjudication outcomes, including changing status from E-2 to another nonimmigrant category, adjusting from E-2 status to lawful permanent residency, and E-2 nonimmigrants who remain in the United States beyond the expiration of their authorized period of stay, known as overstays. We present the results of this analysis in appendix III. To assess the reliability of these data, we interviewed officials that maintain the data and checked the data for missing information, outliers, and obvious errors, among other actions. On the basis of these steps, we determined that the data were sufficiently reliable for the purpose of providing summary statistics on E-2 post-adjudication outcomes. To assess State and USCIS policies and procedures to ensure that individuals meet E-2 eligibility requirements, we reviewed relevant State and USCIS guidance documentation, including State's Foreign Affairs Manual and USCIS's E-2 standard operating procedures. We also reviewed relevant provisions of the Immigration and Nationality Act and implementing regulations, which set forth the E-2 eligibility requirements. 
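The plus or minus 10 percentage point precision target behind the sample sizes discussed earlier in this appendix can be illustrated with the standard margin-of-error formula for a sample proportion. This is a simplified sketch: it uses the conservative simple-random-sample formula at p = 0.5 and a 95 percent confidence level, whereas the samples actually drawn were stratified, so it is an approximation for illustration only.

```python
import math

# Illustrative (not GAO's actual computation): half-width of a 95% confidence
# interval for an estimated proportion from a simple random sample, evaluated
# at the conservative worst case p = 0.5.

def margin_of_error(n, p=0.5, z=1.96):
    """Margin of error for a sample proportion at confidence multiplier z."""
    return z * math.sqrt(p * (1 - p) / n)

# For the 124 USCIS petitions and 120 State applications reviewed, this
# formula gives roughly 8.8 and 8.9 percentage points, respectively --
# within the plus or minus 10 percentage points cited in the report.
```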
We interviewed officials from State’s Bureau of Consular Affairs and Foreign Service Institute, and USCIS on their respective agencies’ E-2 processes and procedures, as well as training provided to State’s consular officers and USCIS’s immigration officers. Further, we assessed State’s and USCIS’s policies and procedures to ensure that individuals meet E-2 eligibility requirements against control environment, control activities, and monitoring internal control standards in Standards for Internal Control in the Federal Government, as well as documentation retention requirements in agency guidance. We conducted site visits to State and USCIS locations that adjudicate E-2 visas and petitions, respectively. For State, we conducted site visits to four posts abroad—London, United Kingdom; Seoul, South Korea; Tokyo, Japan; and Toronto, Canada from October through December 2018. For our site visits, we selected posts that (1) were among the 10 highest E-2 adjudicating posts by volume in fiscal year 2017, (2) had different staffing models for processing E-2 visa adjudications, such as posts that had a single officer specializing in E-2 visas or posts that had all consular officers adjudicate E-2 visas, and (3) were geographically dispersed. During these visits, we observed the prescreening and adjudication of E-2 applications and used a data collection instrument to collect information on the cases we observed, such as adjudication outcome and other non- personally identifiable information about the case. We interviewed consular officers and managers, locally employed staff (LES), fraud prevention managers, and the assistant regional security officer- investigators (ARSO-I), where available, about topics such as E-2 visa adjudication policies, procedures, resources and training available at post. Our observations from these site visits provided useful insights into State’s E-2 adjudication procedures, but are not generalizable to all posts that adjudicate E-2 visas. 
For USCIS, in November 2018, we visited the California Service Center in Laguna Niguel, California—which is the only USCIS service center that adjudicates E-2 petitions—to observe E-2 petition adjudications and interview USCIS officials. In addition to our site visits, we conducted telephone interviews with consular officers and LES who are responsible for prescreening and adjudicating E-2 visa applications at the remaining six of the top 10 posts in terms of annual E-2 adjudications, as well as four randomly selected low-volume posts. The four low-volume posts were selected at random from a list of posts that had adjudicated at least 100 E-2 visa applications in fiscal year 2017. We collected copies of post-specific standard operating procedures and local E-2 visa adjudication tools (e.g., checklists), as available, from the 14 posts we visited or interviewed. Further, we reviewed written responses from the consular managers responsible for supervising E-2 visa adjudications at these 14 posts to a set of questions regarding E-2 adjudication processes and procedures, challenges, E-2 company registration programs, and E-2 training. To determine the efforts that State and USCIS take to assess and address E-2 fraud, we reviewed relevant State and USCIS standard operating procedures and guidance. We interviewed headquarters officials from State and USCIS, such as State's Office of Fraud Prevention Program and USCIS's Fraud Detection and National Security Directorate, on how both agencies identify and address potential E-2 fraud and what, if any, coordination or information sharing occurs between State and USCIS. During our four site visits abroad, we interviewed officials, such as fraud prevention managers and ARSO-Is, on anti-fraud efforts for E-2 visas at their posts, including potential fraud trends. Similarly, we interviewed immigration officers at USCIS's California Service Center on their anti-fraud efforts for E-2 petitions. 
We obtained data from State and USCIS on fraud referrals—that is, cases sent to fraud experts for additional research and review—and the results of fraud site visits from fiscal years 2014 through 2018. To assess the reliability of these data, we interviewed State and USCIS officials that maintain the data and checked the data for missing information, outliers, and obvious errors, among other actions. On the basis of these steps, we determined that the data were sufficiently reliable for the purposes of our reporting objectives, including providing summary statistics on fraud referrals and the results of fraud site visits. Further, we assessed State's and USCIS's anti-fraud efforts against best practices found in A Framework for Managing Fraud Risks in Federal Programs. We conducted this performance audit from July 2018 to July 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: List of Treaty Countries Eligible for E-2 Status The Immigration and Nationality Act requires the existence of a qualifying treaty of commerce and navigation between the United States and a foreign state in order for E-2 visa classification to be accorded to nationals of that foreign state. According to Department of State guidance, such qualifying treaties may include treaties of friendship, commerce and navigation, and bilateral investment treaties. As of June 2019, nationals of the 82 countries listed in Table 7 may be accorded E-2 status pursuant to a qualifying treaty, or pursuant to legislation enacted to extend that same privilege. 
Appendix III: E-2 Adjudication Statistics This appendix presents various statistics on adjudications by State for E-2 visas as well as those by U.S. Citizenship and Immigration Services (USCIS) for E-2 petitions for fiscal years 2014 through 2018. We present these data broken out by fiscal year, outcome (e.g., issued or refused), type (e.g., investor, manager, essential employee, dependent), country of nationality or birth, reason for refusal, and prior nonimmigrant status, if available. Further, we also provide statistics on some post-adjudication outcomes—that is, data on characteristics of those who obtained E-2 status. These outcomes include changes to another nonimmigrant status or lawful permanent residency, or the extent to which E-2 status holders remained in the United States beyond their authorized period of stay, known as overstaying. State For the purposes of this appendix, there are four potential roles for foreign nationals seeking E-2 status. First, a foreign national who has committed funds to a U.S. enterprise and is in a position to develop and direct the operations of the enterprise in which he or she has invested substantial capital is known as an investor. Second, a foreign national employee in an executive or supervisory position is known as a manager. Third, a foreign national employee, in a lesser capacity than a manager, but having special qualifications essential to successful or efficient business operations, is known as an essential employee. Finally, the spouse or qualifying child of an investor, manager, or essential employee is known as a dependent. State consular officers will adjudicate the visa application as either issued or refused. A foreign national seeking E-2 status as an investor, manager, or essential employee is known as a principal, and a spouse or qualifying child of a principal is known as a dependent. 
Foreign nationals seeking E-2 status through USCIS use different forms based on whether they are a principal or a dependent. USCIS immigration officers will generally adjudicate the petition as either approved or denied. Post Adjudication Outcomes for E-2 Status Holders Change of Status From E-2 to Another Nonimmigrant Category. From fiscal years 2014 through 2018, about 5,000 foreign nationals sought to change from E-2 status to another nonimmigrant status. As shown in figure 17 and table 16, most of these requests were to change to academic student status (F-1, 31 percent), temporary workers in specialty occupation status (H-1B, 10 percent), tourist status (B-2, 9 percent), and intracompany transferee executive or manager status (L-1A, 7 percent), as well as dependents of these statuses. Further, about 11 percent of these foreign nationals were requesting to change from one role within E-2 status to another. As previously noted, this could include, for example, a spouse of an E-2 investor later seeking to work at the company as a manager. Adjusting from E-2 Status to Lawful Permanent Resident. From fiscal years 2014 through 2018, over 22,000 foreign nationals adjusted from E-2 status to lawful permanent resident status. The large majority of these (73.1 percent) were employment-based (i.e., sponsored by a U.S. employer), as shown in figure 18 and table 17. Overstays. According to DHS data, a relatively low percentage of foreign nationals with E-2 status—obtained either through an E-2 visa from State or an approval to change to, or extend, their E-2 status from USCIS—overstayed their authorized period of admission compared to other nonimmigrant statuses. From fiscal years 2016 through 2018, DHS reported that the total overstay rate decreased slightly from 1.5 percent to 1.2 percent. Similarly, the overstay rate for E-2 status for the same years decreased from 0.8 percent to 0.6 percent, as shown in table 18. As we previously reported, U.S. 
Customs and Border Protection (CBP) implemented system changes in 2015 that allowed CBP to identify E-2 overstays, along with other nonimmigrant categories beginning in fiscal year 2016. DHS officials stated that the process to track E-2 visa overstays is the same as with other visa categories. They noted that specific visa categories are not prioritized; CBP and U.S. Immigration and Customs Enforcement focus on those overstays where the individual is identified as a national security or public safety risk. Appendix IV: Comments from the Department of Homeland Security Appendix V: Comments from the Department of State Appendix VI: GAO Contacts and Staff Acknowledgments GAO Contacts Staff Acknowledgments In addition to the individuals named above, Adam Hoffman (Assistant Director), Kim Frankena (Assistant Director), Erin O’Brien (Analyst-in- Charge), Juan Pablo Avila-Tournut, Kristen E. Farole, James Ashley, Caitlin Cusati, Eric Hauswirth, Amanda Miller, Sasan J. “Jon” Najmi, Adam Vogt, and K. Nicole Willems made significant contributions to this report.
Why GAO Did This Study Foreign nationals from 82 countries may obtain E-2 nonimmigrant investor status in the United States. The E-2 nonimmigrant classification allows an eligible foreign national to be temporarily admitted to the United States to direct the operations of a business in which they have invested a substantial amount of capital, or to work in an approved position (e.g., manager or essential employee). To obtain E-2 status, a foreign national can apply through State for an E-2 visa abroad, or, if already in the United States, petition USCIS to extend or change to E-2 status. GAO was asked to review State's and USCIS's E-2 adjudication process. This report addresses: (1) outcomes and characteristics of foreign nationals who sought or received E-2 status from fiscal years 2014 through 2018, (2) policies and procedures for ensuring that individuals meet E-2 eligibility requirements, and (3) efforts to assess and address potential E-2 fraud. GAO analyzed State and USCIS data on E-2 adjudications, generalizable samples of E-2 visa applications and petitions, and relevant documents. GAO interviewed officials at 14 State posts abroad, selected based on E-2 application volume and other factors, and observed E-2 adjudications at four of these posts and USCIS's California Service Center. What GAO Found The Department of State (State) and U.S. Citizenship and Immigration Services (USCIS) annually adjudicated about 54,000 visa applications or petitions from fiscal years 2014 through 2018 for foreign nationals seeking E-2 nonimmigrant status, over 80 percent of which were approved. About 80 percent of E-2 adjudications were for State visa applications, and the remaining 20 percent were for USCIS petitions to extend or change to E-2 status. Generally, about half of the foreign nationals seeking E-2 status were investors, managers, or essential employees of an E-2 business, and the other half were their spouses or children. 
State and USCIS have guidance, procedures, and training intended to help consular and immigration officers ensure foreign nationals meet E-2 eligibility requirements; however, officials GAO interviewed from both agencies identified challenges in the E-2 adjudication process. State. Consular officers noted that E-2 visa adjudications are complicated and resource-intensive, often requiring more documentation and time to complete than other visas. For example, the requirement that the investment in the business be substantial does not prescribe a minimum capital amount. Rather, the investment must be large enough to support the likely success of the business, among other criteria. Consular officers at 10 of 14 posts GAO interviewed indicated that determining the investment's substantiality is difficult for newly encountered business types. Providing additional E-2 training or related resources would help ensure that consular officers and locally employed staff have the necessary knowledge and abilities to carry out their responsibilities. USCIS. Officials identified similar challenges with respect to E-2 adjudications. However, officials stated that colocating immigration officers who adjudicate E-2 petitions helps to mitigate the challenges because the officers can communicate with each other on how USCIS has typically adjudicated such cases. State and USCIS have resources to address E-2 fraud, which includes submitting falsified documents or making false statements material to the adjudication; however, coordination on E-2 anti-fraud efforts is limited. State has anti-fraud efforts in place for all nonimmigrant visa types, but State officials stated that they consider E-2 visa fraud to be lower risk compared to other visas because the large amount of complex paperwork required for the E-2 visa discourages malicious actors. 
USCIS officials consider E-2 fraud to be a significant issue and have taken steps to identify fraud, such as using fraud assessment technology to determine if a business is financially viable and conducting site visits if fraud is suspected. Both State and USCIS collect information that could be useful to each other's anti-fraud efforts, but interagency coordination on E-2 fraud issues is ad hoc and relatively rare. For example, the main formal mechanism of coordination on E-2 visa issues—a quarterly teleconference—was cancelled 7 out of 8 times in fiscal years 2017 and 2018. Coordinating regularly on fraud issues, which is a best practice from GAO's Fraud Risk Framework, will help both entities to better identify emerging E-2 fraud trends and areas for potential resource sharing. What GAO Recommends GAO is making five recommendations, including that State provide more E-2 training or resources to consular officers, and that State and USCIS establish a regular coordination mechanism to share information on E-2 fraud risks. State and USCIS concurred with all five recommendations.
The Federal Government Has Invested in Projects That May Convey Some Climate Resilience Benefits but Does Not Have a Strategic Investment Approach As we reported in October 2019, the federal government has invested in projects that may enhance climate resilience but does not have a strategic approach for investing in high-priority climate resilience projects. Some federal agencies have made individual efforts to manage climate change risk within existing programs and operations, and these efforts may convey climate resilience benefits. For example, the U.S. Army Corps of Engineers’ civil works program constructs flood control projects, such as sea walls, that could convey climate resilience benefits by protecting communities from storms that may be exacerbated by climate change. However, even with individual agency efforts, federal investment in projects specifically designed to enhance climate resilience to date has been limited. As we stated in our Disaster Resilience Framework, most of the federal government’s efforts to reduce disaster risk are reactive, and many revolve around disaster recovery. As a result, we reported in October 2019 that additional strategic federal investments may be needed to manage some of the nation’s most significant climate risks because climate change cuts across agency missions and presents fiscal exposures larger than any one agency can manage. Our analysis shows the federal government does not strategically identify and prioritize projects to ensure they address the nation’s most significant climate risks. In addition, our October 2019 report discusses our past work that shows an absence of government-wide strategic planning for climate change. 
For example, in our March 2019 update to our high-risk list, we reported that one area of government-wide action needed to reduce federal fiscal exposure is in the federal government’s role as the leader of a strategic plan that coordinates federal efforts and informs state, local, and private- sector action. For this 2019 update, we assessed the federal government’s progress since 2017 related to climate change strategic planning against five criteria and found that the federal government had not met any of the criteria for removal from the high-risk list. Specifically, since our 2017 high-risk update, four ratings regressed to “not met” and one remained unchanged as “not met.” Also, although we have made 17 recommendations that address improving federal climate change strategic planning, as of August 2019, no action had been taken toward implementing 14 of those recommendations—including one dating from 2003. Our enterprise risk management framework calls for reviewing risks and selecting the most appropriate strategy to manage them. However, no federal agency, interagency collaborative effort, or other organizational arrangement has been established to implement a strategic approach to climate resilience investment that includes periodically identifying and prioritizing projects. Such an approach could supplement individual agency climate resilience efforts and help target federal resources toward high-priority projects. Six Key Steps Provide an Opportunity for the Federal Government to Strategically Identify and Prioritize Climate Resilience Projects Six key steps provide an opportunity for the federal government to strategically identify and prioritize climate resilience projects for investment, based on our review of reports (including a National Academies report and the U.S. 
Global Change Research Program’s Fourth National Climate Assessment) that discuss adaptation as a risk management process, as well as on international standards, our past work (including our enterprise risk management criteria), and interviews with stakeholders. The six key steps are (1) defining the strategic goals of the climate resilience investment effort and how the effort will be carried out, (2) identifying and assessing high-risk areas for targeted resilience investment, (3) identifying potential project ideas, (4) prioritizing projects, (5) implementing high-priority projects, and (6) monitoring projects and climate risks. (See fig. 1.) In our October 2019 report, we used one domestic and one international example to illustrate these key steps: Louisiana’s Coastal Protection and Restoration Authority (CPRA) coastal master planning effort and Canada’s Disaster Mitigation and Adaptation Fund (DMAF). In the domestic example, to address the lack of strategic coordination, in 2005 the state of Louisiana consolidated coastal planning efforts previously carried out by multiple state entities into a single effort, led by CPRA. CPRA periodically identifies high-priority coastal resilience projects designed to address two primary risks: flooding and coastal land loss. To identify potential projects, CPRA sought project proposals from citizens, nongovernmental organizations, and others. To prioritize projects, CPRA used quantitative modeling to estimate project outcomes under multiple future scenarios of varied climate and other conditions and coordinated with stakeholders to understand potential project impacts. CPRA has published three coastal master plans in which it identified and evaluated potential projects. For example, in its 2017 Comprehensive Master Plan for a Sustainable Coast, CPRA identified $50 billion in high- priority projects to be implemented as funding becomes available. 
In the international example, in 2018 the Canadian government launched the DMAF, a financial assistance program, to provide $1.5 billion (in U.S. dollars) over 10 years for large-scale, nationally significant projects to manage natural hazard risks, including those triggered by climate change. Infrastructure Canada, the entity responsible for administering the DMAF, seeks project ideas from provinces and territories, municipal and regional governments, indigenous groups, and others. These entities apply directly to Infrastructure Canada for funding. According to Canadian officials, two committees of experts—one composed of experts from other federal departments and the other composed of nonfederal experts (e.g., urban planners and individuals with regional expertise)—provide feedback on potential projects. These projects are prioritized based on multiple criteria such as the extent to which they reduce the impacts of natural disasters. Options for Focusing Federal Funding on High-Priority Climate Resilience Projects Have Strengths and Limitations, and Opportunities Exist to Increase Funding Impact As we reported in October 2019, on the basis of our review of relevant reports and our past work, interviews with stakeholders, and illustrative examples, we identified two options—each with strengths and limitations—for focusing federal funding on high-priority climate resilience projects. The options are (1) coordinating funding provided through multiple existing programs with varied purposes and (2) creating a new federal funding source specifically for investment in climate resilience. In addition, our analysis of these sources identified opportunities to increase the climate resilience impact of these two funding options. A strength of coordinating funding from existing sources is access to multiple funding sources for a project. 
For example, one stakeholder we interviewed whose community used federal funding to implement large-scale resilience projects said that having multiple programs is advantageous because when funding from one program is not available—such as when the project does not match that program’s purpose or when there are insufficient funds—funds could be sought from another program. The state of Louisiana’s coastal master planning effort also uses multi-program coordination to fund projects. Specifically, funding for high-priority resilience projects identified in the master plan is provided via several federal and nonfederal programs designed for wetlands restoration, hurricane risk reduction, oil spill recovery, and community development, among other purposes. A limitation of that option, according to CPRA officials, is that coordinating funding from multiple sources could be administratively challenging and could require dedicated staff to identify programs, assess whether projects meet program funding criteria, apply for funds, and ensure program requirements are met. Alternatively, one strength of creating a new federal funding source, such as a federal financial assistance program that could provide loans or grants or a climate infrastructure bank, is that it could encourage cross-sector projects designed to achieve benefits in multiple sectors. For example, according to one stakeholder, such a funding source could allow experts from multiple sectors—such as infrastructure, housing, transportation, and health—to collaborate on projects, leading to more creative, comprehensive approaches to enhance community resilience. However, such a new funding source would have to be created, which would require congressional authorization.
In addition, we identified opportunities to increase the climate resilience impact of federal funding options based on our review of our past work, related reports, an international standard, and the Louisiana and Canadian examples, as well as interviews with stakeholders:

Using both existing and new funding options. Several stakeholders told us that using both funding options—multiple, existing federal programs with varied purposes and a new funding source for high-priority climate resilience projects—in a strategic, coordinated way could help increase the impact of federal investment. Two stakeholders told us that in practice, multiple, existing federal funding sources that are not specific to climate resilience could be coordinated to fund projects when their purposes and rules align and adequate funding is available. A funding source specifically for climate resilience could be used to fund proposed projects when no related program exists or when existing programs do not have sufficient funding available, according to these and other stakeholders.

Helping ensure adequate and consistent funding. Several stakeholders we interviewed identified the need for adequate and consistent funding to implement high-priority climate resilience projects. For example, according to one stakeholder we interviewed, inconsistent, inadequate funding makes it difficult to complete large-scale projects and can lead to additional costs if significant delays occur during which existing work deteriorates. In addition to adequate and consistent funding, funding options should be designed to accommodate long-term projects since high-priority climate resilience projects can take multiple years to design and implement, according to two stakeholders we interviewed.

Encouraging nonfederal investment.
Several stakeholders we interviewed told us that the federal government could use a federal climate resilience investment effort to encourage nonfederal investment in high-priority climate resilience projects, thereby increasing the impact of federal investment. For example, several stakeholders identified the importance of a cost-share component so that funding recipients are invested in a project’s success. Canada’s DMAF encourages nonfederal investment by partially funding projects of national significance and requiring different levels of cost-share from funding recipients, ranging from 25 percent for indigenous recipients to 75 percent for private-sector and other for-profit recipients. Several stakeholders also identified potential funding mechanisms—for example, public-private partnerships and loan guarantees—that could leverage federal dollars to encourage additional investment in climate resilience projects by nonfederal entities, including the private sector.

Encouraging complementary resilience activities. To increase the impact of federal investment in climate resilience, a federal investment effort presents an opportunity to encourage complementary resilience activities by nonfederal actors such as states, localities, and private-sector partners, based on interviews with several stakeholders, the Canadian example, and reports we reviewed. For example, this could include establishing conditions that funding recipients must meet in exchange for receiving federal funding. Alternatively, the federal government could use incentives (e.g., providing greater federal cost-share or giving additional preference in the project prioritization process) to encourage complementary resilience activities by nonfederal actors. Our Disaster Resilience Framework states that incentives can make long-term, forward-looking risk reduction investments more viable and attractive among competing priorities.
The federal government could use these conditions and incentives to encourage several types of complementary resilience activities by nonfederal actors. For example, the federal government could encourage the use and enforcement of building codes that require stronger risk-reduction measures. In addition, a federal investment effort could provide an opportunity to encourage communities to limit or prohibit development in high-risk areas to minimize risks to people and assets exposed to future climate hazards. One example of this would be through zoning regulations. Another stakeholder suggested that communities receiving federal funding for resilience projects should be adequately insured against future climate risks so they have a potential source of funding for rebuilding in the event of a disaster.

Allowing funds to be used at various stages of project development. Several stakeholders suggested that federal funds be used for multiple stages of project development—such as project design, implementation, or monitoring—to increase the impact of federal funds. For example, two stakeholders we interviewed told us that resilience projects can require significant amounts of design work to develop an implementable and effective project concept and that making funds available for project design could improve the quality of project proposals, thereby maximizing the impact of federal funds. In addition to providing federal funds for project design, one stakeholder suggested making federal funding available to measure project outcomes (e.g., how effectively projects increased resilience) to improve future decisions by both the federal government and others making resilience investments.

Based on the findings of our October 2019 report, we recommended that Congress consider establishing a federal organizational arrangement to periodically identify and prioritize climate resilience projects for federal investment.
Such an arrangement could be designed using the six key steps for prioritizing climate resilience investments and the opportunities to increase the climate resilience impact of federal funding options that we identified in our report. Chairwoman Castor, Ranking Member Graves, and Members of the Select Committee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time.

GAO Contact and Staff Acknowledgments

If you or your staff have any questions about this testimony, please contact Mark Gaffigan at (202) 512-3841 or gaffiganm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff members who made key contributions to this testimony and the underlying report are Joseph “Joe” Thompson (Assistant Director), Celia R. Mendive (Analyst in Charge), Taiyshawna Battle, and Paige Gilbreath. Also contributing to this report were Alicia Puente Cackley, Colleen M. Candrl, Kendall Childers, Steven Cohen, Christopher Curry, Cindy Gilbert, Kathryn Godfrey, Holly Halifax, Carol Henn, Susan Irving, Richard Johnson, Gwendolyn Kirby, Joe Maher, Gregory Marchand, Diana Maurer, Kirk Menard, Tim Persons, Caroline N. Prado, William Reinsberg, Oliver Richard, Danny Royer, Jeanette Soares, Kiki Theodoropoulos, Sarah Veale, Patrick Ward, Jarrod West, Kristy Williams, Eugene Wisnoski, and Melissa Wolf. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study

Since 2005, federal funding for disaster assistance has totaled at least $450 billion, including a 2019 supplemental appropriation of $19.1 billion for recent disasters. In 2018 alone, 14 separate billion-dollar weather and climate disaster events occurred across the United States, with total costs of at least $91 billion, including the loss of public and private property, according to the National Oceanic and Atmospheric Administration. Disaster costs likely will increase as certain extreme weather events become more frequent and intense due to climate change, according to the U.S. Global Change Research Program, a global change research coordinating body that spans 13 federal agencies. In 2013, GAO included “Limiting the Federal Government’s Fiscal Exposure by Better Managing Climate Change Risks” on its high-risk list. The cost of recent weather disasters has illustrated the need to plan for climate change risks and invest in climate resilience, which can reduce the need for far more costly steps in the decades to come. This statement summarizes GAO’s findings from its October 2019 report on climate resilience and federal investment (GAO-20-127). Specifically, it focuses on (1) the extent to which the federal government has a strategic approach for investing in climate resilience projects; (2) key steps that provide an opportunity to strategically prioritize projects for investment; and (3) the strengths and limitations of options for focusing federal funding on these projects. To perform this work, GAO reviewed about 50 relevant reports and interviewed 35 stakeholders with expertise in climate resilience and related fields, including federal officials, researchers, and consultants. GAO also identified domestic and international examples of governments that invest in climate resilience and related projects.
What GAO Found

The federal government has invested in individual projects that may enhance climate resilience, but it does not have a strategic approach to guide its investments in high-priority climate resilience projects. In GAO’s March 2019 update to its list of federal programs at high risk for fraud, waste, abuse, and mismanagement, or most in need of transformation, GAO reported that one area of government-wide action needed to reduce federal fiscal exposure is in the federal government’s role as the leader of a strategic plan that coordinates federal efforts and informs state, local, and private-sector action. For this 2019 update, GAO assessed the federal government’s progress since 2017 related to climate change strategic planning against five criteria and found that the federal government had not met any of the criteria for removal from the high-risk list. Further, as of August 2019, no action had been taken to implement 14 of GAO’s 17 recommendations to improve federal climate change strategic planning. Additionally, no federal agency, interagency collaborative effort, or other organizational arrangement has been established to implement a strategic approach to climate resilience investment that includes periodically identifying and prioritizing projects. Such an approach could supplement individual agency climate resilience efforts and help target federal resources toward high-priority projects. Based on its review of prior GAO work, relevant reports, and stakeholder interviews, GAO found six key steps that provide an opportunity for the federal government to strategically identify and prioritize climate resilience projects for investment.
These are (1) defining the strategic goals of the climate resilience investment effort and how the effort will be carried out, (2) identifying and assessing high-risk areas for targeted resilience investment, (3) identifying potential project ideas, (4) prioritizing projects, (5) implementing high-priority projects, and (6) monitoring projects and climate risks. GAO also identified two options—each with strengths and limitations—for focusing federal funding on high-priority climate resilience projects. The options are (1) coordinating funding provided through multiple existing programs with varied purposes and (2) creating a new federal funding source specifically for investment in climate resilience. In addition, GAO identified opportunities to increase the impact of federal funding options on climate resilience, including ensuring adequate and consistent funding and encouraging nonfederal investment in climate resilience.

What GAO Recommends

Congress should consider establishing a federal organizational arrangement to periodically identify and prioritize climate resilience projects for federal investment. Such an arrangement could be designed using the six key steps for prioritizing climate resilience investments and the opportunities to increase the climate resilience impact of federal funding options that GAO identified in its October 2019 report.
gao_GAO-19-467
Background

As Cigarette Sales Have Declined, Sales of Other Tobacco Products Have Increased as a Percentage of the Smoking Tobacco Market

As sales of cigarettes generally decreased over the past 10 years, combined sales of roll-your-own tobacco, pipe tobacco, small cigars, and large cigars have increased as a percentage of the total market. Figure 1 shows a sample of these smoking tobacco products. As shown in figure 2, while the cigarette share of the smoking tobacco market has decreased, cigarette sales continue to dominate the market for smoking tobacco products. Cigarette sales fell from 350.3 billion cigarettes in fiscal year 2008 to 236.9 billion cigarettes in fiscal year 2018, and their share of the smoking tobacco market declined from 93.5 percent to 87.3 percent. During this same period, the combined sales of roll-your-own tobacco, pipe tobacco, small cigars, and large cigars increased from the equivalent of 24.5 billion sticks in fiscal year 2008 to 34.6 billion sticks in fiscal year 2018, an increase from 6.5 percent to 12.8 percent of the total market for smoking tobacco products. Although electronic cigarettes are growing in popularity among U.S. youth according to the FDA, they are not included in the sales data on smoking tobacco products represented in figure 2. Electronic cigarettes are not currently taxed under the Internal Revenue Code as a tobacco product. Accordingly, corresponding data on electronic cigarette sales are not available.

Federal Excise Tax Rates on Tobacco Products Were Last Increased in 2009 under CHIPRA

Federal excise tax rates on different tobacco products are calculated in different ways. Cigarettes and small cigars are taxed on a per unit basis—the number of sticks. Roll-your-own and pipe tobacco are taxed by weight. Before CHIPRA, the federal excise tax rate on cigarettes was higher than the rates on roll-your-own tobacco, pipe tobacco, and small cigars.
In 2009, Congress passed CHIPRA and significantly raised the tax rates on these four products, equalizing the rates for cigarettes, roll-your-own tobacco, and small cigars. CHIPRA also increased the tax rate for pipe tobacco, among other products, but not to the level of the other three products mentioned. Table 1 shows the increases in federal excise tax rates under CHIPRA for these four products. As shown in figure 3, CHIPRA equalized—on a comparable per stick basis—federal excise tax rates for cigarettes, roll-your-own tobacco, and small cigars but not for pipe tobacco. As a result, of the three cigarette products shown previously in figure 1, the cigarette made with pipe tobacco (marked as number 2) is taxed at a much lower rate than either the factory-made cigarette (number 3) or the cigarette made with roll-your-own tobacco (number 1). CHIPRA also increased the federal excise tax rate on large cigars. Large cigars are unique among tobacco products in that the tax rate is ad valorem—calculated as a percentage of the manufacturer’s or importer’s sale price—up to a maximum tax (currently $402.60) per thousand sticks. CHIPRA increased the ad valorem rate for large cigars from 20.72 percent to 52.75 percent of the manufacturer’s or importer’s sale price, up to a maximum of $402.60 per thousand sticks (see table 2). To reduce federal excise taxes, manufacturers of inexpensive small cigars have an incentive to modify their product to qualify for the lower-taxed large cigar category by adding weight. For example, manufacturers of cigars with a sale price of $50 per thousand would pay $26.38 per thousand in federal excise taxes if the cigar qualified as large cigars compared to $50.33 per thousand if they qualified as small cigars. Consequently, a manufacturer of small cigars would experience a tax savings of $23.95 per thousand if it changed the product to qualify as a large cigar.
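As a quick check of the arithmetic in the example above, the two tax treatments can be sketched as follows. The rates are those cited in this report; the function name is illustrative only.

```python
# Federal excise tax comparison for a cigar priced at $50 per thousand,
# using the post-CHIPRA rates cited above.
SMALL_CIGAR_TAX_PER_1000 = 50.33    # dollars per thousand sticks
LARGE_CIGAR_AD_VALOREM = 0.5275     # 52.75% of the sale price
LARGE_CIGAR_CAP_PER_1000 = 402.60   # maximum tax per thousand sticks

def large_cigar_tax(sale_price_per_1000: float) -> float:
    """Ad valorem tax per thousand large cigars, subject to the cap."""
    return min(LARGE_CIGAR_AD_VALOREM * sale_price_per_1000,
               LARGE_CIGAR_CAP_PER_1000)

price = 50.00                        # sale price per thousand, per the example
as_large = large_cigar_tax(price)    # about $26.38 per thousand
as_small = SMALL_CIGAR_TAX_PER_1000  # $50.33 per thousand
savings = as_small - as_large        # about $23.95 per thousand
```

The cap only binds for expensive cigars: at 52.75 percent, the $402.60 maximum is reached at a sale price of roughly $763 per thousand sticks.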
In figure 1, although the small cigar (marked as number 4) and the large cigar (number 5) are similar in appearance, they are likely taxed at significantly different rates, depending on the price of the large cigar.

Treasury Administers and Collects Federal Excise Taxes on Domestic Tobacco Products

Domestic manufacturers and importers of tobacco products must obtain a permit from TTB before engaging in business. TTB collects federal excise taxes on domestic tobacco products when these products leave manufacturing facilities. CBP, within the Department of Homeland Security, collects the federal excise taxes on imported tobacco products after those products are released from Customs custody. Tobacco products—including roll-your-own tobacco, pipe tobacco, small cigars, and large cigars—are broadly defined in the Internal Revenue Code (see table 3). Roll-your-own tobacco and pipe tobacco are defined by such factors as the use for which the product is suited and how the product is offered for sale, as indicated by its appearance, type, packaging, and labeling. These definitions do not specify any physical characteristics that would differentiate pipe tobacco from roll-your-own tobacco, and TTB faces challenges in distinguishing these two products for tax collection purposes. We reported in 2014 that according to government officials, representatives of nongovernmental organizations, and industry, the new pipe tobacco products introduced after CHIPRA had minimal, if any, differences from roll-your-own tobacco products. We further reported in 2014 that TTB took rulemaking actions intended to more clearly differentiate the two products. As of May 2019, TTB was still finalizing its regulatory approach for distinguishing between the two products. According to TTB officials, TTB continues to face the challenges inherent in identifying specific physical characteristics that clearly distinguish pipe tobacco from roll-your-own tobacco.
TTB officials have discussed the complexity of administering the federal excise tax on large cigars because it is calculated as a percentage of the manufacturer’s or importer’s sale price, up to a maximum tax per thousand sticks. We reported in 2014 that TTB’s efforts to monitor and enforce tax payments on large cigars became more complex after CHIPRA as more manufacturers and importers determined their tax liability based on the sale price per stick rather than simply paying the set maximum tax. In addition, we reported that according to TTB officials some large cigar manufacturers and importers began to restructure their market transactions to lower the sale price for large cigars and obtain tax savings based on a lower ad valorem rate. According to TTB officials, some manufacturers and importers, for example, were “layering” sales transactions by including an additional transaction at a low price before the sale to the wholesaler or distributor and using this low initial price to calculate the tax. According to TTB officials, such transactions are conducted with an intermediary that may have a special contract arrangement with the manufacturer or importer. The intermediary may then add a large markup to the subsequent sale price to the wholesaler or distributor. This added transaction effectively lowers the manufacturer’s or importer’s sale price and thus reduces the taxes collected. TTB officials stated that these types of transactions have continued since 2014, and that taking enforcement actions to counter them is challenging and resource intensive due to their complexity. TTB officials also noted that these activities can range from legal tax avoidance to illegal tax evasion, requiring a case-specific analysis of each transaction. 
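The “layering” effect described above comes down to which sale price the 52.75 percent rate is applied to. The sketch below uses hypothetical prices, chosen only to illustrate the mechanics; the ad valorem rate is the one cited in this report.

```python
# Hypothetical illustration of the transaction "layering" described above.
# The 52.75% ad valorem rate is from the report; all prices are invented.
AD_VALOREM = 0.5275

# Direct sale: manufacturer sells to a wholesaler at $100 per thousand,
# so the tax is computed on the $100 price.
direct_price = 100.00
direct_tax = AD_VALOREM * direct_price

# Layered sale: an initial low-priced transaction to an intermediary
# ($40 per thousand) sets the taxable price; the intermediary then marks
# the cigars up to $100 for the wholesaler, but the markup is untaxed.
layered_price = 40.00
layered_tax = AD_VALOREM * layered_price

tax_reduction = direct_tax - layered_tax  # federal revenue lost per thousand
```

With these invented prices, the layered structure cuts the tax per thousand cigars from about $52.75 to about $21.10, which is why TTB must analyze each transaction to determine whether it is legal avoidance or illegal evasion.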
Large Tax Disparities among Similar Tobacco Products Led to Immediate Market Shifts to Avoid Higher Taxes

Large tax disparities among similar tobacco products created opportunities for tax avoidance and led to immediate market shifts to the lower-taxed products. Specifically, since CHIPRA took effect in 2009, pipe tobacco consumption increased significantly—steeply at first and then leveling off. Over the same period, roll-your-own tobacco consumption fell sharply and then more gradually declined. Similarly, large cigar consumption rose sharply after CHIPRA took effect, while sales of small cigars dramatically decreased and now make up very little of the combined market share for cigars.

Roll-Your-Own Market Shifted to Pipe Tobacco following CHIPRA

Following CHIPRA’s passage, pipe tobacco sales rose steeply, peaking in July 2013 and leveling off since then (see fig. 4). Pipe tobacco sales grew from 5.2 million pounds in fiscal year 2008, the fiscal year before CHIPRA came into effect, to 40.7 million pounds in fiscal year 2018. Pipe tobacco sales reached a high in fiscal year 2013, with consumption exceeding 42.4 million pounds and spiking in July 2013 for a monthly high of over 4.9 million pounds. After this spike, the pipe tobacco market leveled off with monthly sales fluctuating from 2.8 million to 4.1 million pounds. Despite this leveling off, pipe tobacco’s share of the combined roll-your-own and pipe tobacco market continued to increase, reaching approximately 95 percent in fiscal year 2018, which is the highest it had been since CHIPRA took effect. Figure 4 also shows that as pipe tobacco sales increased significantly after the passage of CHIPRA, roll-your-own tobacco experienced an immediate drop in sales. Annual sales of roll-your-own tobacco dropped from 17.0 million pounds in fiscal year 2009 to 6.4 million pounds in fiscal year 2010, before declining further to 2.2 million pounds in fiscal year 2018.
The lowest annual sales for roll-your-own tobacco since CHIPRA occurred in fiscal year 2018. Over the 11 fiscal years from 2008 through 2018, roll-your-own tobacco’s share of the combined roll-your-own and pipe tobacco market decreased from approximately 78 percent to approximately 5 percent. Figure 5 shows that the overall combined sales of pipe tobacco and roll-your-own tobacco were higher after CHIPRA than before CHIPRA. However, the growth rate declined from 0.69 percent before CHIPRA to 0.33 percent after CHIPRA took effect. In April 2012, we reported that the rise in pipe tobacco sales after CHIPRA coincided with the growing availability of commercial roll-your-own machines that enabled customers to produce a carton of roll-your-own cigarettes with pipe tobacco in less than 10 minutes. Not only were customers able to save money through lower taxes on pipe tobacco, but the commercial roll-your-own machines also provided significant time savings compared with rolling cigarettes by hand. The market shift from roll-your-own to pipe tobacco has persisted in recent years despite a change in the legal status of businesses making commercial roll-your-own machines available to consumers, resulting in these machines being less readily available. Following the growth in the availability of commercial roll-your-own machines, Congress passed a law in July 2012 that included a provision adding “any person who for commercial purposes makes available for consumer use…a machine capable of making cigarettes, cigars, or other tobacco products” to the definition of “manufacturer of tobacco products” for tax purposes. As a result, businesses meeting this definition faced increased tax liability and regulatory requirements. According to TTB officials and industry observers, the number of businesses making commercial roll-your-own machines available to customers declined after the 2012 law’s passage.
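The fiscal year 2018 market shares cited above follow directly from the sales figures in pounds:

```python
# Recomputing the approximate fiscal year 2018 shares of the combined
# roll-your-own and pipe tobacco market from the sales figures reported
# above (millions of pounds).
pipe_fy2018 = 40.7
ryo_fy2018 = 2.2

total = pipe_fy2018 + ryo_fy2018
pipe_share = 100 * pipe_fy2018 / total  # approximately 95 percent
ryo_share = 100 * ryo_fy2018 / total    # approximately 5 percent
```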
Nevertheless, combined annual sales of pipe tobacco and roll-your-own tobacco generally have not decreased since the 2012 law was passed. Besides its lower federal excise tax, which creates financial incentives, pipe tobacco has other advantages over roll-your-own tobacco that may also contribute to its sustaining an overwhelming share of the combined roll-your-own and pipe tobacco market. For example, according to the Food and Drug Administration (FDA), pipe tobacco is not covered by the Federal Food, Drug, and Cosmetic Act restrictions, such as the ban on flavor additives, imposed on roll-your-own tobacco and cigarettes. Also, according to FDA, pipe tobacco does not currently have the warning label requirements that are imposed on roll-your-own tobacco and cigarettes. Finally, while makers of roll-your-own tobacco are required to make payments under the Tobacco Master Settlement Agreement, makers of pipe tobacco do not make these payments. This increases the incentive for roll-your-own tobacco users to switch to the cheaper pipe tobacco.

Small Cigar Market Shifted to Large Cigars after CHIPRA

After CHIPRA, sales of lower-taxed large cigars rose sharply, while sales of small cigars plunged (see fig. 6). From fiscal year 2008 through fiscal year 2018, annual sales of large cigars increased from 5.8 billion sticks to 13.1 billion sticks. This increase included a significant spike in demand immediately after CHIPRA’s passage in 2009. The increase in annual sales then largely leveled off after fiscal year 2010, with sales ranging between 11.9 and 13.2 billion large cigars. As a share of the combined market for small and large cigars, large cigar sales have continued to expand. Large cigar sales increased from approximately 50 percent of the combined market in fiscal year 2008 (before CHIPRA) to approximately 92 percent in fiscal year 2010 and reached approximately 97 percent by the end of fiscal year 2018.
Figure 6 also shows that just as large cigar sales increased immediately following CHIPRA, sales of small cigars declined substantially. Annual small cigar sales dropped from 3.6 billion to 1.0 billion sticks between fiscal years 2009 and 2010, and declined further to 0.4 billion sticks by fiscal year 2018. Over the 10-year period between 2008 and 2018, the market share held by small cigars decreased from a high of approximately 50 percent of the combined small and large cigar market in 2008 to approximately 3 percent in fiscal year 2018. Figure 7 shows that the overall combined sales of small and large cigars were higher after CHIPRA than before CHIPRA, although the growth rate for small and large cigars leveled off after CHIPRA took effect in 2009. The growth rate before CHIPRA was 0.78 percent and the growth rate after CHIPRA was 0.03 percent. The makeup of large cigar sales also changed after CHIPRA, with imports replacing domestic cigars as the main contributor to the large cigar market (see fig. 8). When CHIPRA took effect in April 2009, domestic large cigars made up 93.5 percent of the large cigar market. After CHIPRA, the large cigar market began to shift in favor of imports and, by February 2017, imported large cigars consistently became the majority product in the large cigar market. As of September 2018, imported cigars made up 65.6 percent of the large cigar market compared to 93.5 percent held by domestic large cigars in April 2009.

Market Shifts Continue to Reduce Federal Revenue

Market shifts to avoid increased tobacco taxes following CHIPRA have continued to reduce federal revenue. We estimate that federal revenue losses due to market shifts from roll-your-own to pipe tobacco and from small to large cigars range from approximately $2.5 billion to $3.9 billion from April 2009 through September 2018, depending on assumptions about how consumers would respond to a tax increase.
In contrast, total tax revenue collected for smoking tobacco products, including cigarettes, amounted to about $138 billion over the same time period. In 2014, we reported that estimated federal revenue losses due to the market shifts from roll-your-own tobacco to pipe tobacco and from small to large cigars ranged from approximately $2.6 billion to $3.7 billion from April 2009 through February 2014.

Estimated tax revenue losses in the combined roll-your-own and pipe tobacco markets. TTB and CBP collected approximately $2.0 billion in federal excise tax revenue from domestic and imported roll-your-own and pipe tobacco from April 2009 through September 2018. We estimate that during the same period the market shift from roll-your-own to pipe tobacco reduced federal excise tax revenue by an amount ranging from $499 million to $1.2 billion (see fig. 9).

Estimated tax revenue losses in the combined small and large cigar markets. TTB and CBP collected about $7.2 billion in federal excise tax revenue from domestic and imported small and large cigars from April 2009 through September 2018. We estimate that during the same period the market shift from small to large cigars reduced federal excise tax revenue by an amount ranging from $2.0 billion to $2.7 billion (see fig. 10).

Eliminating Tax Disparities between Roll-Your-Own and Pipe Tobacco Would Likely Increase Federal Revenue, While the Effect on Small and Large Cigars Is Unknown

Federal revenue would likely increase if Congress were to equalize the tax rate for pipe tobacco with the rates currently in effect for roll-your-own tobacco and cigarettes. We estimate that federal revenue would increase by a total of approximately $1.3 billion from fiscal year 2019 through fiscal year 2023 if the pipe tobacco tax rate were equalized to the higher rate for roll-your-own tobacco and cigarettes.
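The gap driving such estimates is the per-pound rate differential between the two products. A deliberately naive, static calculation, which assumes no change in consumer behavior and uses the fiscal year 2018 pipe tobacco volume cited earlier, gives a feel for the magnitudes; GAO's actual estimate models price sensitivity, so the figures differ.

```python
# Static, illustrative calculation of the pipe tobacco tax differential.
# Rates and the FY2018 volume are from the report; the unchanged-volume
# assumption is deliberately naive -- actual revenue estimates account for
# consumers buying less (or switching products) when the price rises.
RYO_RATE_PER_LB = 24.78   # roll-your-own tobacco rate, dollars per pound
PIPE_RATE_PER_LB = 2.83   # current pipe tobacco rate, dollars per pound

pipe_sales_lbs = 40.7e6   # fiscal year 2018 pipe tobacco sales, pounds

# Additional revenue in one year if the rate rose and volume held constant.
static_gain = (RYO_RATE_PER_LB - PIPE_RATE_PER_LB) * pipe_sales_lbs
```

Under the no-response assumption this comes to roughly $0.9 billion per year, well above a behavior-adjusted estimate, which illustrates how strongly assumed consumer responses shape revenue projections.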
While equalizing federal excise taxes on small and large cigars should raise revenue based on past experience, the specific revenue effect is unknown because the data needed for conducting that analysis are not available. See appendix I for information on our methodology for estimating the effect on tobacco tax revenue if Congress were to eliminate current tax disparities among similar tobacco products and our assumptions about price sensitivity and other factors.

Estimated Revenue Would Increase If Congress Were to Equalize Federal Tax Rates on Roll-Your-Own and Pipe Tobacco

We estimate that under current tax rates TTB and CBP would collect approximately $825 million in federal excise tax revenue from domestic and imported roll-your-own and pipe tobacco from October 2018 through September 2023. If Congress were to increase the federal excise tax rate on pipe tobacco from $2.83 per pound to the higher roll-your-own tobacco rate of $24.78 per pound, we estimate that $1.3 billion in additional federal revenue would be collected for these two products for the same time period (see fig. 11).

Estimated Revenue Effect of Equalizing Federal Tax Rates on Small and Large Cigars Is Unknown Because Data Are Not Available

The revenue effect if Congress were to equalize federal excise tax rates on small and large cigars is unknown because data for conducting this analysis are not available. Unlike roll-your-own and pipe tobacco, which are each taxed by weight, the tax rate on large cigars is based on an ad valorem rate and the tax rate on small cigars is based on number of sticks. Legislative proposals in the 115th and 116th Congresses for changing the federal excise tax on large cigars have included replacing the ad valorem rate with a rate based on weight, together with a minimum tax per cigar.
Shifting from an ad valorem tax to one based on weight could effectively equalize small and large cigar tax rates and address challenges that TTB currently faces in administering the large cigar tax; however, developing a reliable estimate of the revenue effect of such a change is not possible because the data needed on large cigars to conduct this analysis are not available. Specifically, data are not available on (1) large cigar weights or (2) the distribution of large cigars for which the federal excise tax now being paid is above or below the current rate for small cigars. These data on large cigars are not collected by TTB because such data are not needed to administer and collect large cigar taxes under the current tax structure. In the absence of these data, it is not possible to reliably calculate the potential effect on tax revenue of a counterfactual scenario for equalizing small and large cigar federal excise taxes. See appendix I for more information on the additional data needed for developing an estimate of the revenue effect of equalizing the federal excise tax rate on small and large cigars. As previously discussed, the number of imported large cigars has increased in recent years and the ratio of imported to domestic large cigars in the U.S. market has shifted toward imports. As part of this trend, there has also been an increase in the proportion of imported large cigars that are taxed at a lower rate than the small cigar tax rate of 5.03 cents per stick. From fiscal years 2013 through 2018, 72 percent of imported large cigars were taxed at a rate less than 5.03 cents per stick. As a result of this increase in inexpensive imported large cigars, annual large cigar revenue has begun to decline. Large cigar revenue has declined from a monthly average of $71.5 million over the period from April 2009 to December 2012 to a monthly average of $52.9 million over the period from January 2013 through September 2018. 
Large cigars account for approximately 95 percent of combined small and large cigar revenue. Figure 12 shows actual combined small and large cigar federal excise tax revenue from fiscal year 2008 through fiscal year 2018. The combined average monthly federal revenue for small and large cigars increased significantly after CHIPRA went into effect in 2009, from $21.3 million in fiscal year 2008 to $72.8 million in fiscal year 2010, and remains above the pre-CHIPRA level (see fig. 12). Based on this experience, if Congress were to equalize federal excise taxes through a tax increase for large cigars, revenue should increase. However, the magnitude of the revenue effect of equalizing taxes on small and large cigars is unknown because the data for conducting this analysis are not available. Agency Comments We provided a draft of this report for comments to the Departments of the Treasury, Homeland Security, and Labor. The Department of the Treasury generally concurred with the report’s findings and provided technical comments, which we have addressed as appropriate. The Department of Homeland Security also provided technical comments, which we have addressed as appropriate. The Department of Labor did not provide comments on the report. We are sending copies of this report to the appropriate congressional committees and the Secretary of the Treasury, the Secretary of Homeland Security, the Secretary of Labor, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3149 or gootnickd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. 
Appendix I: Objectives, Scope and Methodology Our objectives were to examine (1) market shifts among smoking tobacco products since the Children’s Health Insurance Program Reauthorization Act (CHIPRA) of 2009 went into effect, (2) the estimated effects on federal revenue if the market shifts following CHIPRA had not occurred, and (3) what is known about the effects on revenue if Congress were to eliminate current tax disparities between smoking tobacco products. Our analysis focuses on roll-your-own tobacco, pipe tobacco, small cigars, and large cigars. It covers sales and federal excise tax payments for these products from October 2001 through September 2018. To address the objectives in this study, we reviewed documents and interviewed agency officials from the Department of the Treasury’s Alcohol and Tobacco Tax and Trade Bureau (TTB), the Department of Homeland Security’s U.S. Customs and Border Protection (CBP), and the Department of Labor’s Bureau of Labor Statistics (BLS). We also interviewed representatives from other organizations working on tobacco and taxation issues to obtain background information on markets, industry, and consumption practices and trends for tobacco products. For objective one, we identified market shifts among smoking tobacco products by analyzing TTB domestic removals data and CBP imports data to identify sales trends across the different domestic and imported tobacco products before and after CHIPRA took effect. For objectives two and three, we estimated the federal revenue effects of differences in federal excise tax rates for tobacco products by analyzing TTB’s and CBP’s revenue data and BLS price data for smoking tobacco products. 
We estimated what the effect on tax revenue collection would have been if the sales trends for roll-your-own and pipe tobacco and for small and large cigars had not been affected by substitution between the products but had been affected by the increase in price due to the tax—in other words, if the market shifts resulting from the substitution of higher-taxed products with lower-taxed products had not occurred. In this report, we refer to this estimated effect on federal tax revenue collection as revenue losses. In addition, we analyzed what is known about the effects on federal revenue if Congress were to eliminate current tax disparities between smoking tobacco products. We assumed that the pipe tobacco federal excise tax was increased and equalized to the level of the roll- your-own tobacco tax as of October 1, 2018, and we calculated the cumulative revenue differential for five fiscal years through September 2023. We assessed the reliability of the data for these objectives by performing data checks for inconsistency errors and completeness and by interviewing relevant officials. We determined that the data used in this report were sufficiently reliable for our purposes. Our estimate of federal revenue losses resulting from differences in federal excise tax rates among smoking tobacco products includes combined tax revenue losses for the roll-your-own and pipe tobacco markets as well as the small and large cigar markets. Our analysis takes into account the expected fall in quantity demanded due to the price increases resulting from the higher federal excise tax rates that CHIPRA imposed on these smoking tobacco products, holding other variables constant. To calculate the range of federal revenue losses, we included high and low estimates based on assumptions about the effect of a price increase on projected sales. 
Economic theory shows that when the price of a product increases, the quantity demanded will adjust downward, decreasing at a rate determined by consumers’ responsiveness to price changes, i.e., the price elasticity of demand. On the basis of our prior work estimating revenue losses from tobacco taxes and a literature review, we determined that the price elasticity for the smoking tobacco products ranges from -0.6 to -0.3, respectively, for the low and high revenue estimates. Our projections also take into account the historic sales trends for these products, the sales trend of cigarettes after CHIPRA, and the tax component of the price. We developed our revenue loss estimate by comparing the actual tobacco tax revenues collected by TTB with a counterfactual scenario. The counterfactual model draws from a model used by Dr. Frank Chaloupka, an economist and a leading scholar who has investigated the effect of prices and taxes on tobacco consumption in numerous publications. In particular, we based our methodology on Dr. Chaloupka’s model calculating the effect of raising cigarette taxes in the State of Illinois. This methodology projects the effect of a future tax increase based on the historic sales trend, the amount of the tax, and the price elasticity of demand. Under this model, when a tax increase is enacted, demand for the product is expected to decline based on the price elasticity and the effect on prices. Following this initial decline, demand for the product is expected to continue at the rate of its historic sales trend. We updated this model by assuming that tobacco products that incur a tax increase to match the tax rate on cigarettes will follow the cigarette sales trend after CHIPRA rather than the product’s historic trend. For example, the roll-your-own tax rate increased under CHIPRA to match the rate on cigarettes because it was viewed as a substitute for cigarettes. 
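The initial, elasticity-driven demand adjustment in this model can be sketched in a few lines of Python. This is a simplified illustration rather than the report's actual model: the function and the baseline figures below are hypothetical, and the sketch assumes full pass-through of the tax to the retail price and a constant price elasticity.

```python
def post_tax_quantity(quantity, price_before, tax_increase, elasticity):
    """Adjust quantity demanded after a tax-driven price increase.

    Assumes the tax is fully passed through to the retail price and a
    constant price elasticity of demand (simplifying assumptions).
    """
    pct_price_change = tax_increase / price_before
    return quantity * (1 + elasticity * pct_price_change)

# Hypothetical baseline figures -- not the report's actual inputs.
baseline_units = 1_000_000   # monthly sales before the tax change
price_before = 4.00          # average tax-inclusive retail price, dollars
tax_increase = 0.62          # added federal excise tax, dollars

# The report's elasticity range of -0.6 to -0.3 brackets the estimate.
low_estimate = post_tax_quantity(baseline_units, price_before, tax_increase, -0.6)
high_estimate = post_tax_quantity(baseline_units, price_before, tax_increase, -0.3)
```

Projected revenue in each scenario is then the adjusted quantity times the new tax rate, carried forward along the applicable sales trend; comparing that with actual collections yields the revenue loss range.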
Projecting the pre-CHIPRA sales trend forward based on historical data could provide a misleading result as it includes the additional consumption from substitution. Under our assumption, the pre-CHIPRA sales trend is adjusted downward based on the actual sales trend for cigarettes, which has generally declined in recent years. The BLS price data used in our analysis are a subset of the data used for calculating the Consumer Price Index for tobacco products. The BLS data contain retail price information collected each month throughout the United States. These price data only include excise taxes from federal, state, and local governments and exclude shipping, handling, sales tax, and fuel surcharges. Because the BLS data are at the retail level, there is an expected markup in addition to the charges mentioned above. To simplify the model, we assumed that the markup remains constant after CHIPRA was passed. We calculated an average price for the year before CHIPRA was enacted, and we calculated the post-CHIPRA price by adding the new tax to the pre-CHIPRA price. Therefore, we estimated only the effect of CHIPRA on taxes. We calculated large cigar revenues and developed a revenue loss estimate for large cigars using assumptions based on available data. As discussed earlier in the report, small cigars are currently taxed at $50.33 per thousand sticks, while large cigars are taxed at 52.75 percent of the manufacturer’s sale price, up to a cap of $402.60 per thousand sticks. TTB collects revenue data for all cigars, but does not collect separate revenue data for small and large cigars. We calculated large cigar revenues by subtracting small cigar revenue from total cigar revenue. We calculated small cigar revenues by multiplying the number of sticks reported to TTB in each month by the tax rate. 
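The cigar tax rules and the revenue decomposition just described can be expressed directly. The rates come from the report; the function names and example inputs are illustrative:

```python
SMALL_CIGAR_RATE = 50.33 / 1000       # $50.33 per thousand sticks
LARGE_CIGAR_AD_VALOREM = 0.5275       # 52.75% of manufacturer's sale price
LARGE_CIGAR_CAP = 402.60 / 1000       # cap of $402.60 per thousand sticks

def large_cigar_tax_per_stick(manufacturer_price):
    """Federal excise tax on one large cigar under the ad valorem rate."""
    return min(LARGE_CIGAR_AD_VALOREM * manufacturer_price, LARGE_CIGAR_CAP)

def large_cigar_revenue(total_cigar_revenue, small_cigar_sticks):
    """Back out monthly large cigar revenue, since TTB does not report
    small and large cigar revenue separately."""
    small_cigar_revenue = small_cigar_sticks * SMALL_CIGAR_RATE
    return total_cigar_revenue - small_cigar_revenue
```

For example, a large cigar with a 5-cent manufacturer's price owes about 2.6 cents in tax, well below the 5.03-cent small cigar rate, while any cigar priced above roughly 76 cents hits the 40-cent cap.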
After calculating large cigar revenue, we estimated the average tax paid per cigar by dividing the large cigar revenue by the number of sticks for each month, and we calculated the average price. From March 2007 through March 2009, the average large cigar tax collected was 4.2 cents per stick. CHIPRA raised the cap on the large cigar tax from 4.9 cents to approximately 40 cents per stick. We calculated that the average taxable price for large cigars before CHIPRA was 20.12 cents. Because the tax is based on an ad valorem rate, the percentage change in price due to taxation is based on the percentage change of the tax-inclusive price before and after CHIPRA. To calculate the potential effect on federal tax revenue from raising the tax rate for pipe tobacco to match the roll-your-own tax rate, we followed the model discussed above, but we adjusted the pipe tobacco tax to the roll-your-own rate of $24.78 per pound. The model assumes that taxes would have been equalized as of October 1, 2018, and calculates the cumulative revenue differential for 5 fiscal years through September 2023. The model takes into account the additional reduction in consumption due to the tax increase and estimates potential revenue differentials. A price elasticity of -0.8 is assumed to provide a conservative scenario. Our model assumes that there are no other smoking tobacco products that are close substitutes, an assumption we also made in our previous models; the higher elasticity of -0.8 accounts for consumers reducing or ceasing consumption altogether. The magnitude is based on a literature review and interviews with the Joint Committee on Taxation. After the drop in demand due to the tax increase, demand is projected linearly using the most recent 5-year historic trend. The projection of actual sales is calculated by applying the same historic trend to the actual sales of roll-your-own and pipe tobacco. Actual revenue is calculated by multiplying the tax rate by the projected sales. 
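A stripped-down version of this equalization projection might look as follows. The tax rates and the -0.8 elasticity are from the report; the baseline sales, retail price, and trend figures are hypothetical placeholders, and the sketch again assumes full pass-through of the tax increase to the retail price:

```python
PIPE_RATE = 2.83     # current pipe tobacco tax, dollars per pound
RYO_RATE = 24.78     # roll-your-own rate the tax would be raised to
ELASTICITY = -0.8    # conservative elasticity used in the report

def equalized_pipe_revenue(base_pounds, retail_price, annual_trend, years=5):
    """Project annual pipe tobacco revenue after equalization: an initial
    elasticity-driven drop in demand, then a linear historic trend."""
    pct_price_change = (RYO_RATE - PIPE_RATE) / retail_price
    pounds = base_pounds * (1 + ELASTICITY * pct_price_change)
    revenues = []
    for _ in range(years):
        revenues.append(pounds * RYO_RATE)
        pounds += annual_trend  # linear projection of the recent trend
    return revenues
```

Summing the difference between this projection and a matching projection of actual sales taxed at the current $2.83 rate over the 5 fiscal years gives the cumulative revenue differential.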
An analysis projecting the impact of equal tax rates for small and large cigars requires a different set of assumptions. The reliability of any such model would be questionable, particularly for large cigars because the tax rate on them is calculated as a percentage of the price. Compared with determining the tax on all other tobacco products, according to TTB, determining the tax on large cigars is extremely complex. We concluded that modeling hypothetical consumption trends for smoking tobacco products after equalizing tax rates on small and large cigars would require a complex set of assumptions not sufficiently grounded in reliable data. These assumptions include the price distribution of large cigars since CHIPRA was enacted and assumptions about the proportion of the large cigar market captured by imported large cigars if large cigars were taxed similarly to small cigars. Rather than calculating a tax revenue estimate using assumptions not grounded in reliable data, we present actual cigar revenue and show how the large cigar market has changed from domestic cigars to cheaper imported cigars over time. While it is possible to develop a tax equalization model based only on applying a minimum tax rate per large cigar of 5.03 cents per stick—to ensure large cigars are not taxed below the small cigar tax rate of 5.03 cents per stick—this approach would not produce a reliable estimate of the full revenue effect of legislative proposals to equalize small and large cigar taxes. Applying only a minimum tax would have the effect of underestimating the federal excise tax collected from more expensive cigars because this would reduce the revenue estimates on large cigars that are currently taxed at between 5.03 cents per stick and the maximum rate of 40 cents per stick. 
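The minimum-tax-only model discussed above reduces to a simple floor, which makes its limitation visible: cigars already paying between the floor and the 40-cent cap are left unchanged, so the model cannot capture how a weight-based rate would also change their taxes. The function name is illustrative:

```python
SMALL_CIGAR_FLOOR = 0.0503   # small cigar tax rate, dollars per stick

def floor_only_tax(current_tax_per_stick):
    """Tax per large cigar under a minimum-tax-only model: cigars paying
    less than the small cigar rate are raised to it, while cigars already
    paying between 5.03 and 40 cents per stick are left unchanged -- which
    is why this model understates the revenue effect of the proposals."""
    return max(current_tax_per_stick, SMALL_CIGAR_FLOOR)
```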
In addition, the distribution of domestic large cigar sales that are taxed below the small cigar tax rate is unknown because TTB data on domestic large cigar sales are collected by manufacturers and reported monthly as a quantity aggregate. Without incorporating this information on the distribution of large cigars paying above and below the small cigar tax rate of 5.03 cents per stick, an estimate of the revenue effects of equalizing small and large cigar taxes would understate the potential revenue that could have been collected from large cigars. We conducted this performance audit from September 2018 to June 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: GAO Contact and Staff Acknowledgments In addition to the contact named above, Christine Broderick (Assistant Director), Jeremy Latimer (Analyst-in-Charge), Pedro Almoguera, David Dayton, Mark Dowling, Christopher Keblitis, and Ethan Kennedy made key contributions to this report.
Why GAO Did This Study In 2009, CHIPRA increased and equalized federal excise tax rates for cigarettes, roll-your-own tobacco, and small cigars but did not equalize tax rates for pipe tobacco and large cigars—products that can be cigarette substitutes. GAO reported in 2012 and 2014 on the estimated federal revenue losses due to the market shifts from roll-your-own to pipe tobacco and from small to large cigars. This report updates GAO's prior products by examining (1) the market shifts among smoking tobacco products since CHIPRA, (2) the estimated effects on federal revenue if the market shifts had not occurred, and (3) what is known about the revenue effects if Congress were to eliminate current tax disparities between smoking tobacco products. GAO analyzed data from the Department of the Treasury and U.S. Customs and Border Protection to identify sales trends for domestic and imported smoking tobacco products, to estimate the effect on tax collection if market substitutions had not occurred, and to model the effects of equalizing tax rates for smoking tobacco products. What GAO Found Large federal excise tax disparities among similar tobacco products after enactment of the Children's Health Insurance Program Reauthorization Act (CHIPRA) of 2009 led to immediate market shifts (see figure). Specifically, CHIPRA created tax disparities between roll-your-own and pipe tobacco and between small and large cigars, creating opportunities for tax avoidance and leading manufacturers and consumers to shift to the lower-taxed products. Following the market shifts after CHIPRA, the lower-taxed products have sustained their dominant position in their respective markets. Market shifts to avoid increased tobacco taxes following CHIPRA have continued to reduce federal revenue. 
GAO estimates that federal revenue losses due to market shifts from roll-your-own to pipe tobacco and from small to large cigars range from a total of about $2.5 to $3.9 billion from April 2009 through September 2018, depending on assumptions about how consumers would respond to a tax increase. Federal revenue would likely increase if Congress were to equalize the tax rate for pipe tobacco with the rates currently in effect for roll-your-own tobacco and cigarettes. GAO estimates that federal revenue would increase by a total of approximately $1.3 billion from fiscal year 2019 through fiscal year 2023 if the pipe tobacco tax rate were equalized with the higher rate for roll-your-own tobacco and cigarettes. While equalizing federal excise taxes on small and large cigars should raise revenue based on past experience, the specific revenue effect is unknown because data for conducting this analysis are not available. These data are not collected by the Department of the Treasury because such data are not needed to administer and collect large cigar taxes under the current tax structure. What GAO Recommends In its 2012 report, GAO recommended Congress consider equalizing tax rates on roll-your-own and pipe tobacco and consider options for reducing tax avoidance due to tax differentials between small and large cigars. Treasury generally agreed with GAO's conclusions and observations. As of May 2019, Congress had not passed legislation to reduce or eliminate tax differentials between smoking tobacco products. Treasury also generally agreed with this report's findings.
Background Federal Roles and Responsibilities Within DHS, ICE is responsible for immigration enforcement and removal operations. This entails, among other duties, identifying, arresting, and detaining foreign nationals for the administrative purpose of facilitating their appearance during removal proceedings, and processing and preparing them for removal from the United States. As such, ICE manages the nation’s immigration detention system, which houses foreign nationals detained while their immigration cases are pending or after being ordered removed from the country. ICE generally has broad discretion in determining whether to detain removable foreign nationals or release them under various conditions, unless the law specifies that detention is mandatory. Additionally, foreign nationals arriving at the U.S. border or a port of entry without valid entry documents and placed into expedited removal proceedings are required to be detained while awaiting an inadmissibility determination and, as applicable, any subsequent credible fear decision. Except in cases where detention is mandatory, ICE may release an individual pending the outcome of removal proceedings and has various release options for doing so, including the Alternatives to Detention program. While foreign nationals are detained, ICE is responsible for providing accommodations and medical care to individuals in detention with special needs or vulnerabilities, such as those who are pregnant. ICE’s December 2017 memo, Identification and Monitoring of Pregnant Detainees, sets forth policy and procedures to ensure pregnant detainees in ICE custody are identified, monitored, tracked, and housed in an appropriate facility. 
CBP is a component within DHS and the lead federal agency charged with a dual mission of facilitating the flow of legitimate travel and trade at our nation’s borders while also keeping terrorists and their weapons, criminals and their contraband, and inadmissible foreign nationals out of the country. CBP temporarily holds individuals to complete general processing and determine the appropriate course of action, such as transferring them to a court, jail, prison, or another agency; relocating them into ICE detention facilities; removing them from the country; or releasing them—as CBP has discretion to release individuals with a notice to appear in court. Within CBP, individuals, including pregnant women, could be held by Border Patrol or OFO. ICE Detention Facility Types, Detention Standards, and Medical Care ICE detains individuals in both under-72-hour and over-72-hour detention facilities. Detention facilities may be for male only, female only, or both; and some are specifically reserved for family units (also known as family residential centers). ICE uses various types of detention facilities to hold detainees for more than 72 hours. These include ICE owned and operated detention facilities, also known as service processing centers, as well as facilities that ICE oversees but whose day-to-day operations are generally run by another entity, as follows: contract detention facilities owned and operated by a private company under direct ICE contract that exclusively house ICE detainees; facilities owned by a state or local government or a private entity, operating under an intergovernmental service agreement (IGSA), that exclusively house ICE detainees or house ICE detainees and other confined populations; and facilities owned by a state or local government or a private entity, operating under an intergovernmental agreement (IGA), or contract, with the U.S. Marshals Service (USMS), that exclusively house ICE detainees or house ICE detainees and other confined populations. 
ICE detention facilities are generally required to adhere to one of four sets of detention standards. The detention standards vary depending on the contract or agreement. As we have previously reported, ICE’s detention standards are based on the American Correctional Association’s expected practices and have been updated when ICE identified issues of heightened concern or gaps in agency procedures. Some detention facilities used by ICE are not obligated to adhere to ICE’s detention standards—because, for example, ICE is a rider on the contract and the facility may be held to other standards. Further, on-site medical care may be directly provided by ICE Health Service Corps (IHSC) or other entities at these detention facilities. IHSC provides direct on-site medical services in 20 ICE facilities authorized to house detainees for over 72 hours. In addition to any applicable detention standards, IHSC staff must also adhere to IHSC policies. At detention facilities that are not staffed with IHSC personnel (non-IHSC facilities), medical care is provided onsite by local government staff or private contractors and overseen by IHSC. ICE inspects “authorized” detention facilities against detention standards and any applicable IHSC policies. Table 1 details information on each of the detention standards, the number of authorized facilities contractually obligated to each standard, the percent of the average daily population at each, and the presence of IHSC staff. CBP Facilities, Standards, and Medical Care CBP operates all of its short-term holding facilities and hold rooms, and does not utilize contract services for the management of individuals in CBP custody. In October 2015, CBP issued its first nationwide standards, which govern CBP’s interaction with detained individuals. The standards include requirements regarding transport, escort, detention, and search provisions, as well as care for “at-risk individuals”, which includes pregnant women. 
Given that CBP short-term facilities are intended to hold individuals for no more than 72 hours, CBP historically did not have on-site medical professionals at most of its facilities. However, as a result of surges in unaccompanied minors and families crossing the border, CBP issued a directive in January 2019 titled Interim Enhanced Medical Efforts (January 2019). According to the directive, enhanced medical services were needed to address growing public health concerns and mitigate risk to, and improve care for, individuals in CBP custody along the southwest border. The January 2019 directive was superseded by a December 2019 directive, Enhanced Medical Support Efforts, which also calls for medical support to mitigate risk to, and sustain enhanced medical efforts for persons in CBP custody along the southwest border. A related memo issued by the CBP Commissioner, titled CBP’s Expansion of Existing Medical Services Contracts and Expedited Deployment of Additional Contracted Medical Services Personnel to the Southwest Border, called for the expansion of CBP’s medical services contract to numerous Border Patrol facilities and OFO ports of entry along the southwest border. This effort is discussed later in our report. DHS Had Over 4,600 Detentions of Pregnant Women from 2016 through 2018 for Different Lengths of Time and In Varying Types of Facilities About Two-thirds of ICE’s Detentions of Pregnant Women Were for a Week or Less Number of pregnant women detentions. From calendar year 2016 through 2018, ICE had over 4,600 detentions of pregnant women. The number of detentions decreased from 1,380 in calendar year 2016 to 1,160 in 2017, and then increased to 2,098 in calendar year 2018 (see figure 1). Of the more than 4,600 detentions of pregnant women from calendar year 2016 through 2018, 32 percent involved pregnant women who were expedited removal cases and were subject to mandatory detention, including those that awaited a credible fear determination. 
Of the remaining detentions, 49 percent involved pregnant women who were deemed inadmissible and were either awaiting their hearing or an adjudication by an immigration judge, 11 percent involved pregnant women who had a final order of removal, and the remaining detentions (8 percent) involved various other immigration-related circumstances, such as those for which ICE was unable to obtain travel documents. Further, as we reported in December 2019, detentions of non-criminal pregnant women accounted for most of the total detentions of pregnant women each year (ranging from 91 to 97 percent). Length of detention. From calendar years 2016 through 2018, 68 percent of ICE detentions of pregnant women were for 7 days or less, 22 percent for 8 to 30 days, and 10 percent for more than 30 days, as shown in table 2. According to ICE officials, individual circumstances of each case dictate how long they detain a pregnant woman. For example, ICE may determine not to release a pregnant woman from ICE custody if her case is adjudicated quickly, she is ordered removed, and she is cleared to travel by a medical professional. Pregnancy outcomes. Our analysis of ICE data shows that from January 2015 through July 2019, 58 pregnant women in ICE custody experienced a miscarriage, two had an abortion, and one gave birth. Of those, 37 miscarriages and one birth involved women detained at IHSC-staffed facilities at the time of the outcome. Some of these women were in our study population of over 4,600 detentions from calendar years 2016 through 2018, but some were pregnant women detained in 2019. Most ICE Detentions of Pregnant Women Were at IHSC-Staffed Facilities; and Some Data on Gestation of Pregnancy Were Available Detention facility. Our analyses of ICE data found that of the over 4,600 detentions of pregnant women, 78 percent of detentions of pregnant women were initially detained at an IHSC-staffed facility. See appendix II for more details on these data. 
According to ICE officials, pregnant women may first learn about their pregnancy when a test is performed during their intake into a detention facility. These over 4,600 detentions of pregnant women resulted in approximately 50,300 detention days, with more than 66 percent of total detention days spent at IHSC-staffed facilities (see app. II). Some facilities may have a large number of detention days associated with the intake of pregnant women, but may not detain women for a long period of time before releasing or transferring them. For example, at a facility that had one of the largest numbers of detention days for pregnant women, officials stated that they generally release women once the pregnancy is confirmed. Further, according to ICE officials, ICE will try to transfer a pregnant woman from her initial detention facility to an IHSC-staffed detention facility or—if she is part of a family unit—a family residential center, to ensure she is provided the appropriate accommodations and care. For example, ICE may transfer a pregnant woman awaiting a credible fear determination, as these cases may take longer to process and result in longer detention stays. However, an IHSC official also stated that ICE may detain pregnant women at non-IHSC facilities if ICE believes that the facility can provide the appropriate level of care. Nearly 70 percent of pregnant women’s detention days were spent at an IHSC-staffed facility or a family residential center. Contract detention facilities—both IHSC-staffed and non-IHSC—had the highest average number of days for the detention of pregnant women, as shown in table 3. Gestation of pregnancy. Of the 1,450 detentions of pregnant women for which gestation data were available, 49 percent were for women in their first trimester and 41 percent were for women in their second trimester at the time of intake. Ten percent were for women in their third trimester at the time of intake. 
Of the detentions involving pregnant women in their third trimester, 75 percent were released within one week or less, 9 percent between 8 and 15 days, and the remaining 16 percent between 16 and 90 days. According to ICE officials, ICE does not detain pregnant women in their third trimester or pregnant women who are unlikely to be removed. However, officials stated that there are instances when it takes ICE time to gather information—such as criminal conviction data—prior to making a custody determination, which could result in detaining pregnant women who are nearing or in their third trimester. This is consistent with what ICE officials told us during our visits to facilities in all four locations—that they generally do not detain pregnant women in their third trimester. However, some explained that pregnant women in their third trimester may be detained if, for example, they are subject to mandatory detention. CBP Has Data on Pregnant Women in Certain Locations and Has Taken Action that Could Provide Additional Information on Pregnant Women at Other Locations Number of pregnant women. Because of CBP facilities’ short-term nature and limited on-site medical care, CBP does not routinely conduct pregnancy tests of women in their custody, and as such, has limited data on pregnancy. However, ICE data provide insight into CBP encounters with pregnant women. Specifically, our analysis of ICE data from calendar years 2016 through 2018 indicated that nearly 4,400 of ICE’s over 4,600 detentions of pregnant women resulted from CBP arrests. In addition, OFO and Border Patrol collected some data on women in their custody who reported being pregnant. OFO reported holding over 3,900 pregnant women from March 2018 through September 2019 at its ports of entry. 
At the two sectors where Border Patrol is required to collect such data, Border Patrol reported holding over 750 pregnant women in its facilities from March 2017 through March 2019. As shown in table 4, most of these women reported being in their second or third trimester. These women may have been transferred to ICE and may also be included in the count of pregnant women detained by ICE. In accordance with its January 2019 directive, Interim Enhanced Medical Efforts (January 2019), CBP developed a standardized health interview form that can be used by Border Patrol and OFO. The form includes a question about pregnancy and nursing, which could allow for additional data on the number of women in CBP custody who report being pregnant. In December 2019, CBP officials told us that they had distributed the form to field locations.

Pregnancy outcomes. In addition, we reviewed CBP significant incident reports to determine if any pregnant woman encountered or held by CBP had experienced a birth, stillbirth, or miscarriage during calendar year 2015 through February 2019. Our analysis of CBP reports during this time frame found that pregnant women encountered or apprehended by CBP experienced 43 births, three miscarriages, and six stillbirths after being taken to the hospital by CBP. In some of these incidents, Border Patrol agents encountered pregnant women in the field and took them directly to the hospital; in these cases, the pregnant woman was not in a Border Patrol facility directly prior to being taken to the hospital.
DHS Policies and Detention Standards that Address the Care of Pregnant Women Vary by Facility Type and Component

ICE Policies and Detention Standards Address a Range of Pregnancy-Care Topics that Vary across Facility Types; ICE Has Planned Updates to Address Gaps

ICE has policies and detention standards that address a variety of pregnancy-related topics regarding the care of pregnant women, such as pregnancy testing requirements, the use of restraints, and prenatal care. However, we identified certain facility types whose policies or detention standards did not address all pregnancy-related topics as of December 2019, which ICE is taking actions to address. Appendix III details ICE's policies and detention standards related to the care of pregnant women in detention. For the purpose of our analysis, the facility type is based on contractually obligated detention standards and the presence of IHSC staff, as these factors dictate which detention standards the facility type is required to adhere to and whether IHSC policies apply. Specifically, we identified 16 topics related to the care of pregnant women and found that in most facility types, ICE had at least one policy or detention standard that addressed many of these topics. Further, we found that if a facility type had policies or detention standards in place regarding a specific topic on the care of pregnant women, at least one of the policies or detention standards generally aligned with recommended guidance from professional associations, NGOs, and federal agencies (see app. IV for our summary of recommended guidance and associated examples). In addition, we found that from calendar years 2016 through 2018, 64 percent of detentions of pregnant women began at the two facility types that had the most policies or detention standards related to the pregnancy topics, as of December 2019.
Table 5 shows whether policies or detention standards at the various facility types addressed each of the 16 topics, as well as the associated number of detentions of pregnant women—based on the facility in which they were first detained—and the number of detention days from calendar years 2016 through 2018. ICE is taking numerous actions to address these gaps in its policies and detention standards. For example, according to ICE officials, ICE has updated, or is in the process of updating, its policies and detention standards, and these updates will address many of the gaps that we identified for the pregnancy-related topics. Specifically, ICE revised its 2000 NDS in December 2019, and the 2007 Family Residential Standards are under revision and will be sent to management for review in February 2020. According to IHSC officials, the revised standards will address all of the gaps we identified for the 2007 Family Residential Standards and 2000 NDS facility types. Further, IHSC officials stated that they are revising IHSC's Women's Health Directive and guidance on care for chronic conditions to include required and recommended vaccines for pregnant women and HIV care, respectively—which will address these gaps at IHSC-staffed facilities. Finally, according to ICE officials, facility types operating under the 2008 PBNDS will be modified to either the 2019 NDS or the 2011 PBNDS. In addition to these updates, in accordance with ICE's December 2017 memo on Identification and Monitoring of Pregnant Detainees, ICE is to ensure pregnant detainees receive appropriate medical care and ensure detention facilities are aware of their obligations regarding directives and detention standards that apply to pregnant detainees, among other things. ICE has mechanisms for maintaining oversight of pregnant detainees, as required by policy.
Specifically, ICE collects data to monitor the condition of pregnant women in its custody and, according to ICE officials, ensures that the facility can accommodate the woman. In addition, IHSC conducts weekly reviews that focus on high-risk pregnancies, pregnancies in the third trimester, and recent miscarriages. According to an IHSC official, ICE inspections can contribute to IHSC's understanding of the care of pregnant women at a given facility. Further, although ICE officials stated that ICE does not have training dedicated specifically to the care of pregnant women in detention, its basic training includes instruction on pregnant detainees. This training is in addition to the professional qualifications of medical staff onsite.

CBP Has Policies and Standards Regarding Its Short-Term Care of Pregnant Women

CBP has some policies and standards regarding the care of pregnant women held in its short-term facilities. Specifically, CBP has national standards on the transport, escort, detention, and search of detainees, with specific requirements for pregnant women. For example, these standards state that, barring exigent circumstances, CBP must not use restraints on pregnant detainees unless they have demonstrated or threatened violent behavior, have a history of criminal or violent activity, or an articulable likelihood of escape exists. Further, Border Patrol and OFO have policies that address nutrition and special accommodations for pregnant women. See appendix V for more details on CBP policies related to pregnant women. Although these policies and national standards do not cover the full range of the 16 pregnancy-related care topics we identified, CBP facilities are designed for holding individuals for no more than 72 hours; therefore, CBP's facilities are not equipped to provide long-term care. Specifically, CBP does not routinely conduct pregnancy testing and historically did not have on-site medical care at all its facilities.
For the policies and standards that CBP does have in place regarding pregnant women, we found that they generally aligned with the recommended guidance from expert and professional organizations. In addition, although CBP does not have training dedicated specifically to the care of pregnant women, CBP provides initial and annual refresher training on its national standards for the transport, escort, detention, and search of detainees, which include requirements for pregnant women.

DHS Inspections, Medical Data, and Complaints Offer Insights into the Care Provided to Pregnant Women

ICE Inspections Found 79 Percent or Greater Compliance with Most of Its Pregnancy-Related Performance Measures

ICE uses various inspections for assessing facilities' compliance with policies and detention standards—the frequency and focus of which vary. Some inspections also include pregnancy-related performance measures, such as a measure assessing whether a pregnancy test was performed at intake. We reviewed results from the five ICE inspections that address compliance with pregnancy-related policies and detention standards from 2015 through June 2019. These inspections vary in their scope and targeted facility types (see app. I for more details on each of these inspections). These inspections—along with available medical data—offer insight into the care of pregnant women. Two inspections include pregnancy-related performance measures, and compliance with these measures ranged from 53 to 100 percent, with most indicating 79 percent or more compliance. Specifically, one inspection of 129 ICE detention facilities—which included inspections of both IHSC-staffed and non-IHSC facilities—found that compliance was 91 percent or more for each of the six performance measures from December 2016 through March 2019, as shown below.
Pregnancy testing performed at intake: 93 percent
Pregnancy testing performed prior to x-rays or initiating medication: 100 percent
Obstetrician-gynecologist (OB-GYN) consult ordered within 7 days of pregnancy confirmation: 98 percent
Patient seen by OB-GYN within 30 days of pregnancy confirmation:
Prenatal vitamins prescribed: 100 percent
Screened for HIV, sexually transmitted infections, and viral hepatitis:

Instances of non-compliance—which were 9 percent or less for each measure—occurred at 16 detention facilities subject to a range of detention standards. Three of these facilities were IHSC-staffed facilities, and 13 were non-IHSC facilities. IHSC documentation indicates that corrective actions are to be implemented to help address inspection findings. See appendix VI for details on the number of records reviewed during the inspections and the compliance rates.

Our analysis of available medical data and interviews with pregnant detainees showed similar findings regarding pregnancy testing at intake. Specifically, from calendar years 2016 through 2018, 92 percent of women in ICE detention facilities received a pregnancy test either the same day as intake to the facility or the next day. This could include women who arrived at a detention facility in the evening and were tested the next day. Of the remaining detentions, 3 percent were tested within 2 to 3 days of intake, 4 percent between 4 days and 2 weeks, and 2 percent after 2 weeks of being detained. According to the 10 pregnant women we interviewed who were detained at 3 ICE detention facilities we visited, all 10 stated that they received a pregnancy test when they arrived at the facility or within the same day. For the second inspection, which included performance measures related to the care of pregnant women at IHSC-staffed facilities, overall compliance was 79 percent or more for most of the nine performance measures from fiscal years 2015 through 2018.
The following shows the minimum level of overall compliance for all facilities during this timeframe.

OB-GYN consult ordered and documented within 7 days of pregnancy confirmation:
Patient seen by OB-GYN within 30 days: 92 percent
Prenatal vitamins prescribed: 95 percent
Detainee education documented at each encounter: 79 percent
Records reviewed by provider after OB appointment: 79 percent
Proper diet ordered: 86 percent
Appropriate labs ordered if not obtained from OB-GYN: 79 percent
Pregnant patient screened for HIV, sexually transmitted infections, and viral hepatitis: 81 percent
Hepatitis B vaccine offered: 53 percent

However, for one measure—whether the Hepatitis B vaccine was offered—compliance was 53 percent. ICE officials stated that this performance measure reflects recommended practices but is not specifically required by policy or detention standards. According to ICE officials, any issues identified during IHSC inspections are handled locally at the field level through facilities' quality improvement processes, which include developing corrective action plans. See appendix VI for the average annual compliance for each measure from fiscal years 2015 through 2018.

Our analysis of available medical data for IHSC-staffed facilities and interviews with pregnant detainees and NGOs provides additional perspective on the care of pregnant women. Specifically, our analysis of ICE data showed 422 detentions in which a pregnant woman who was in an IHSC-staffed facility at some point received at least one referral to an OB or OB-GYN between calendar years 2016 and 2018. Based on ICE's performance measures, pregnant women are to receive an OB-GYN referral within 7 days of pregnancy confirmation—although available data showed that most pregnant women were released from detention within 7 days.
In addition, our analysis of ICE data showed that detentions in which a pregnant woman was in an IHSC-staffed facility at some point were assigned certain special needs, such as a special diet (1,245), lower bunk (113), no heavy lifting (87), and limitations on the use of restraints (316). In addition, all 7 of the pregnant women we spoke with in IHSC-staffed detention facilities said that they received appropriate accommodations, such as a lower bunk and blankets. Similarly, 6 of the 7 pregnant women we spoke with at IHSC-staffed facilities said that they were provided proper nutrition and snacks. The other pregnant woman did not discuss the adequacy of the nutrition she was provided. In addition, both of these inspections provided insights into OB-GYN referrals and prenatal vitamins that were generally similar to the information we obtained from pregnant detainees at the locations we visited. Specifically, the above inspections indicated 75 to 98 percent compliance on performance measures related to access to OB-GYN care. Eight of the 10 pregnant women we spoke with in ICE detention did not express concerns about access to an OB-GYN when asked about the sufficiency of medical care. However, two stated that they would like more timely access to an OB-GYN, and they did not know when their appointments would occur. In addition, representatives from three NGOs stated that they heard concerns about pregnant women not having access to OB-GYN care or prenatal vitamins. Further, the above inspections indicated 95 to 100 percent compliance on performance measures related to prescribing prenatal vitamins, and all 10 of the pregnant women we spoke with in ICE detention said that they were provided prenatal vitamins. Although they did not have specific performance measures, three additional inspections identified 19 findings related to the care of pregnant women. All of the findings occurred at non-IHSC facilities.
Three of the 19 findings indicated that medical care was not provided or offered. For example, one pregnant woman was not offered a mental health assessment after reporting that she had a miscarriage at a prior facility. Seven included a recommendation to provide additional medical care, such as pregnancy testing. Four indicated insufficient documentation, such as medical records that were not transferred between facilities, or no documentation that pregnancy testing had occurred. Five indicated that a required policy did not exist or did not specify the required standards of care. All but one of the facilities inspected took corrective actions to address the findings. For example, one inspection found that the facility’s initial health assessment form did not address pregnancy testing. In response, the facility updated its intake screening form to include pregnancy testing. ICE determined that the facility that did not implement corrective actions to address deficiencies identified during the inspection would not be used for the detention of ICE detainees. See appendix VI for additional information on each deficiency, recommendation, and corrective action. Additionally, our review of available data and interviews with pregnant detainees and officials at the locations we visited provided insight into issues related to segregation and the use of restraints—generally finding that these were rarely used. Specifically, our review of ICE data identified two pregnant women who were initially detained from 2015 through 2018, and segregated at some point during their detention—one for 8 days and one for over 4 months. In both cases, ICE reported the reason for the segregation was that the detainee was a threat to the facility’s security. 
Further, all 10 of the pregnant women we interviewed stated that they had not been segregated, and all the detention officials we interviewed at the four locations we visited stated that they were not aware of any instances of pregnant women being segregated. Similarly, none of the 10 pregnant detainees reported being placed in restraints, and the officials we interviewed at the four locations generally stated that pregnant women are not to be restrained except in extreme circumstances, such as risk of violence or escape—which is consistent with ICE policies and standards. One official said that he was aware of an incident where a pregnant woman was restrained when she attempted to harm herself and her child. In addition, officials from five local organizations or coalitions we spoke with stated that they had not heard concerns about instances of the use of restraints or segregation.

CBP Generally Takes Pregnant Women to Offsite Facilities for Care, and Has Plans to Enhance Its Medical Support

CBP generally relies on offsite care for pregnant women and, as a result, has limited information available on the care it provided to pregnant women. However, CBP has efforts underway to enhance its medical support at selected facilities. As previously discussed, CBP facilities are designed for short-term care, and CBP does not routinely administer pregnancy tests and historically did not have on-site medical personnel. According to CBP officials, they typically refer individuals to local medical providers in their area, as appropriate and for all emergent or serious issues—including concerns presented by pregnant women. In addition, if CBP needed to provide a pregnancy test to a woman in its custody, it would take the woman to an offsite medical provider. Our analyses of available data indicate that CBP took pregnant women for a hospital visit or admission at least 168 times from 2015 through 2018. See table 6 for additional information.
Ninety-nine percent of these hospital trips involved Border Patrol, while the remaining trips involved OFO. Although CBP generally relies on offsite care for pregnant women, CBP established some on-site medical care and has efforts underway to enhance its medical support at additional Border Patrol facilities and OFO ports of entry. Specifically, one port of entry and three Border Patrol facilities established on-site medical care in 2013 and 2015, respectively. CBP officials at one of these locations told us that they developed on-site medical care based on the volume of crossings, as well as the operational costs of transporting individuals to offsite medical facilities and performing hospital watches. Subsequently, CBP's January 2019 memo regarding enhanced medical efforts at CBP facilities included efforts to expand medical support. According to a senior CBP official, the agency had staffed more than 40 Border Patrol facilities and OFO ports of entry along the southwest border with on-site contracted medical care, as of January 2020. According to CBP officials, contracted medical staff provide enhanced medical support through initial health intake interviews, medical assessments, diagnosis, treatment, referral, and follow-up for persons in custody, including pregnant women. CBP officials stated that they will continue to rely on offsite care to provide emergency or advanced care.

Over 100 Complaints Were Filed about ICE and CBP's Care of Pregnant Women

DHS has various processes to obtain and address the hundreds of medical care complaints it receives annually. Specifically, an individual can file a complaint directly with facilities, ICE, CBP, and other DHS entities, including the Office of Inspector General and the Office for Civil Rights and Civil Liberties (CRCL).
We identified 107 unique complaints that detainees, family members, NGOs, or other parties submitted to various entities from January 2015 through April 2019—54 that involved ICE's care of pregnant women, 50 that involved CBP, and 3 that involved both. As shown in figure 2, some of these complaints were under investigation as of August 2019, and some were substantiated; however, in most cases there was not enough information for the investigating agency to determine if proper care had been provided, among other things. Regarding the complaints against ICE, the most common type was that ICE allegedly did not provide medical care, or that the medical care provided was not of adequate quality or not timely. See appendix VIII for additional information about the number and types of complaints submitted. Eleven of the 54 complaints against ICE remained open as part of an ongoing investigation, while the remaining 43 were closed. Of the 43 complaints that were closed:

An investigation substantiated one complaint that prenatal vitamins had not been provided at an IHSC-staffed facility. In response, ICE reported taking actions to address the complaint.

Investigations partially substantiated one complaint regarding delays in medical care being provided. According to ICE, the delays had resulted from the time required to get medication approved. In response to the complaint, ICE reported coordinating with the facility to address the issues identified.

Investigations found that 18 complaints were unsubstantiated. For example, ICE's review of medical records found that appropriate care had been provided.

For the remaining 23 closed complaints, the complaint was neither substantiated nor unsubstantiated, for a variety of reasons. For 11 complaints, the investigating agency determined that it did not have enough information to conduct an investigation, or the agency investigated the complaint but did not have enough information to establish whether the complaint was substantiated or unsubstantiated.
For example, the allegation did not contain detailed biographical information, medical records did not contain enough information, or the detainee had been released and the agency could not follow up. For the remaining 12 complaints, agency documentation did not clearly specify whether the complaint was substantiated or unsubstantiated. Regarding complaints against CBP, the most common type was that pregnant women had allegedly been physically, verbally, or otherwise mistreated. See appendix VIII for additional information about the number and types of complaints submitted. Of the 50 complaints against CBP, four remained open as part of an ongoing investigation, while the remaining 46 were closed. Of the 46 complaints that were closed:

An investigation substantiated one complaint that a Border Patrol agent violated social media policy by posting a picture and information about a pregnant woman in custody. In response, CBP reported that the employee was suspended for two days.

Investigations found that five complaints were unsubstantiated, and one was partially unsubstantiated. For example, an investigation included a review of video footage at a port of entry, among other things, and found that excessive force had not been used.

Eight complaints described an event that occurred, such as a miscarriage, but the complaint did not allege that mistreatment or improper care occurred.

For the remaining 31 closed complaints, the complaint was neither substantiated nor unsubstantiated, for a variety of reasons. For 10 complaints, the investigating agency determined that it did not have enough information to conduct an investigation, or the agency investigated the complaint but did not have enough information to establish whether the complaint was substantiated or unsubstantiated. For the remaining 21 complaints, agency documentation did not clearly specify whether the complaint was substantiated or unsubstantiated.
With regard to the three complaints that involved allegations against both ICE and CBP, one remained open as part of an ongoing investigation, while the other two complaints were found to be unsubstantiated.

Agency Comments

We provided a draft of this report to DHS for review and comment. DHS provided comments, which are reproduced in appendix IX. DHS also provided technical comments, which we incorporated as appropriate. In addition, we provided relevant excerpts of the report to the American College of Obstetricians and Gynecologists, the American Correctional Association, and the National Commission on Correctional Health Care for review. Officials from these entities provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees and the Acting Secretary of the Department of Homeland Security. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8777 or goodwing@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix X.

Appendix I: Methodology for Analyses of Data, Inspections, and Complaints

This appendix provides additional details on selected methodologies used to address our questions. Specifically, this includes information on our analyses of U.S. Immigration and Customs Enforcement (ICE) data and inspection findings and Department of Homeland Security (DHS) complaints used to address these questions: 1. What do available data indicate about pregnant women detained or held in DHS facilities? 2.
What policies and standards does DHS have to address the care of pregnant women, and to what extent are they applicable across all facilities? 3. What is known about the care provided to pregnant women in DHS facilities?

Analyses of ICE Data

To address our first and third objectives, and provide context for our second objective, we reviewed data sources that ICE uses to track pregnant women in detention from calendar years 2016 through 2018 and matched these data with various ICE databases. We selected these years since ICE first collected data on all pregnant women beginning in June 2015, and 2018 was the last full year of available data for our audit. Specifically, we matched ICE Health Service Corps (IHSC) records for pregnant women detained during calendar years 2016 through 2018 with the individual-level detention dataset in the ICE Integrated Decision Support (IIDS) database to determine the total number of detentions of pregnant women, as well as the length of detention, facility location, case category status, arresting agency, gestation of pregnancy, when the pregnancy test was conducted, and whether there was an associated criminal conviction (criminality). To conduct our analyses, we matched pregnancy data to the IIDS detention data using alien number and excluded records we were unable to match. Because individuals may have multiple detentions, we compared the admission or book-in date from each data source with the book-in dates from the IIDS detention data, and excluded additional records with dates more than 30 days apart. ICE collected data for 1,437 pregnant detainees in 2016; 1,170 in 2017; and 2,126 in 2018. We excluded 60 of the unique pregnant detainee records for 2016; 20 for 2017; and 32 for 2018 because we were unable to match these records to the IIDS individual-level detention data using alien number and book-in date combinations. According to ICE officials, this may be due to data entry errors.
As a result, our analyses are based on over 4,600 detainee records we were able to match: 1,377 for 2016; 1,150 for 2017; and 2,094 for 2018. In general, this was our study population, unless otherwise noted in the report. We also merged the detention data with data from ICE's weekly facility list report, as of February 2019, to determine who owned and operated the facility, whether it was staffed by IHSC officials, and in what state the facility was located. Further, we merged additional IHSC data with our study population to determine the number of obstetrician-gynecologist referrals and the number of detentions that were assigned certain special needs, such as a special diet, lower bunk, no heavy lifting, and limitations on the use of restraints. We also obtained and analyzed data from ICE's Segregation Review Management System to determine if any of the pregnant women had been segregated. Finally, we analyzed ICE IHSC data on pregnancy outcomes—abortions, births, stillbirths, and miscarriages. Women who experienced such outcomes while detained may include the same women reported in our study population of more than 4,600 pregnant women detentions from calendar years 2016 through 2018, as well as pregnant women detained in calendar year 2015 and January through June 2019. We did not merge the outcome data with our other data sets, but were able to confirm that most of the outcomes were associated with alien numbers from the over 4,600 detentions in our study population. We assessed the reliability of the data used in each of our analyses by analyzing available documentation, such as related data dictionaries; interviewing ICE officials knowledgeable about the data; conducting electronic tests to identify missing data, anomalies, or potentially erroneous values; and following up with officials, as appropriate. We determined the data were sufficiently reliable for describing general information on pregnant women detained by ICE, as well as the care provided to them.
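The record-matching logic described above—joining pregnancy records to detention records on alien number and excluding pairs whose book-in dates differ by more than 30 days—can be sketched as follows. This is an illustrative sketch only: the field names (alien_number, book_in), the sample records, and the 30-day tolerance parameter are assumptions for demonstration, not ICE's actual schema or data.

```python
from datetime import date

# Illustrative records; field names and values are invented for demonstration.
ihsc_records = [
    {"alien_number": "A001", "book_in": date(2017, 3, 1)},   # dates 2 days apart
    {"alien_number": "A002", "book_in": date(2017, 5, 10)},  # dates 83 days apart
    {"alien_number": "A003", "book_in": date(2017, 6, 1)},   # no IIDS record
]
iids_detentions = [
    {"alien_number": "A001", "book_in": date(2017, 3, 3)},
    {"alien_number": "A002", "book_in": date(2017, 8, 1)},
]

def match_records(ihsc, iids, tolerance_days=30):
    """Keep IHSC pregnancy records that match an IIDS detention record on
    alien number, with book-in dates no more than tolerance_days apart."""
    detentions_by_alien = {}
    for det in iids:
        detentions_by_alien.setdefault(det["alien_number"], []).append(det)
    matched = []
    for rec in ihsc:
        for det in detentions_by_alien.get(rec["alien_number"], []):
            if abs((rec["book_in"] - det["book_in"]).days) <= tolerance_days:
                matched.append(rec)
                break  # count each pregnancy record at most once
    return matched

matched = match_records(ihsc_records, iids_detentions)
print([r["alien_number"] for r in matched])  # only A001 satisfies both criteria
```

In this sketch, the second and third sample records are dropped for the same two reasons the methodology cites: a book-in date more than 30 days from any detention record, and no detention record with a matching alien number.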
Analyses of ICE Inspection Results

To address our third objective, we analyzed reports and data from five ICE inspections that address compliance with pregnancy-related policies and detention standards from 2015 through July 2019—the most recent information available at the time of our review. We selected these inspections because they review some aspect of the care provided to pregnant women. Table 7 provides additional information on these inspections. As noted in the table, two of these inspections contained pregnancy-related performance measures. The remaining three inspections assess compliance and identified findings related to the care of pregnant women, but did not have specific performance measures. For the three inspections that did not contain performance measures, we categorized the nature of each finding, such as a recommendation to provide additional medical care. We developed these categories based on a content analysis of the inspection findings, which involved one analyst categorizing the finding and a second person verifying the categories. If there were differences in analyses, these were reconciled through discussion between the two analysts and a final determination of the appropriate category was made. We also analyzed ICE documentation on corrective actions that facilities reported taking to address inspection findings, and used ICE facility data to determine who provided medical care at these facilities. To determine the scope and any limitations of inspection reports and data, we spoke with agency officials responsible for managing these inspections and the data systems used for documenting results. We also reviewed relevant documentation, such as data dictionaries and inspection worksheets. We determined that these data were sufficiently reliable for our purposes of describing the results of inspections regarding the care of pregnant women in ICE custody.
Analyses of Complaints We reviewed and categorized complaints that detainees, family members, non-governmental organizations, or other parties submitted to various entities from January 2015 through April 2019—the latest available complaints at the time of our review—regarding ICE and CBP’s care of pregnant women. Specifically, we reviewed complaint data from DHS’s Office for Civil Rights and Civil Liberties (CRCL), DHS’s Office of Inspector General, and IHSC. We selected these complaint systems because, according to DHS officials, they contained relevant information on the care of pregnant women, could be queried in an electronic format, and minimized duplicate complaints across systems. We categorized each complaint based on a content analysis of the complaint narrative, which involved one analyst categorizing the complaint and a second person verifying the category. If there were differences in analyses, these were reconciled through discussion between the two analysts and a final determination of the appropriate category was made. We developed categories for 10 pregnancy outcomes, including births or miscarriages at a DHS facility or hospital, as well as 20 categories to describe the nature of the concerns, including physical mistreatment, use of restraints, or medical care not provided. The total number of concerns identified in our analysis exceeds the number of unique complaints filed because each unique complaint may identify more than one area of concern. We also used ICE facility data to determine, for example, who provides medical care at the facilities where the alleged events occurred. In addition, we analyzed agency documentation on the extent to which complaints could be substantiated, and any corrective actions that agencies and facilities reported taking to address complaints. 
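The dual-coding step described above, in which one analyst assigns a category and a second verifies it, can be sketched as a simple comparison that flags disagreements for discussion. This is purely illustrative: the complaint IDs and category labels below are invented, not actual complaint data.

```python
# First analyst's category assignments (hypothetical complaint IDs).
coder_a = {
    "C1": "use of restraints",
    "C2": "medical care not provided",
    "C3": "physical mistreatment",
}

# Second analyst's verification pass over the same complaints.
coder_b = {
    "C1": "use of restraints",
    "C2": "delayed medical care",
    "C3": "physical mistreatment",
}

# Any complaint where the two assignments differ is flagged for
# reconciliation through discussion between the analysts.
disagreements = [cid for cid in coder_a if coder_a[cid] != coder_b.get(cid)]
print(disagreements)  # complaints needing reconciliation
```

In this toy example only one complaint is flagged; in practice the reconciled category, not either analyst's initial assignment, would be recorded as final.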
To determine the scope and any limitations of the complaint information we received, we spoke with agency officials responsible for managing these complaint processes and the data systems used for documenting results. We also reviewed relevant documentation, such as user manuals for complaint systems. Appendix II: Initial Detention Facility and Detention Days for Pregnant Women in U.S. Immigration and Customs Enforcement Facilities This appendix provides additional details on our analyses of U.S. Immigration and Customs Enforcement (ICE) data from calendar years 2016 through 2018 on (a) where pregnant women were initially detained and (b) facilities that had the largest number of detention days involving pregnant women. In particular, these analyses describe whether the facility has ICE Health Service Corps (IHSC) staff and who owns and operates the facility, based on contracts or agreements. Initial detention facility. Our analyses of ICE data found that, of the over 4,600 detentions of pregnant women, almost 78 percent began at an IHSC-staffed facility. Further, 51 percent were at service processing centers that are owned and primarily operated by ICE, all of which were also staffed by IHSC, as shown in table 8. According to ICE officials, many pregnant women first learn about their pregnancy when a test is performed during their intake into a detention facility. Although pregnant women were initially detained in various facility types (based on IHSC presence and who owns and operates the facility), most intakes occurred in eight detention facilities located in three states. Specifically, of ICE’s over 4,600 detentions of pregnant women from calendar years 2016 through 2018, 86 percent were initially detained in one of these eight facilities, with one facility accounting for 45 percent of the intakes of pregnant women. Facilities with the largest number of detention days. 
For these over 4,600 detentions of pregnant women, ICE detained them for a total of almost 50,300 days from calendar years 2016 through 2018. Our analyses of ICE data found that, of the almost 50,300 detention days of pregnant women, 66 percent were at an IHSC-staffed facility. Further, over half were at intergovernmental service agreement facilities, including family residential centers, as shown in table 9. Some facilities may have a large number of detention days associated with the intake of pregnant women, but these facilities may not detain women for a long period of time before releasing or transferring them. For example, at a facility that had one of the largest numbers of detention days for pregnant women, officials stated that they generally release women once the pregnancy is confirmed. Although pregnant women spent their detention days in various facility types (based on IHSC presence and who owns and operates the facility), most detention days occurred in 19 detention facilities located in seven states. Specifically, 89 percent of the days that pregnant women were detained by ICE were spent in one of these 19 facilities. Appendix III: U.S. Immigration and Customs Enforcement Policies on Care for Pregnant Women U.S. Immigration and Customs Enforcement (ICE) detention facilities and staff are subject to a variety of policies, including ICE-wide policy directives and memoranda, ICE Health Service Corps (IHSC) policies, and detention standards, as of December 2019. We categorized and summarized these policies and standards, as shown below. ICE-wide Policies ICE-wide policies are directed at ICE staff and officers, not at contractors or facility staff. 
The following ICE policies address pregnant detainees and ICE supervision of pregnant detainees: ICE Directive 11032.3: Identification and Monitoring of Pregnant Detainees (2017) ICE Directive 11065.1: Review of the Use of Segregation for ICE Detainees (2013) ICE Directive 11002.1: Parole of Arriving Aliens found to Have a Credible Fear of Persecution or Torture (2010) ICE Memorandum: Use of GPS Monitoring Devices on Persons who are Pregnant or Diagnosed with a Severe Medical Condition (2009) ICE ERO Policy 11155.1: Use of Restraints (2012) Enforcement and Removal Operations National Detainee Handbook (2016) These ICE-wide policies do not apply to contract or facility staff unless ICE modified the facility’s contract or the requirements are already included in the detention standards to which the facility is obligated. However, the National Detainee Handbook is a resource for detainees at detention facilities operating under ICE detention standards, excluding family residential centers. We categorized these policies and summarized them accordingly. Intake health screening inquiries about pregnancy. The policy refers to ICE’s responsibility to monitor detention facilities and ensure they meet national detention standard requirements to provide all newly admitted detainees an initial medical screening, including pregnancy screening. ICE Directive 11032.3: Identification and Monitoring of Pregnant Detainees (2017) Provision of prenatal care. ICE supervisory staff have responsibilities to ensure that pregnant detainees receive appropriate medical care, including transfer to a different facility if necessary. ICE medical staff also have a responsibility to monitor the condition of pregnant detainees and communicate any concerns to supervisory staff. ICE Directive 11032.3: Identification and Monitoring of Pregnant Detainees (2017) Enforcement and Removal Operations National Detainee Handbook (2016) Segregation of pregnant women. 
ICE has a responsibility to monitor the use of segregation at detention facilities to ensure that they are adhering to detention standards. ICE Directive 11065.1: Review of the Use of Segregation for ICE Detainees (2013) Use of restraints on pregnant women. Officers should take reasonable precautions to avoid causing discomfort when transporting a restrained detainee. At processing sites or non-ICE detention facilities, ICE personnel shall follow local policies and procedures. ICE ERO Policy 11155.1: Use of Restraints (2012) Record keeping on pregnant women actions. ICE supervisors should ensure that ICE staff and contracted medical staff have processes to notify them of the arrival of a pregnant woman to a detention facility and ensure staff and facilities are aware of their obligations regarding pregnant detainees. IHSC staff are responsible for monitoring the condition of pregnant women while detained, as well as maintaining their medical records. Any instance of segregation of a pregnant woman must be documented in writing. ICE Directive 11032.3: Identification and Monitoring of Pregnant Detainees (2017) ICE Directive 11065.1: Review of the Use of Segregation for ICE Detainees (2013) IHSC-wide Policies IHSC policies are directed specifically toward IHSC staff at detention facilities where IHSC provides medical services. The following IHSC policies address pregnant detainees: ICE Directive 11772.2: Women’s Health Services (2017) ICE Directive 11741.4: Health Assessment (2016) ICE Directive 11742.2: Pre-Screening (2015) ICE Directive 11744.2: Intake Screening and Intake Reviews (2016) We categorized these policies and summarized them accordingly. Intake health screening inquiries about pregnancy. Intake screening includes pregnancy testing of women 10 to 56 years of age as well as questioning of pregnancy status. 
ICE Directive 11772.2: Women’s Health Services (2017) ICE Directive 11742.2: Pre-Screening (2015) ICE Directive 11744.2: Intake Screening and Intake Reviews (2016) Pregnancy testing at intake. Intake screening includes pregnancy testing of women 10 to 56 years of age and inquiry of reproductive health including previous pregnancies. ICE Directive 11772.2: Women’s Health Services (2017) ICE Directive 11741.4: Health Assessment (2016) ICE Directive 11744.2: Intake Screening and Intake Reviews (2016) Access to abortion. In the event of a threat to a woman’s life from carrying a pregnancy to term, or else in cases of rape or incest, ICE must bear the cost of a detainee’s decision to terminate a pregnancy; otherwise the woman must bear the cost. ICE should offer medical resources to support effective recovery and follow-up care. ICE Directive 11772.2: Women’s Health Services (2017) Provision of prenatal care. Pregnant women should be seen by medical providers at least once a month while detained. They should also be referred to an obstetric specialist, and their medical records shared with the specialist to facilitate care. ICE Directive 11772.2: Women’s Health Services (2017) ICE Directive 11741.4: Health Assessment (2016) ICE Directive 11744.2: Intake Screening and Intake Reviews (2016) Provision of postnatal care. A postpartum detainee must receive postnatal care from a medical provider, in consultation with an obstetric specialist, at least once a month. ICE Directive 11772.2: Women’s Health Services (2017) Mental health services and counseling for pregnant women. Any female detainee who gave birth, miscarried, or terminated a pregnancy within the last 30 days must receive a mental health evaluation, with the evaluation to occur no later than 72 hours after initial referral. ICE Directive 11772.2: Women’s Health Services (2017) Care for pregnant women with substance use disorder. 
Chemically dependent pregnant women are considered high-risk and should be referred to an obstetrician or other appropriate medical provider as soon as they are identified. ICE Directive 11772.2: Women’s Health Services (2017) ICE Directive 11744.2: Intake Screening and Intake Reviews (2016) Use of restraints on pregnant women. Pregnant detainees or those in postdelivery recuperation should not be restrained except in extraordinary circumstances that are documented by a supervisor or directed by a medical authority, whether in an ICE detention facility, in transport, or at a medical facility. Detainees in active labor or delivery can never be restrained. Even if restraints are used, a pregnant woman should never be restrained face down or on her back, or restrained with a belt that constricts the abdomen or pelvis. ICE Directive 11772.2: Women’s Health Services (2017) Record keeping on pregnant women actions. Intake screenings and assessments, including pregnancy test results, must be documented, as must risk factors for high-risk pregnancies. Any use of restraints or request for abortion services must be documented. ICE supervisory staff must be notified within 72 hours of the arrival at a detention facility of a pregnant woman. ICE Directive 11772.2: Women’s Health Services (2017) ICE Directive 11741.4: Health Assessment (2016) ICE Directive 11744.2: Intake Screening and Intake Reviews (2016) ICE Detention Standards Entities that have a contract or agreement with ICE to hold immigration detainees are generally contractually obligated to one of four sets of detention standards. These standards address a range of our pregnancy-related categories of care and vary by standard. 2000 ICE National Detention Standards (NDS) 2007 ICE Family Residential Standards (FRS) 2008 ICE Performance-Based National Detention Standards 2008 (2008 PBNDS) 2011 ICE Performance-Based National Detention Standards 2011 (2011 PBNDS) We categorized these standards and summarized them accordingly. 
The 2011 PBNDS were revised in 2016. Whether a 2011 PBNDS facility is contractually required to adhere to the 2016 revisions depends on the contract language negotiated in each agreement. Where appropriate, the summaries below note changes to policy as a result of those revisions. Intake health screening inquiries about pregnancy. 2008 PBNDS: Initial screening should be done within 12 hours of arrival and should inquire about the possibility of pregnancy. 2011 PBNDS: Initial screening should be done within 12 hours of arrival and should inquire about the possibility of pregnancy. In the 2016 revisions, the evaluation also includes a pregnancy test for women aged 18 to 56. Pregnancy testing at intake. 2008 PBNDS: Initial screening should be done within 12 hours of arrival and should inquire about the possibility of pregnancy. 2011 PBNDS: In the 2016 revisions, initial screening includes pregnancy testing of women aged 18 to 56. Access to abortion. 2011 PBNDS: If the life of the mother is endangered by carrying the fetus to term, or in the case of rape or incest, ICE will assume the costs to terminate the pregnancy. ICE shall arrange the transportation for the medical appointment, and to counseling services if requested in all cases, including those where rape, incest, or risk to life do not apply. Every facility, either directly or via contractor, must provide female detainees with access to counseling for pregnancy planning if the detainee wishes to receive an abortion. Provision of prenatal care. FRS: Female residents will have access to pregnancy management services including routine prenatal care. 2008 PBNDS: Female detainees will have access to pregnancy management services including routine prenatal care. 2011 PBNDS: Pregnant detainees will have access to pregnancy management services including routine prenatal care. They will also receive access to a specialist and receive a health assessment. 
The 2016 revisions note those actions should occur as soon as appropriate or within two working days. The 2016 revisions also give the medical provider authority to identify pregnant detainees’ special needs such as diet or housing requirements and inform all necessary staff and authorities. Provision of postnatal care. FRS: Female residents will have access to pregnancy management services including postpartum follow-up care. 2008 PBNDS: Female detainees will have access to pregnancy management services including postpartum follow-up care. 2011 PBNDS: Pregnant detainees will have access to pregnancy management services including postpartum follow-up care. After giving birth, receiving an abortion, or miscarrying, mental health assessments should also be offered. Provision of perinatal/labor care. 2011 PBNDS: Pregnant detainees will have access to specialized care including labor and delivery. Mental health services and counseling for pregnant women. FRS: Pregnant females will have access to pregnancy management services that include counseling and assistance. 2008 PBNDS: Pregnant females will have access to pregnancy management services that include counseling and assistance. 2011 PBNDS: Pregnant detainees will have access to care including counseling and assistance. Detainees can also request transportation to religious, medical, and social counseling when considering termination of a pregnancy. In the 2016 revisions, intake screening should include education for female detainees about mental health services related to pregnancy and women’s health. Care for pregnant women with substance use disorder. 2008 PBNDS: Female detainees will have access to pregnancy management services that include addiction management. 2011 PBNDS: In the 2016 revisions, all chemically dependent pregnant detainees are to be considered high risk and referred to an obstetrician or other provider capable of addressing their needs immediately. HIV care for pregnant women. 
2011 PBNDS: Medical personnel shall provide all detainees diagnosed with HIV/AIDS medical care consistent with national recommendations and guidelines disseminated through the U.S. Department of Health and Human Services, the Centers for Disease Control and Prevention, and the Infectious Diseases Society of America. Prenatal vitamins. 2011 PBNDS: Pregnant detainees will have access to prenatal care including prenatal vitamins. Nutrition for pregnant women. NDS: Physicians may order snacks or supplemental feedings to increase protein or calories for reasons including pregnancy. In hold rooms, pregnant women should have regular access to snacks, milk, and juice. FRS: Physicians may order snacks or supplemental feedings to increase protein or calories for reasons including pregnancy. Pregnant women will have access to pregnancy management services that include nutrition. 2008 PBNDS: Physicians may order snacks or supplemental feedings to increase protein or calories for reasons including pregnancy. In hold rooms, pregnant women should have regular access to snacks, milk, and juice. Pregnant women will have access to pregnancy management services that include nutrition. 2011 PBNDS: Physicians may order snacks or supplemental feedings to increase protein or calories for reasons including pregnancy. In hold rooms, pregnant women should have regular access to snacks, milk, and juice. Pregnant women will have access to pregnancy management services that include nutrition. Special consideration is given to pregnant women when providing meals and snacks during transportation. In the 2016 revisions, the medical provider is responsible for identifying special needs of pregnant detainees, including diet, and notifying all necessary staff. Special accommodations for pregnant women. 2008 PBNDS: In hold rooms, pregnant women will have access to temperature appropriate clothing and blankets and may, depending on facility, have access to bunks, cots, or beds, normally not kept in hold rooms. 
2011 PBNDS: In hold rooms, pregnant women will have access to temperature appropriate clothing and blankets and may, depending on facility, have access to bunks, cots, or beds, normally not kept in hold rooms. Pregnant detainees should also have access to lactation services in the facility. In the 2016 revisions, the medical provider is responsible for identifying special needs of pregnant detainees and notifying all necessary staff. Segregation of pregnant women. 2011 PBNDS: In the 2016 revisions, it is stated that women who are pregnant, postpartum, recently had a miscarriage, or recently had a terminated pregnancy should as a general matter not be placed in a Special Management Unit. In very rare situations, a woman who is pregnant, postpartum, recently had a miscarriage, or recently had a terminated pregnancy may be placed in a Special Management Unit as a response to behavior that poses a serious and immediate risk of physical harm, or if the detainee has requested to be placed in protective custody administrative segregation and there are no more appropriate alternatives available. Also in the 2016 revisions, a facility administrator must notify the appropriate field office director in writing as soon as possible, but no later than 72 hours after a pregnant woman or one who recently had a miscarriage is placed in segregation. In all cases, in the 2016 revisions, this decision must be approved by a representative of the detention facility administration, in consultation with a medical professional, and must be reviewed every 48 hours. Use of restraints on pregnant women. NDS: Pregnant detainees should be given special consideration if restrained as a result of a physical encounter. A medical professional should be consulted immediately in the aftermath, and the detainee examined. Pregnant detainees should be restrained in such a way as to avoid harming the fetus, such as not restraining face down. 
FRS: Medical staff will advise on the necessary precautions to take when restraining a pregnant detainee, and restraint should be done only when other methods have been tried or are impracticable. 2008 PBNDS: Medical staff will advise on the necessary precautions to take when restraining a pregnant detainee. Pregnant detainees should be restrained in such a way as to avoid harming the fetus, such as not restraining face down. 2011 PBNDS: A pregnant detainee is not to be restrained except in truly extraordinary circumstances. Even then, it must be documented by a supervisor and directed by a medical authority. Women in active labor or delivery can never be restrained, and if restrained, the detainee should never be face down, on her back, or restrained with a belt that constricts the area of pregnancy. Record keeping on pregnant women actions. NDS: The medical provider of a facility will notify the ICE officer in charge whenever a pregnant detainee is identified, and any use of force or application of restraints on a detainee should be followed by a medical examination, and its results documented. FRS: The medical provider of a facility will notify the ICE facility administrator whenever a pregnant detainee is identified. A treatment plan should be developed for any detainee requiring close medical supervision, and approved by the appropriate physician or other medical provider. 2011 PBNDS: When a detainee is pregnant, an alert is entered in her medical record and the facility administrator will receive notice. If a detainee is transferred, it is the administrator’s responsibility to inform ICE of the medical alert. Any use of restraints requires documented approval, including in the detainee’s detention and medical files and guidance from the on-site medical authority. A request to terminate a pregnancy must be documented in the medical file and signed by the detainee. 
In the 2016 revisions, ICE supervisory staff must be informed within 72 hours when a pregnant detainee is identified. Appendix IV: Recommended Guidance on the Care of Pregnant Women Detainees Numerous professional associations, non-governmental organizations, and federal agencies have issued guidance on care to be provided to pregnant women. Specifically, we reviewed the following guidance: American Civil Liberties Union: Worse than Second-Class: Solitary Confinement of Women in the United States (2014) American College of Obstetricians and Gynecologists: Committee Opinion: Health Care for Pregnant and Postpartum Incarcerated Women and Adolescent Females (2016) Guidelines for Perinatal Care, Eighth Edition (2017) American Correctional Association Performance-Based Standards and Expected Practices for Adult Correctional Institutions, 5th Edition Joint Public Correctional Policy on the Treatment of Opioid Use Disorders for Justice Involved Individuals (2018) Joint Statement on the Federal Role in Restricting the Use of Restraints on Incarcerated Women and Girls during Pregnancy, Labor, and Postpartum Recovery National Commission on Correctional Health Care (NCCHC): Position Statement: Restraint of Pregnant Inmates (2015) Position Statement on Solitary Confinement (Isolation) (2016) Position Statement on Breastfeeding in Correctional Settings (2018) Standards for Health Services in Jails (2018) Sufrin C., Pregnancy and Postpartum Care in Correctional Settings, National Commission on Correctional Health Care, Clinical Resources Series (2018) National Women’s Law Center: Women Behind Bars: A state-by-state report card and analysis of federal policies on conditions of confinement for pregnant and parenting women and the effect on their children (2010) United Nations Rules for the Treatment of Women Prisoners and Non-custodial Measures for Women Offenders (the Bangkok Rules) (2010) U.S. 
Department of Homeland Security (DHS): Report of the DHS Advisory Committee on Family Residential Centers (2016) U.S. Department of Justice, Bureau of Justice Assistance: Best Practices in the Use of Restraints with Pregnant Women and Girls Under Correctional Custody (2014) U.S. Department of Justice Report and Recommendations Concerning the Use of Restrictive Housing (2016) Because the specificity of the guidance varies across entities, we summarized the recommended guidance for our report purposes. For example, guidance on nutrition may range from calling for additional meals for pregnant women to more specifically outlining extra caloric and dietary needs. Our summary statement for each of the pregnancy-related topics is included below, along with examples from relevant recommended guidance. Intake health screening inquiries about pregnancy. Summary of recommended guidance: The sources that have guidance generally agree that intake health screenings should include inquiry regarding pregnancy and related conditions. Example: “Screening is performed on all inmates upon arrival at the intake facility…The receiving screening form…inquires as to the inmate’s…possible, current, or recent pregnancy…” – NCCHC Standards for Health Services in Jails (2018) Pregnancy testing at intake. Summary of recommended guidance: Sources that have guidance generally agree that pregnancy testing should be conducted on newly detained women of childbearing age, but some provide additional guidance on when this should be done, and this may vary. 
Example: “All women at risk for pregnancy should be offered a pregnancy test within 48 hours of admission…A simple approach would be to offer pregnancy testing to all women under the age of 55.” – Pregnancy and Postpartum Care in Correctional Settings (2018) Example: “…medical providers should continue to offer pregnancy tests to every female of child-bearing age who is newly detained…” – Report of the DHS Advisory Committee on Family Residential Centers (2016) Access to abortion. Summary of recommended guidance: Sources that have guidance generally agree abortion services should be offered to detained pregnant women, with one source providing additional details, including swift facilitation of a woman’s choice of termination and non-interference of outside bodies in the decision. Example: “Pregnancy termination is generally to be performed as safely and as early in pregnancy as possible…Termination of pregnancy should not depend on whether or not the specific procedure is available on site. Each woman will decide what option to choose…this decision is to be made without undue interference by outside bodies, including governmental bodies.” – Report of the DHS Advisory Committee on Family Residential Centers (2016) Provision of prenatal care. Summary of recommended guidance: Sources that have guidance generally agree that some form of prenatal care should be provided to detained pregnant women, but differ on the level of specificity for the standard of care, from stating simply that prenatal care be provided to specifying requirements including regularly scheduled obstetric care and access to 24-hour emergency care. Example: “Incarcerated women who wish to continue their pregnancies should have access to readily available and regularly scheduled obstetric care, beginning in early pregnancy and continuing through the postpartum period. 
Incarcerated pregnant women also should have access to unscheduled or emergency obstetric visits on a 24-hour basis.” – American College of Obstetricians and Gynecologists Committee Opinion: Health Care for Pregnant and Postpartum Incarcerated Women and Adolescent Females (2016) Example: “Prenatal care in correctional facilities must reflect national standards, including visit frequency with a qualified prenatal care provider, screening and diagnostic tests, and referrals for complications.” – Pregnancy and Postpartum Care in Correctional Settings (2018) Provision of postnatal care. Summary of recommended guidance: Sources that have guidance generally agree that postnatal care should be provided to women who give birth. However, they vary in their specifics. For example, some specifically state that lactation services or postnatal birth control should be provided. One source also recommends specific forms of accommodation to aid postnatal recovery. Example: “…appropriate accommodations should be made, such as allowing women to rest when needed…Discharge instructions from the hospital, which may include postpartum blood pressure monitoring or diabetes screening, should be adhered to.” – Pregnancy and Postpartum Care in Correctional Settings (2018) Example: “Allow immediately postpartum women to breastfeed their babies and have lactation support services from the hospital.” – NCCHC Position Statement on Breastfeeding in Correctional Settings (2018) Provision of perinatal/labor care. Summary of recommended guidance: Sources that have guidance generally agree a pregnant woman should be transported to a hospital if there are signs of labor. Some sources state that detention staff should be trained in emergency delivery in the event of a delivery occurring in the facility, away from professional care. 
Example: “Due to the time necessary to arrange transport to a nearby hospital, there is a low threshold to send pregnant inmates out for evaluation of a labor when signs or symptoms of labor or ruptured membranes are present… Any facility that houses pregnant women should have an emergency delivery kit available on-site, and health staff should be trained in its use in the event that a delivery occurs in the facility.” – Pregnancy and Postpartum Care in Correctional Settings (2018) Example: “Having a preexisting arrangement to have the babies of incarcerated women delivered at a local hospital reduces confusion and uncertainty when a woman goes into labor.” – National Women’s Law Center Women Behind Bars: A state-by-state report card and analysis of federal policies on conditions of confinement for pregnant and parenting women and the effect on their children (2010) Mental health services and counseling for pregnant women. Summary of recommended guidance: Sources that have guidance generally agree that pregnant and postpartum women should have access to mental health/counseling services. Example: “Pregnant inmates are given comprehensive counseling and care in accordance with national standards and their expressed desires regarding their pregnancy.” – NCCHC Standards for Health Services in Jails (2018) Care for pregnant women with substance use disorder. Summary of recommended guidance: Sources that have guidance generally agree that addicted pregnant women should have access to screening and specialized addiction-treatment programs. Example: “Screening for drug and alcohol use is a first step and is followed with referral to treatment. For women who report opiate use, the standard of care is not to detoxify from opiates during pregnancy due to the fetal risks of withdrawal. 
Rather the standard of care is to provide…methadone or buprenorphine…” – Pregnancy and Postpartum Care in Correctional Settings (2018) Example: “The standard of care for pregnant women with [opioid use disorder] is and should therefore be offered/continued for all pregnant detainees and incarcerated individuals.” – Joint Public Correctional Policy on the Treatment of Opioid Use Disorders for Justice Involved Individuals (2018) HIV care for pregnant women. Summary of recommended guidance: Sources that have guidance generally agree that pregnant women should have access to testing and treatment of HIV for the benefit of both the mother and child. Example: “The Centers for Disease Control and Prevention recommends universal opt-out HIV screening for pregnant women; with early detection, prevention of mother-to-child transmission can be accomplished…” – Pregnancy and Postpartum Care in Correctional Settings (2018) Vaccinations for pregnant women. Summary of recommended guidance: Sources that have guidance generally agree that vaccines recommended for pregnant women be provided to detainees in accordance with accepted medical guidelines. Example: “Current recommendations are that all pregnant women should be vaccinated with the flu vaccine during flu season and tetanus, diphtheria, and pertussis during the third trimester, regardless of whether they were vaccinated outside of pregnancy.” – NCCHC Standards for Health Services in Jails (2018) Example: “Vaccines related to pregnancy should be offered pursuant to CDC guidelines…” – Report of the DHS Advisory Committee on Family Residential Centers (2016) Prenatal vitamins. Summary of recommended guidance: The sources that have guidance generally agree that prenatal vitamins should be provided to pregnant women, and some sources state that prenatal vitamins should be provided to breastfeeding women. Example: “Pregnant women must also receive prenatal vitamins that contain, among other essential vitamins and minerals, 400mcg to 800mcg of folic acid... 
Women with documented anemia (hemoglobin<11) should receive additional iron supplementation.” – Pregnancy and Postpartum Care in Correctional Settings (2018) Example: “Appropriate nutrition and prenatal vitamins should be given to lactating women…” – NCCHC Standards for Health Services in Jails (2018) Nutrition for pregnant women. Summary of recommended guidance: Sources that have guidance generally recommend special nutrition regimens for pregnant women, with varying degrees of specificity, ranging from recommending the use of supplements broadly to specifying required nutrients such as folic acid and calcium and extra calories in the form of additional meals, larger meals, or food between meals, and in some cases specifying that these requirements also apply to postpartum women. Example: “Pregnant and postpartum women have additional nutritional needs and should be counseled on the importance of adequate nutrition. Diets provided by correctional institutions should be specialized to the women’s needs and be rich in whole grains, calcium, and fruits and vegetables. In the second and third trimesters, women require an additional 300 calories per day…” – Pregnancy and Postpartum Care in Correctional Settings (2018) Special accommodations for pregnant women. Summary of recommended guidance: Sources that have guidance generally agree that accommodations should be provided to pregnant women. Some sources specify accommodations such as appropriate programming and hygiene for pregnant women and nursing mothers, appropriately adjusted work assignments and exercise, and bottom bunks. Example: “Activity for pregnant women must take into account the physical constraints of being in a correctional facility. All pregnant women must have a bottom bunk so that they do not risk falling from a top bunk. Certain work assignments may be inappropriate…Work assignments should be adjusted accordingly.
In the absence of medical or obstetric complications, 30 minutes or more of moderate exercise a day on most, if not all, days of the week is recommended.” – Pregnancy and Postpartum Care in Correctional Settings (2018) Segregation of Pregnant Women. Summary of recommended guidance: Sources that have guidance generally agree that pregnant women should not be placed in segregation, though some suggest this could be necessary in certain cases. Example: “Women who are pregnant, who are postpartum, who recently had a miscarriage, or who recently had a terminated pregnancy should not be placed in restrictive housing…In very rare situations, a woman who is pregnant, is postpartum, recently had a miscarriage, or recently had a terminated pregnancy may be placed in restrictive housing as a temporary response to behavior that poses a serious and immediate risk of physical harm…” – U.S. Department of Justice Report and Recommendations Concerning the Use of Restrictive Housing (2017) Use of Restraints on Pregnant Women. Summary of recommended guidance: Sources that have guidance generally agree that restraints should not be used on a pregnant woman, except when necessary. Some sources indicate that if restraints are necessary, it should be well documented and require approval and assessment from a senior official and/or medical professional. Some sources specify the types of restraints that should never be used, including abdominal restraints, handcuffs behind the back, and leg and ankle restraints. Example: “Restraint of pregnant inmates during labor and delivery should not be used. The application of restraints during all other pre- and postpartum periods should be restricted as much as possible and, when used, done so with consultation from medical staff and in the least restrictive means possible.
All uses of restraints in pregnant inmates must be documented and reviewed.” – NCCHC Position Statement: Restraint of Pregnant Inmates (2015) Example: “Policies and procedures on the use of restraints on pregnant women and girls under correctional custody should be developed collaboratively by correctional leaders and medical staff who have knowledge about the potential health risks…The use of restraints on pregnant women and girls under correctional custody should be limited to absolute necessity.” – U.S. Department of Justice, Bureau of Justice Assistance: Best Practices in the Use of Restraints with Pregnant Women and Girls Under Correctional Custody (2014) Record Keeping on Pregnant Women Actions. Summary of recommended guidance: Sources that have guidance generally agree that accurate records of detention regarding pregnant women should be kept, with varying levels of specificity ranging from noting that records should be kept for incidents of restraint to specifying how documentation is kept and reviewed. One source notes that medical records should also be easily accessible for offsite care providers. Example: “If detention continues ICE should ensure…reporting of detention to ICE Headquarters and continued review of the need to detain.” – Report of the DHS Advisory Committee on Family Residential Centers (2016) Example: “Obstetrician-gynecologists and other obstetric care providers of antepartum care should be able to either primarily provide or easily refer to others to provide a wide array of services. These services include…timely transmittal of prenatal records to the site of the woman’s planned delivery so that her records are readily accessible at the time of delivery.” – American College of Obstetricians and Gynecologists Guidelines for Perinatal Care, Eighth Edition (2017) Appendix V: U.S. Customs and Border Protection Policies on Care for Pregnant Women U.S.
Customs and Border Protection (CBP) and its components, Border Patrol and the Office of Field Operations (OFO), have several policies and standards that address the care and treatment of pregnant women in their custody. Specifically, these include the following: CBP: National Standards on Transport, Escort, Detention, and Search (2015) OFO: Personal Search Handbook (2004) OFO: Directive: Secure Detention, Transport and Escort Procedures at Ports of Entry, CBP Directive No. 3340-030B (2008) Border Patrol: U.S. Border Patrol Policy: Hold Rooms and Short Term Custody (2008) Summaries of these policies and standards are provided below, along with the titles of the policies or standards on which each summary is based. Processing and holding. Officers and agents will consider pregnancy when expediting processing of vulnerable detained persons and when placing detained persons with others in hold rooms and holding facilities. Secure Detention, Transport and Escort Procedures at Ports of Entry (2008) and U.S. Border Patrol Policy: Hold Rooms and Short Term Custody (2008) Mental health services and counseling for pregnant women. If an agent or officer observes signs of mental illness, it should be reported to a supervisor, and appropriate medical care should be provided or sought, including calling emergency services in the event of an emergency. Transport, Escort, Detention, and Search (2015) Nutrition for pregnant women. Pregnant detainees should be offered a meal every six hours they are in detention and have access to snacks, milk, or juice at all times. Transport, Escort, Detention, and Search (2015); Secure Detention, Transport and Escort Procedures at Ports of Entry (2008); and U.S. Border Patrol Policy: Hold Rooms and Short Term Custody (2008) Special accommodations for pregnant women. Reasonable accommodations should be made for pregnant women, including placement in the least restrictive appropriate setting.
If circumstances permit, pregnant women should not be placed in hold rooms or other secure areas, but instead in an open area under supervision. Transport, Escort, Detention, and Search (2015); Secure Detention, Transport and Escort Procedures at Ports of Entry (2008); and U.S. Border Patrol Policy: Hold Rooms and Short Term Custody (2008) Use of restraints on pregnant women. Officers and agents should not use restraints on pregnant women unless they demonstrate or threaten violence, have a criminal and/or violent history, or there is an articulable escape risk. Even if restraints are used, pregnant detainees are not to be restrained face-down, on their backs, or with a belt that constricts the area of their pregnancy. Pregnant women can never be restrained while in active labor or delivery. All use of restraints must be documented. Transport, Escort, Detention, and Search (2015) Record keeping on pregnant women actions. All physical interactions with pregnant women must be recorded after they occur. Any medical emergency must be recorded as soon as practical after emergency services have been contacted. Further, Border Patrol agents must create a booking record for persons detained and the record must include a medical annotation for conditions requiring care, including pregnancy. Transport, Escort, Detention, and Search (2015) and U.S. Border Patrol Policy: Hold Rooms and Short Term Custody (2008) Appendix VI: U.S. Immigration and Customs Enforcement Inspection Results for Care of Pregnant Women U.S. Immigration and Customs Enforcement (ICE) uses various inspections for assessing facilities’ compliance with policies and detention standards—the frequency and focus of which vary. Some inspections also include pregnancy-related performance measures, such as a measure assessing whether a pregnancy test was performed at intake.
We analyzed reports and data from five ICE inspections that address compliance with pregnancy-related policies and detention standards from 2015 through June 2019—the most recent data available at the time of our review. We selected these inspections because they review some aspect of the care provided to pregnant women. These inspections address compliance at ICE detention facilities where on-site medical care is provided by the ICE Health Service Corps (IHSC) as well as by other entities (non-IHSC facilities). Pregnancy-related Performance Measures at IHSC-staffed and non-IHSC Facilities We reviewed results from IHSC’s inspections of IHSC-staffed and non-IHSC facilities, which include pregnancy-related performance measures. We found that instances of non-compliance occurred at 16 facilities subject to a range of detention standards. Three of these facilities were IHSC-staffed, and 13 were non-IHSC. Table 10 shows results from December 2016 through March 2019. Pregnancy-related Performance Measures at IHSC-staffed Facilities We reviewed information on pregnancy-related performance measures reported by facilities staffed by IHSC. Table 11 shows results from fiscal years 2015 through 2018. Although the table shows average annual compliance across all IHSC-staffed facilities, variation exists between facilities and over time. For example, in fiscal year 2018, one facility improved its performance on the measure of whether prenatal vitamins were prescribed from 33 percent compliance in the first quarter to 100 percent compliance in the second quarter.
In addition, in fiscal year 2018, facilities’ compliance with each measure ranged as follows: Obstetrician-gynecologist consult ordered is documented within 7 business days of identification: 50 to 100 percent (average 80 percent) Obstetrician-gynecologist scheduled appointment time documented within 7 business days of identification: 15 to 100 percent (average 75 percent) Detainee education documented at each encounter: 0 to 100 percent (average 79 percent) Records reviewed by provider after obstetrician appointment: 0 to 100 percent (average 79 percent) Appropriate labs ordered if not obtained from obstetrician-gynecologist: 50 to 100 percent (average 79 percent) Deficiencies, Recommendations, and Corrective Actions for ICE Inspections of Pregnancy-related Detention Standards Three additional ICE inspections identified 19 findings at 13 facilities related to the care of pregnant women. All of the findings occurred at non-IHSC facilities. Table 12 provides additional information on the findings and corrective actions that facilities reported taking. Appendix VII: Summary of Interviews with Pregnant Women Regarding Their Care in Department of Homeland Security Custody We interviewed ten pregnant women who were detained at three of the four U.S. Immigration and Customs Enforcement (ICE) facilities we visited, including facilities staffed by ICE Health Service Corps (IHSC-staffed) and non-IHSC facilities. We interviewed an additional four pregnant women at a local shelter in Texas, which provides temporary accommodations to those in need of housing after their release from DHS custody. These four women may not have known which agency they had been detained or held by prior to entering the shelter. As a result, their perspectives are listed separately in the table below from the 10 women with whom we spoke at ICE detention facilities. Table 13 summarizes the perspectives of these 14 pregnant women.
Although these interviews are not generalizable and may not be indicative of the care provided at all detention facilities, they provided us with perspectives on the care provided to pregnant women. We did not independently verify statements made by the 14 women we interviewed. Appendix VIII: Complaints Regarding U.S. Immigration and Customs Enforcement’s and U.S. Customs and Border Protection’s Care of Pregnant Women We analyzed and categorized complaints that detainees, family members, non-governmental organizations, or other parties submitted to various entities from January 2015 through April 2019 regarding U.S. Immigration and Customs Enforcement’s (ICE) and U.S. Customs and Border Protection’s (CBP) care of pregnant women. Specifically, we reviewed complaints from the Department of Homeland Security’s (DHS) Office for Civil Rights and Civil Liberties (CRCL), DHS’s Office of Inspector General, and ICE Health Service Corps (IHSC). We identified a total of 107 complaints—54 regarding ICE, 50 regarding CBP, and three regarding both ICE and CBP. Complaints against ICE We identified 54 unique complaints submitted from January 2015 through April 2019 regarding ICE’s care of pregnant women. Each of the 54 complaints may identify more than one area of concern, and as such we identified 104 concerns. The most common concern was that ICE allegedly did not provide medical care or that the care provided was not of adequate quality or not timely. As previously described in this report, the investigating agency determined that one complaint was substantiated and one complaint was partially substantiated. The remaining complaints were either still open as part of an ongoing investigation, unsubstantiated by the investigating agency, or could not be substantiated or unsubstantiated for a variety of reasons. Table 14 provides additional information on the number and types of concerns identified in the 54 complaints regarding ICE’s care of pregnant women.
Complaints against CBP We identified 50 unique complaints submitted from January 2015 through April 2019 regarding CBP’s care of pregnant women. Each of the 50 complaints may identify more than one area of concern, and as such we identified 81 concerns. The most common concern was that pregnant women had allegedly been physically, verbally, or otherwise mistreated. As previously described in this report, the investigating agency determined that one complaint was substantiated. The remaining complaints were either still open as part of an ongoing investigation, unsubstantiated or partially unsubstantiated by the investigating agency, could not be substantiated or unsubstantiated for a variety of reasons, or described an event that occurred, such as a miscarriage, without alleging that mistreatment or improper care occurred. Table 15 provides additional information on the number and types of issues identified in the 50 complaints regarding CBP’s care of pregnant women. Appendix IX: Comments from the Department of Homeland Security Appendix X: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Dawn Locke (Assistant Director), Tracey Cross (Analyst-in-Charge), Hiwotte Amare, David Bieler, Christine Davis, Elizabeth Dretsch, Kelsey Griffiths, Eric Hauswirth, Sasan J. “Jon” Najmi, Sean Sannwaldt, and Adam Vogt made key contributions to this report.
Why GAO Did This Study In December 2017, the Department of Homeland Security (DHS) updated its policy on pregnant women, removing language that stated that pregnant women would generally not be detained except in extraordinary circumstances or as mandated by law. Within DHS, CBP temporarily holds individuals in its facilities and processes them for further action, such as release or transfer to ICE. ICE manages the nation's immigration detention system. ICE utilizes various facility types to detain individuals, such as those owned and operated by ICE and contract facilities. GAO was asked to review issues related to the care of pregnant women in DHS facilities. This report examines (1) what available data indicate about pregnant women detained or held in DHS facilities, (2) DHS policies and standards that address the care of pregnant women, and (3) what is known about the care provided to pregnant women in DHS facilities. GAO analyzed available DHS data and documents from calendar years 2015 through 2019, including detention data, inspection reports and data, and complaints; reviewed policies related to the care of pregnant women; and interviewed agency officials and three national non-governmental organizations. GAO also interviewed a non-generalizable sample of 14 pregnant women detained or released by DHS and five non-governmental organizations in four field locations that had the greatest number of detentions of pregnant women, among other things. What GAO Found GAO's analyses of U.S. Immigration and Customs Enforcement (ICE) and U.S. Customs and Border Protection (CBP) data on pregnant women found: ICE detained pregnant women over 4,600 times from calendar year 2016 through 2018, with more than 90 percent resulting from CBP arrests. Sixty-eight percent of these detentions were for 1 week or less, while 10 percent were for more than 30 days. Seventy-eight percent of these initial detentions occurred at facilities staffed with ICE medical personnel. 
ICE has policies and detention standards that address a variety of topics regarding the care of pregnant women, such as pregnancy testing requirements, for which non-governmental organizations, professional associations, and federal agencies have issued recommended guidance. However, some facility types—which vary based on who owns, operates, and provides medical care at the facility—did not address all these pregnancy-related topics in their policies and standards, such as prenatal vitamins, as of December 2019. ICE has plans to address the gaps GAO identified in these facility types, including updating some of its policies and detention standards in February 2020. With regard to CBP, its facilities are designed for holding individuals for no more than 72 hours, and therefore are not equipped to provide long-term care. Nonetheless, CBP has some policies and standards regarding pregnant women for its short-term facilities, including those related to nutrition and the circumstances in which restraints could be used. GAO's analyses of inspections and complaint mechanisms offered the following insights into the care provided to pregnant women: ICE inspections found 79 percent or greater compliance with most of its pregnancy-related performance measures. For example, inspections found 91 percent of pregnant women were seen by an obstetrician-gynecologist within 30 days of pregnancy confirmation, from December 2016 through March 2019. According to ICE officials and agency documentation, ICE has processes in place to address non-compliance. Additional inspections identified pregnancy-related issues at 13 facilities from January 2015 through July 2019. The facilities or ICE have taken actions to address the issues. CBP generally relies on offsite care for pregnant women, and as a result has limited information on the care it provided. However, CBP has efforts underway to enhance medical support at selected facilities.
Over 100 complaints were filed about ICE's and CBP's care of pregnant women from January 2015 through April 2019. Of these complaints, 3 were substantiated or partially substantiated, and 24 were unsubstantiated or partially unsubstantiated. In most cases there was not enough information for the investigating agency to determine whether proper care had been provided.
Background Ballistic Missile Threats Ballistic missiles, which foreign adversaries generally use as a deterrent or instrument of coercion, are becoming increasingly important weapons to support military and political objectives. These weapons continue to proliferate and show advances in mobility, reliability, in-flight maneuverability, accuracy, and ability to reach longer distances. According to the defense intelligence community, there has been a dramatic increase in ballistic missile capabilities over the last decade, and the over 20 countries that already possess ballistic missiles are likely to pursue further expansions in their quantities and capabilities. Figure 1 shows the lineup of operational ballistic missiles from North Korea and Iran, two of the various countries that pose threats to the United States and its allies and are of concern to the BMDS. Ballistic missile threats are generally categorized by their range (i.e., ground distance covered between the launch point and impact of the missile) as shown in figure 2 below. The configuration of a ballistic missile is also largely determined by the range a missile is expected to travel. For example, longer range ballistic missiles typically have two or three distinct sections, known as stages, that separate during flight and each has an independent propulsion system to ensure the warhead reaches its target. Shorter range ballistic missiles generally only have one section, or a single stage, that remains intact until the warhead reaches its intended target and detonates. Ballistic missiles may also carry countermeasures or adversaries may employ tactics and techniques, both of which are intended to confuse missile defense systems. For example, countermeasures can include penetration aids that are released during flight, such as decoys, which are intended to complicate the ability of missile-tracking sensors and missile defense interceptors to identify the warhead among the multiple objects. 
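The range-based categorization described above can be expressed as a simple classification rule. As a minimal sketch, the function below uses the kilometer thresholds commonly applied to the categories in figure 2 (short-range under 1,000 km; medium-range 1,000–3,000 km; intermediate-range 3,000–5,500 km; intercontinental over 5,500 km); the function name and exact cutoffs are illustrative assumptions, since the figure itself is not reproduced here.

```python
# Illustrative sketch only: classify a ballistic missile by its maximum
# range in kilometers. The thresholds below are the commonly used
# category boundaries and are an assumption, as the report's figure 2
# is not reproduced in this text.
def classify_by_range(range_km: float) -> str:
    if range_km < 1000:
        return "short-range (SRBM)"
    elif range_km < 3000:
        return "medium-range (MRBM)"
    elif range_km < 5500:
        return "intermediate-range (IRBM)"
    else:
        return "intercontinental (ICBM)"
```

For example, under these assumed thresholds a missile with a 2,000 km maximum range would be classified as medium-range, while one exceeding 5,500 km would be classified as intercontinental.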
Challenging tactics and techniques can include structured attacks, such as simultaneously launching a number of missiles or outfitting a single missile with multiple warheads. In addition, some newer missiles are capable of traveling at greater speeds, performing maneuvers during all phases of flight, and remaining in the atmosphere for longer durations of their flight. These newer missiles, generally referred to as hypersonics, possess a combination of high speed, maneuverability, and relatively low altitude that can make them a challenging target for missile defense systems to track and engage. According to a publicly released intelligence assessment, nearly all adversaries that possess ballistic missiles have devised various means to confuse missile defense systems. Defense Intelligence Community’s Roles in Assessing Missile Threats and Supporting MDA Acquisitions In November 2010, the Defense Intelligence Agency (DIA) established the Defense Intelligence Ballistic Missile Analysis Committee (DIBMAC) to oversee and coordinate intelligence analysis and threat assessment production activities pertaining to foreign ballistic missile developments. Under the leadership of this committee, the defense intelligence community performs important stakeholder, advisor, and oversight functions in support of MDA’s acquisitions by (1) producing threat assessments; (2) providing advice on important threat-related issues pertaining to BMDS acquisition; and (3) validating threat models and reports. Table 1 provides further explanation of these roles and additional information on the defense intelligence community is in appendix I. In November 2013, DOD’s acquisition leadership issued a memorandum that requested DIA work with the acquisition community to produce more timely, relevant, and dynamic defense intelligence community threat assessments for DOD acquisition programs. 
The memorandum notes that DOD acquisition program officials expressed concerns about the timeliness of threat assessments due to the lengthy process and varying timelines that sometimes left them with threat assessments that did not contain the most up-to-date information. In addition, the defense intelligence community noted its concerns with the significant duplication in producing certain threat assessments, which placed a huge burden on its manpower and resources. Consequently, DOD leadership directed the acquisition customers and defense intelligence community to work together to improve threat assessments and in 2016 the defense intelligence community set forth its planned revisions to threat assessment processes and products. Subsequent revisions include creating a library of threat modules and replacing a former type of threat assessment with a new Validated Online Lifecycle Threat (VOLT) report, among others. These revisions were codified in the defense intelligence community’s policies in September 2016 and in DOD policy in August 2017. However, defense intelligence community officials noted that they are still in the process of implementing these revisions. MDA’s Responsibility for Defending Against Ballistic Missile Threats MDA is developing a variety of missile defense systems, known as elements, including sensors, interceptors, and battle management and communication capabilities. The ultimate goal is to integrate these various elements to function as a layered system called the Ballistic Missile Defense System (BMDS). The BMDS elements, when integrated, are designed to destroy enemy missiles of various ranges, speeds, sizes, and performance characteristics in different phases of flight, as seen in figure 3 below. 
When MDA was established in 2002, the agency was granted exceptional flexibilities to diverge from DOD’s traditional acquisition lifecycle and defer the application of acquisition policies and laws designed to facilitate oversight and accountability until a mature capability is ready to be handed over to a military service for production and operational use. In particular, MDA was exempted from DOD’s standard requirements-setting process and instead uses a unique and flexible requirements-setting process that is intended to enable MDA to quickly develop and field useful but limited capabilities, which can be incrementally improved over time and adapted to address changes in the threat. MDA also implemented a tailored process that is intended to use defense intelligence community threat assessments in a way that enables the BMDS to defend against a broad range of uncertain and evolving threats. MDA uses defense intelligence community threat assessments as the foundation for developing threat models and establishing wide-ranging critical threat parameters upon which to design, develop, and test the BMDS. Specifically, MDA’s process includes the following: Design: MDA uses threat assessments to select a set of threat models in which it incrementally designs BMDS capabilities to defend against. MDA combines the capabilities from the selected threat models into parameters, forming what MDA refers to as the “parametric threat space.” MDA assigns subsets of the threat space to each of the BMDS elements to inform the design of their respective systems. Development: MDA assigns specific threat models to each of the elements for use in simulations as they are undergoing development. MDA uses these threat models to verify that the element’s system design has the capability necessary to defend against its assigned threat space. 
Test: Toward the end of BMDS element development, MDA coordinates with the warfighter and test and evaluation communities to select specific threat models for use in testing to assess the performance of the BMDS elements. MDA also uses its threat models to prepare for flight tests to help ensure that the BMDS elements have a high probability of achieving their test objectives, such as successfully intercepting the target. Operational capability: MDA uses its threat models as the foundation for algorithms, which are embedded into the BMDS to enable its sensors and interceptors to determine which object(s) amongst a group of objects (e.g., countermeasures, debris, etc.) is lethal. This capability is referred to as “discrimination.” Mounting Challenges Are Delaying the Availability of Threat Assessments, but Opportunities Exist to Help MDA Receive the Information It Needs Various challenges have recently emerged that have affected the availability of the threat assessments MDA needs to inform the agency’s acquisition decisions. Challenges include an upsurge in threat missile activity, which has increased the overall demand for threat assessments; a transition period as the defense intelligence community works through how to implement recent revisions to its processes and products; and MDA’s request for accelerated support from the defense intelligence community. Defense intelligence community officials say they are contending with all of these challenges without the provision of additional manpower or resources. Consequently, defense intelligence community officials have stated that their manpower and resources are constrained, which can affect the timely delivery of threat assessments to customers, such as MDA. 
If MDA does not have the threat information it needs when it is needed, the delay of information could result in setbacks for the agency’s weapon system design, development, and testing, or could put the agency in the position of moving forward without the requisite information, thereby increasing the risk of performance shortfalls and costly retrofits. However, MDA has opportunities to mitigate these challenges by collectively prioritizing its threat assessment requests and working through existing venues with the defense intelligence community to determine what additional resources may be needed to secure the accelerated support that it needs. Various Challenges Are Delaying the Availability of Defense Intelligence Community Threat Assessments MDA Uses for Acquisition Design and Testing Decisions Increased Threat Activity One challenge for the defense intelligence community is a recent upsurge in threat missile activity, which has increased MDA’s requests for threat assessments. For example, ballistic missile flight testing has more than doubled from 2005 to 2016, from about 70 tests in 2005 to nearly 180 tests in 2016, and the most notable increases have occurred since 2010 (see figure 4). This upsurge of threat missile activity increases the urgency for the defense intelligence community to provide the requisite type of threat assessments to MDA to enable the agency to counter and defeat such threats; however, defense intelligence community officials have said that manpower and resource constraints have limited their ability to do so. In 2016, we reported on how the defense intelligence community’s manpower and resource constraints have impacted its ability to provide threat assessments. Since then, defense intelligence community officials have said that the manpower and resource constraints have not been resolved, but threat missile activity has increased. 
For example, some countries have recently displayed or flight tested new threat missiles capable of reaching the United States. When new threat missiles emerge, MDA requests missile-specific threat assessments—known as reference documents—from the defense intelligence community to understand their size, performance characteristics, and signature when detected by a sensor. This detailed information on the threat missiles enables MDA to build the threat models used to design, develop, and test BMDS weapon systems. Defense intelligence community officials have said that, although important, missile-specific threat assessments utilize considerable manpower and resources because they can be labor-intensive, lengthy, and take months, and at times a year or longer, to prepare. According to these officials, one way to minimize the workload and shorten the preparation timeframe is for MDA to differentiate the specific information that it needs from anything that might be extraneous. As a simplified and hypothetical example, defense intelligence community officials explained that MDA may only need some simple, general information about a missile or conversely it may need complex, highly-detailed information on everything about the missile from tip to tail. The amount of time and effort it would take defense intelligence community officials to gather the information in these two scenarios would vary significantly. MDA officials have acknowledged that some extraneous information may be gathered and included in these threat assessments but noted that, at the time they request a threat assessment, they may not yet fully understand what information is essential for their purposes. Therefore, they prefer to have as much information as possible, with the ability to determine whether and how to use it. Defense intelligence community officials, however, told us that they believe this is an inefficient use of their manpower and resources, especially given current constraints. 
Revisions to Processes and Products

Another challenge for the defense intelligence community is the implementation of recent revisions to its threat assessment processes and products, which apply to all DOD acquisition programs. In 2016, in response to the November 2013 memorandum from DOD’s acquisition leadership, the defense intelligence community began overhauling its threat assessment processes and products to produce more timely, efficient, and relevant information. See table 2 for an overview of these revisions. While each of these revisions has potential benefits, defense intelligence community officials have said that implementing the revisions has been more time-consuming and difficult than anticipated, which has affected their ability to provide certain threat assessments to MDA when needed. For example, MDA and the defense intelligence community were initially uncertain about the responsibilities and processes for creating a VOLT report for the BMDS. Although it took some time to resolve these uncertainties, MDA is now compiling its own country-specific threat assessments—known as the BMDS VOLT report—which DIA then validates. The military services generally have their own defense intelligence production centers, and therefore, a means for compiling VOLT reports. MDA, however, uses information from multiple defense intelligence production centers and does not possess its own production center. In September 2017, MDA reached out to DIA on this matter and DIA responded that, per the DOD policy update, it does not see anything that would preclude MDA, as a DOD component, from compiling VOLT reports. DIA stated that MDA compiling its own VOLT report aligns the agency with the rest of the DOD acquisition community. MDA is waiting on threat modules from the defense intelligence community to prepare its preliminary BMDS VOLT report, which MDA will use to inform acquisition decisions.
MDA needs specific threat modules from the defense intelligence community, including those for six specific countries, in order to compile its preliminary BMDS VOLT report. However, defense intelligence community officials have said that they are still in the process of creating some of the digitized threat modules MDA needs, because it has taken more time and effort than they expected to standardize the threat modules’ content and coordinate production across multiple defense intelligence community production centers. Consequently, MDA is planning to publish its preliminary BMDS VOLT report in 2019 (table 3). In the meantime, without the preliminary BMDS VOLT report or the digitized threat modules used to compile it, MDA is reliant on threat assessments written between 2014 and 2016 for some of its acquisition decisions. For example, MDA recently made design decisions for certain BMDS elements using these older threat assessments, which have not yet been updated. As a result, those weapon systems could have capability gaps or performance shortfalls that present risks for the warfighter. MDA has attempted to fill the void for digitized threat modules and the preliminary BMDS VOLT report by submitting ad hoc requests for threat assessments to the defense intelligence community, but this has only added to the defense intelligence community’s workload and exacerbated delays.

Request for Accelerated Delivery of Threat Modules

Moving forward, MDA has asked the defense intelligence community to provide the digitized threat modules on an accelerated schedule to ensure the agency can compile BMDS VOLT reports in a timely manner to inform its acquisition decisions; however, some defense intelligence production centers have said that an accelerated schedule will be difficult, if not impossible, without additional manpower and resources.
Specifically, MDA wants the defense intelligence community to provide the digitized threat modules every year, as opposed to every two years as required by DOD policy. MDA has stressed the importance of having these digitized threat modules on an accelerated schedule in order to be responsive to threat advancements and mitigate the potential for capability gaps or performance shortfalls in its weapon systems. Defense intelligence community officials have acknowledged MDA’s need to have the digitized threat modules on an accelerated schedule but are concerned about their ability to provide them due to personnel and resourcing issues at some defense intelligence production centers. For example, two defense intelligence production centers have said that MDA’s request for an accelerated schedule is currently unrealistic due to their manpower and resource levels. Defense intelligence officials have said that once the initial digitized threat modules are created, the threat modules will be easier and quicker to update, but whether they can provide them annually is still being determined.

Opportunities Exist That Could Help MDA and the Defense Intelligence Community Address Threat Assessment Availability Challenges

Although MDA has the capability to centrally and collectively prioritize its threat assessment requests submitted to the defense intelligence community, it currently prioritizes its threat assessment needs through two distinct, individual lanes—country-specific and missile-specific—supplemented by informal discussions with the defense intelligence community. According to MDA, the individual lanes are as follows:

1. Country-specific threat assessments (i.e., threat modules for BMDS VOLT reports) are prioritized via the VOLT Threat Steering Group, which is co-chaired by MDA and DIA.
The VOLT Threat Steering Group’s objectives are to determine MDA’s threat module requirements, to achieve concurrence on the threat modules used in the BMDS VOLT report, and to review the BMDS VOLT production schedule. The first VOLT Threat Steering Group meeting was held in April 2018, and during that meeting, MDA presented its prioritized list of threat assessments by adversary country to the defense intelligence community personnel in attendance.

2. Missile-specific threat assessments (i.e., reference documents used to build threat models) are prioritized via an annual intelligence mission data process managed by the Joint Staff. Through this process, MDA ranks the data it needs for threat missiles from most to least critical (119 total threat missiles in 2018).

With these two individual lanes for prioritization, MDA treats each type of threat assessment as independent and unrelated. According to MDA, the agency maintains these individual lanes for prioritizing its threat assessment requests because the requests can be more easily managed by the defense intelligence community components that develop the threat assessments. For example, MDA stated that requests for missile-specific threat assessments are often routed to intelligence production centers while requests for country-specific threat assessments are often routed to DIA’s regional centers (see appendix I for more information on defense intelligence community components). According to MDA, the vast majority of new requirements submitted to the defense intelligence community are also accompanied by an informal verbal discussion, and if MDA’s priorities shift because a new threat emerges, MDA stated that it can convey that shift to the defense intelligence community in an effort to work out the best path forward.
If the defense intelligence community cannot meet MDA’s needs, MDA stated that it works with the defense intelligence community to determine the best course of action for resolving prioritization issues. For example, MDA cited a recent example where it had worked with the U.S. Navy’s Office of Naval Intelligence to develop a threat model production schedule for two threat systems; however, the emergence of a new threat shifted MDA’s priorities. MDA was able to understand the effect of choosing one system ahead of the others based on the priority and projected production timelines. MDA cited another recent example where it had similarly worked with the U.S. Air Force’s National Air and Space Intelligence Center to prioritize production of a threat model for a new, unique threat. After some initial informal discussions and questions about whether the threat model production effort was a top priority for MDA, both agreed in a meeting in January 2019 to lower the priority for the model production effort. The specific threats referenced in the examples above have been omitted because they are classified. However, MDA’s approach of prioritizing its threat assessment needs through individual lanes creates the potential for unresolved, competing priorities because the defense intelligence community produces threat assessments collaboratively rather than disparately. Defense intelligence community officials told us that the underlying analyses that support both country-specific and missile-specific threat assessments are developed and reviewed by many of the same subject matter experts and managers within the defense intelligence community. Defense intelligence community officials told us that they have no way of knowing whether the information to build a specific threat model is a greater or lesser priority than updating a particular threat module needed to support the BMDS VOLT report. 
Our prior best practices work found that successful commercial companies employ a formal process for prioritizing their investments collectively rather than as independent and unrelated initiatives. MDA instead stovepipes its threat assessment prioritization through individual lanes and informally discusses its collective priorities with the defense intelligence community. Consequently, MDA’s requests, and resulting output from the defense intelligence community, may not be based on the collective order of importance, as depicted in figure 5. MDA relies on both country- and missile-specific threat assessments for its acquisitions, as each characterizes threats in unique ways and for different purposes, and it uses other requests to fill information gaps, as needed. Thus, all of MDA’s requests are important, but one among them may be the most important or urgent due to the timing of an upcoming design or testing decision. In the example illustrated above in figure 5, the most important request is for a country-specific threat assessment; however, it will not likely be the next one out of the defense intelligence community’s queue because there is a missile-specific request ahead of it. Hence, MDA may have the information it needs to build the threat model used to test one weapon system’s performance, but it may delay the country-specific information it needs to make design decisions for another. This delay in the country-specific information could put MDA in a position of moving forward with design decisions without the requisite information or relying on outdated information, which increases the risk for performance shortfalls and costly retrofits. One opportunity that MDA has to address the availability of threat assessments from the defense intelligence community is to collectively prioritize its threat assessment requests based on the order of importance. 
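The difference between working requests in the order received and working them by collective importance can be illustrated with a simplified sketch. The request names, priority values, and the Request class below are hypothetical, invented purely for illustration; they do not come from MDA or the defense intelligence community:

```python
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class Request:
    priority: int                       # lower number = more urgent (hypothetical values)
    name: str = field(compare=False)    # excluded from ordering comparisons
    lane: str = field(compare=False)

# A hypothetical mix of requests, listed in the order they were submitted
requests = [
    Request(3, "Missile reference doc A", "missile-specific"),
    Request(1, "Country threat module X", "country-specific"),
    Request(2, "Missile reference doc B", "missile-specific"),
]

# First-in, first-out: requests are worked in the order received, so the
# most urgent item (priority 1) is served only second.
fifo_order = [r.name for r in requests]

# Collective prioritization: a single ranked queue across both lanes
# surfaces the most important request first.
heap = list(requests)
heapq.heapify(heap)
collective_order = [heapq.heappop(heap).name for _ in range(len(heap))]

print(fifo_order)        # order received
print(collective_order)  # order of importance
```

In the sketch, merging both lanes into one ranked queue moves the country-specific module, the most urgent item, to the front, whereas the first-in, first-out intake serves an earlier missile-specific request ahead of it, mirroring the dynamic figure 5 depicts.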
We have previously identified collective prioritization as a best practice—specifically, that it is important for an agency to regularly evaluate the totality of its needs or tasks, to determine whether specific ones should be prioritized ahead of others, based on the costs, benefits, and risks. While MDA has no formal requirement to collectively prioritize its threat assessment requests, defense intelligence community officials said that they have had discussions with MDA through existing venues and requested that it do so to ensure it has the most urgently needed information. MDA has the capability to collectively prioritize its threat assessment requests because all of the requests go through a centralized intelligence requirements group within the agency’s engineering directorate. This group has insight into the totality of the agency’s threat assessment requests and is uniquely positioned to make determinations about the order of importance among them. As the group submits requests to the defense intelligence community, the defense intelligence community responds to the requests in the order that they were received, because, as we previously found, the defense intelligence community is not required to prioritize the requests, does not currently possess the capability to do so, and would not be in a position to dictate to an agency what is most important. Another opportunity for MDA to address the availability of threat assessments is through further collaboration with the defense intelligence community to determine the extent of additional resources that would be needed to enable accelerated support. When intelligence support requirements exceed the defense intelligence community’s responsibilities, DOD acquisition programs are generally required to account for resources to augment intelligence support. 
For example, according to defense intelligence community officials, the Air Force is providing one of the defense intelligence community’s production centers with additional resources to collect data and devise tools primarily to support a specific major defense acquisition program via a military interdepartmental purchase request because the program’s request exceeds the defense intelligence community’s responsibilities. According to MDA, intelligence mission data shortfalls are currently identified through an annual departmental review process. MDA stated that, in fiscal year 2019, DOD approved additional future funding to help address intelligence mission data shortfalls for all of the military services, including MDA. MDA has not provided the defense intelligence community with additional resources for an accelerated schedule to update threat modules more frequently. MDA has requested that the defense intelligence community update the digitized threat modules it needs to compile a BMDS VOLT report every year to ensure that it has the updated threat information needed for acquisition decisions; however, the defense intelligence community is only required to update the digitized threat modules every two years. Some defense intelligence community officials have acknowledged MDA’s need to have an accelerated schedule, but have communicated to MDA that, given their current manpower and resource constraints, the accelerated schedule is unrealistic without additional resources. Thus, MDA’s request for faster updates to the digitized threat modules exceeds what the defense intelligence community is currently able to deliver.
With existing venues, like the VOLT Threat Steering Group, MDA and the defense intelligence community have a forum to further collaborate and identify what additional resources are needed and the potential funding scenarios to support an accelerated schedule for threat module production. Without collaboration through these existing venues, MDA and the defense intelligence community may not be utilizing an available method to ensure their individual needs are met. According to our best practices for inter-governmental agency collaboration, it is important for the inter-reliant agencies to collaboratively identify the resources (information, manpower, and funding) needed to accomplish their respective missions. Doing so enables the agencies to have a common understanding and explore opportunities to leverage each other’s resources, thus realizing benefits that would not be available if they were working separately. Therefore, working together, MDA and the defense intelligence community would be better positioned to determine how to best meet their respective needs.

Opportunities Exist for MDA to Further Engage the Defense Intelligence Community on BMDS Acquisition to Address the Challenges of Keeping Pace with the Threat

MDA uses defense intelligence community threat assessments to inform its acquisitions, but the agency has not fully engaged the defense intelligence community on challenges in preparing the BMDS for existing and emerging threats. According to MDA, the rapid pace of threat evolution presents significant challenges for the agency to sufficiently plan for emerging threats. Although the defense intelligence community is uniquely positioned to assist MDA in addressing these challenges, the agency generally limits the defense intelligence community’s insight into and input on critical threat-related BMDS acquisition processes and decisions, such as establishing the BMDS threat space and assigning threat parameters and threat models to BMDS elements.
Major defense acquisition programs are generally required to engage the defense intelligence community on how to design and test weapon systems, but MDA generally does not, due to the acquisition flexibilities DOD has granted to the agency. Moreover, DIA is currently unable to validate MDA’s threat models, as required by DOD policy, because MDA does not follow the department’s best practices on models and simulations. MDA has steadily increased its outreach to the defense intelligence community and other stakeholders over the past few years, but opportunities remain for more comprehensive engagement on key challenges the agency faces with keeping pace with the threat.

MDA Faces Challenges in Preparing the BMDS for Existing and Emerging Threats

According to MDA, the rapid pace of threat evolution presents significant challenges for the agency to sufficiently plan for emerging threats. MDA currently faces some difficult choices regarding what steps it needs to take and in what order to address recent threat advancements. In making these decisions, MDA has an opportunity to engage the defense intelligence community on whether and how it should make changes to the BMDS threat space, threat parameters, and threat models the agency uses as design requirements and test cases for BMDS elements. As previously noted, the defense intelligence community plays important stakeholder, advisor, and oversight roles for MDA’s acquisitions. Although the department has provided MDA with flexibilities on following many of the requirements that specifically define when and how major defense acquisition programs are to engage the defense intelligence community, DOD policy requires MDA to vet its threat models and consult with the defense intelligence community on threat-related acquisition matters.
DOD, senior defense officials, and expert panels supported by DOD have consistently maintained that the defense intelligence community’s direct involvement in MDA’s acquisitions is critical to staying ahead of the threat:

In a written response following a 2002 congressional hearing, a senior defense official stated that every effort was being made to coordinate development of the document establishing the BMDS threat space with the defense intelligence community and that the defense intelligence community’s participation was critical to the agency’s success.

In 2010, DOD’s Ballistic Missile Defense Review similarly found the need to maintain a strong focus by the defense intelligence community on the ballistic missile threat and that accurate and timely intelligence should play a vital role in informing BMDS planning.

In 2010, an expert panel known as JASON (not an acronym) found that MDA lacked sufficient plans for improving discrimination and that the agency risked falling behind the evolution of the threat’s countermeasure capabilities. The study recommended that DOD form stronger two-way connections between MDA and defense intelligence agencies.

In 2012, the National Research Council found that MDA did not follow through on efforts to improve discrimination and that much of the agency’s expertise on discrimination was lost in the late 2000s. The study recommended that MDA seek assistance from experts with experience in understanding sensor data for threat missiles.
In 2018, DOD’s National Defense Strategy stated that modernizing missile defense, among other items, was necessary to keep pace with adversaries and that the department must expand the role of intelligence analysis throughout the acquisition process in order to streamline rapid, iterative approaches for delivering performance at what DOD refers to as “the speed of relevance.”

During a 2018 congressional hearing, the Under Secretary of Defense for Research and Engineering stated that catching up to near-peer adversaries in missile defense can be achieved by exceeding their technical capabilities and that the intelligence community was critical to making sure that we are outpacing our adversaries.

MDA Limits the Defense Intelligence Community’s Insight Into and Input on Some Critical Threat-Related BMDS Acquisition Processes and Decisions

Although MDA uses defense intelligence community threat assessments to inform BMDS acquisition, the defense intelligence community generally has limited insight into the BMDS, which is unprecedented among major defense acquisition programs. When MDA was established in 2002, DOD granted the agency exceptional flexibilities to diverge from the standard acquisition framework that most major defense acquisition programs follow. These flexibilities enable MDA to forego obtaining the defense intelligence community’s input on some critical threat-related BMDS acquisition processes and decisions, such as how MDA establishes the:

threat space that informs overall BMDS design and development;

threat parameters assigned to each BMDS element as design requirements; and

threat models assigned to each BMDS element as test cases for design reviews and testing.

However, according to MDA, the new BMDS VOLT report will serve as the source document for specific details on the BMDS threat space, threat parameters, and threat models.
Although MDA may leverage the defense intelligence community’s threat assessments, MDA has not included the defense intelligence community in these key threat-related BMDS acquisition processes and decisions. For example, in response to a questionnaire we sent to MDA in May 2018, agency officials stated that decisions related to the threat parameters it assigns to the different BMDS elements should be left to MDA, as it is within the agency’s purview and authority to design threats as it deems necessary for research, development, test, and evaluation purposes. Moreover, MDA indicated that the defense intelligence community should provide the agency with the best intelligence information on adversary missile capabilities, in a timely manner, to support the agency’s mission. As such, MDA stated it does not support obtaining the defense intelligence community’s concurrence on the threat parameters it assigns to the BMDS elements. MDA has provided the defense intelligence community with some insight into the BMDS but not to the same extent DOD generally requires of major defense acquisition programs. For example, MDA has held a number of “immersion days” over the past nine years, which allow the defense intelligence community to receive briefings from MDA programs on priorities, future developments, and weapon system operations. According to MDA, it also assigns intelligence portfolio managers to BMDS elements and their mission, among other items, is to keep the defense intelligence community informed on key program developments and how intelligence feeds into the agency’s threat-related acquisition processes and decisions. In addition, MDA has briefed the DIBMAC on how it uses threat assessments to inform BMDS acquisition. However, defense intelligence community officials stated that they generally lack fundamental information on the BMDS and have no visibility into the BMDS threat space, threat parameters, or test cases MDA assigns to the BMDS elements. 
In contrast, for most major defense acquisition programs, the defense intelligence community is integrally involved in determining the:

threat(s) of record upon which requirements of the weapon system are based;

key performance parameters and attributes of the weapon system;

threat parameters that could critically degrade or negate the weapon system; and

operational threat environment the weapon system is tested against.

These insights, enabled by DOD’s standard requirements-setting process and acquisition framework, are intended to provide the defense intelligence community with in-depth knowledge of the design and performance requirements for most major DOD weapon systems. Officials from various other organizations we met with, such as the Joint Staff, contractors, warfighters, and test and evaluation organizations, expressed concerns about MDA’s ability to unilaterally define the threats it designs the BMDS against. As one MDA prime contractor told us, what really matters is how the BMDS would perform in the real world against real threats. Defense intelligence community officials acknowledged that MDA, as the BMDS developer, has a legitimate need to explore threat capabilities beyond those that the intelligence community has observed from specific adversaries. However, defense intelligence community officials rejected a sentiment expressed to us by MDA officials that the defense intelligence community lacks expertise in understanding the bounds of threat capabilities. To the contrary, according to defense intelligence community officials, this is exactly the type of analysis at which the defense intelligence community excels. In choosing not to engage the defense intelligence community on these key threat-related BMDS acquisition processes and decisions, MDA runs the risk of not sufficiently planning for existing and emerging threats.
MDA’s reluctance to provide the defense intelligence community with insight into or input on some threat-related BMDS acquisition processes and decisions is consistent with how MDA has engaged other DOD stakeholders and oversight groups. Our prior work on defense acquisitions has shown that establishing buy-in from decision makers is a key enabler of achieving better acquisition outcomes because DOD components provide varying perspectives due to their unique areas of expertise and experience. However, in May 2017, we found that MDA generally limits the warfighter’s input on the requirements it pursues and overlooked stakeholder concerns on the acquisition strategy for a redesigned kill vehicle for the Ground-based Midcourse Defense system. We made recommendations aimed at increasing stakeholder engagement and oversight in BMDS acquisition, such as coordinating operational requirements with the warfighter and obtaining input from DOD’s Office for Cost Assessment and Program Evaluation (CAPE) on acquisition strategies for new efforts. DOD’s acting Assistant Secretary of Defense (Acquisitions) did not concur with the recommendations, stating that warfighters lacked the skillset to determine operational BMDS requirements and existing DOD policy does not require MDA to obtain CAPE’s concurrence on acquisition policies. We continue to maintain that DOD should implement the recommendations.

DIA Is Currently Unable to Validate MDA’s Threat Models, as Generally Required by DOD Policy

MDA builds its own threat models to support BMDS design, development, and testing but it does not validate its threat models with DIA, which is inconsistent with DOD policy and best practices. Although the defense intelligence community builds threat models, MDA cannot currently use those models as-is because they are generally not compatible with MDA’s modeling and simulation framework.
Even with MDA using its own threat models, DOT&E has found that integrating the various BMDS models and presenting them with a common threat scene has been an extremely challenging task for MDA. Moreover, MDA’s BMDS modeling and simulation architecture requires highly detailed threat models for simulations to function properly. Defense intelligence community officials stated that they generally do not need the same level of detail MDA requires for the types of analyses the defense intelligence community performs. In addition, according to a March 2018 MDA memorandum, the agency was previously told by representatives of the DIBMAC that they do not have the staff or resources to produce the high volumes of detailed threat models that MDA needs to support BMDS development and testing. Therefore, MDA continues to build its own threat models for use in BMDS development and testing. MDA uses defense intelligence community threat assessments to build its threat models, but independent evaluators have not been able to fully trace MDA’s threat models to defense intelligence community threat assessments. According to a briefing MDA presented to the defense intelligence community in September 2018, every target, model, and test can be traced back to defense intelligence data. However, in August 2018, the U.S. Army issued a memorandum for MDA stating that the BMDS Operational Test Agency (OTA)—the agency responsible for independently analyzing the verification and validation data for models used in operational testing—was only able to certify some of the threat models used in a recent ground test. In other ground tests, though, the BMDS OTA was able to trace MDA’s threat models back to defense intelligence community threat assessments. 
In February 2019, DOT&E reported that (a) credible threat models are the linchpins of BMDS models and simulation; (b) reducing threat model uncertainty is a high priority; and (c) MDA and the BMDS OTA should ensure that MDA-developed threat models are representative of the defense intelligence community’s understanding of the threat. MDA also has not implemented best practices established by DOD’s Models and Simulation Coordination Office that would enable DIA to be in a position to validate MDA’s threat models. According to DOD best practices on modeling and simulation, the validation agent should: (1) be brought on in the beginning of the modeling and simulation development process; (2) work closely with the model developers as the models are built and tested; and (3) perform validation as a continuing activity of the overall process of developing and preparing a model for use or reuse in a simulation. Conversely, defense intelligence community officials stated that they lack sufficient insight into and input on how MDA builds and uses threat models. For example, the defense intelligence community has emphasized to MDA that caveats need to be carried through with the model data and voiced concerns about the engineering judgments the agency makes in its threat models, because these judgments could lead to the BMDS performing well or poorly for reasons not based on the actual threat. Given these uncertainties and the defense intelligence community’s lack of insight into the purposes for which MDA uses its threat models, DIA lacks the insight and input necessary to validate MDA’s threat models. Although MDA has previously expressed interest in validating its threat models with the defense intelligence community, long-standing obstacles remain. During a May 2018 meeting between MDA and the DIBMAC, defense intelligence community officials identified the lessons they have learned from working with other acquisition programs to validate threat models. 
Model validation can be achieved if the acquisition program:

establishes a partnership with the defense intelligence community;

prioritizes its threat modeling needs;

recognizes there are limits to how many threat models can be built in a given time;

provides in-depth insight into its threat modeling needs and weapon system’s capabilities;

discusses how the models will be applied;

jointly defines model acceptance criteria early in the process;

provides resources, including funding and staff; and

invests in the defense intelligence community’s capability and capacity.

MDA officials stated that the agency desires to have its threat models validated but noted that the defense intelligence community does not validate models produced by other organizations. MDA officials also emphasized that the defense intelligence community cannot meet MDA’s timeline for building threat models, whereas the agency can. In addition, MDA officials indicated to us that they do not believe it is practical to provide the amount of insight defense intelligence community officials told us they would need in order to validate MDA’s threat models. MDA officials told us that the only way in which the defense intelligence community could obtain such insight is by being co-located with MDA’s threat modelers as the models are being built. However, the 2010 JASON study found that this type of close working arrangement between MDA engineers and defense intelligence analysts is necessary to effectively plan for emerging threats. Defense intelligence community officials also clarified for MDA that the defense intelligence community can validate models produced by another agency but it would require the defense intelligence community having detailed knowledge of everything used to produce the model.
As a result, although DOD policy generally requires that threat models used to support acquisition decisions be validated by DIA, MDA has yet to validate any of the numerous threat models it has developed since 2004. Without independent validation, MDA runs the risk that DOD and congressional decisionmakers may not have confidence that the agency's plans and proposals for developing the BMDS are appropriate and sufficient to address the threat because any flaws or bias in MDA's threat models can have significant implications on the BMDS's overall performance. According to a Federally Funded Research and Development Center publication describing its efforts supporting MDA threat modeling, acquisition influences can place pressure on MDA threat modelers to tailor the missile threats to suit the currently feasible BMDS design. In May 2017, we found a parallel circumstance where, in the absence of warfighter validation of MDA-established requirements, the agency made critical design choices for three new BMDS efforts. These design choices reflected the needs and preferences of MDA ahead of the warfighter, potentially compromising performance to the extent of not being able to defeat current and future threats.

MDA Has Steadily Increased Its Outreach to DOD Stakeholders Over the Past Few Years, but Opportunities Remain for Further Engagement

MDA has undertaken a number of efforts over the past few years to generally increase stakeholder involvement in BMDS acquisition. The engagement efforts, in large part, are a result of efforts led by MDA's previous director to improve the agency's relationship with department stakeholders. In addition to previously serving as the Deputy Director for MDA, the Director also held a variety of assignments in operational, acquisition, and staff units within DOD.
When we met with the MDA Director in March 2018, he told us that he wanted to change the agency’s culture of limiting stakeholder input, noting that he had recently provided updated guidance to his leadership team and agency personnel on bringing stakeholders in early, engaging them more frequently and substantively, and ensuring that the agency has obtained their buy-in on major undertakings. The MDA Director also stated that he was willing to take some actions that could effectively address a recommendation we made in May 2017 intended to provide the warfighter with greater input on operational requirements for ballistic missile defense. Officials from several DOD organizations we met with over the course of our review observed that MDA’s engagement with their respective organizations was improving. In 2018, MDA began working with the defense intelligence community to determine a more appropriate level of involvement for the defense intelligence community throughout MDA’s acquisition activities. MDA and defense intelligence community officials agreed during a May 2018 meeting that processes could be put in place to develop intelligence-based countermeasure assessments if adequate resources are provided. MDA officials also acknowledged that the defense intelligence community would benefit from having a better understanding of how the BMDS responds to threats and agreed to work towards providing such information. Defense intelligence community officials stated that increased insight would allow them to better focus their intelligence collection, analysis, and production by knowing which threat parameters MDA most often uses and the specificity of those parameters. The defense intelligence community and MDA also agreed that providing defense intelligence community engineers with MDA program-level access would improve the support the defense intelligence community provides to MDA. 
MDA has also recently increased its outreach to the defense intelligence community on some early BMDS planning decisions, although opportunities for more comprehensive engagement remain. For example, MDA engaged the defense intelligence community on an analysis of alternatives the agency completed in February 2017 that assessed future sensor options for the BMDS. According to MDA officials, they are also engaging the defense intelligence community on another analysis of alternatives pertaining to defense against hypersonic missiles. In addition, MDA worked with the defense intelligence community to establish threat space parameters for some specific threat systems. Also, as noted earlier, over the last nine years, MDA has held 18 “immersion day” events with the defense intelligence community, half of which occurred in the last two years. Moving forward, MDA has opportunities to more comprehensively engage the defense intelligence community on updating the BMDS threat space and determining threat parameters and threat models assigned as design requirements and test cases for BMDS elements. In addition, MDA has recently begun placing greater emphasis on ensuring its models are credible. According to an internal MDA memorandum signed by the MDA Director in April 2018, a culture exists within the agency that generally tolerates the use of models that have not been sufficiently vetted and is too willing to accept the associated risk. The memorandum states that the agency’s goal is for all MDA personnel to help address this culture problem and that model verification, validation, and accreditation is a high priority for MDA. During a meeting with the BMDS OTA in October 2018, officials confirmed that MDA is taking steps to address the challenges raised in the memorandum. MDA also increased its outreach to the defense intelligence community in 2016 to coordinate on threat modeling efforts. 
In the past three years, MDA and the defense intelligence community have collaborated to quickly model several newly-observed threat missiles, according to MDA. Figure 6 below shows that MDA held 93 threat model coordination meetings with the defense intelligence community over the last four years, with more frequent meetings occurring in early 2016 and again in early-to-mid 2018. In addition, MDA is working with the defense intelligence community to address compatibility issues that currently prevent MDA from directly using the defense intelligence community’s threat models in BMDS ground testing. MDA plans to include a few missile trajectory models produced by the defense intelligence community in the models and simulation framework for the agency’s upcoming Ground Test-08 campaign. The Technical Interchange Meetings and pathfinder efforts for MDA directly using defense intelligence community threat models are improving collaboration between MDA and the defense intelligence community on threat modeling efforts. However, they do not provide MDA with a pathway for validating its threat models with DIA. Even if compatibility issues that currently prevent MDA from using defense intelligence community threat models could be resolved, the defense intelligence community is currently not resourced to build threat models for MDA. Moreover, although MDA has indicated that the Technical Interchange Meetings can include any topic of interest, the meetings do not provide defense intelligence officials with sufficient insight into how MDA builds its models, including the assumptions, caveats, or intended use of the models. According to MDA, the agency continues to hold discussions with the defense intelligence community and explore process improvements, as well as technical and resource requirements, to ensure the creation of valid, threat-representative models for BMDS development. 
In March 2018, the MDA Director told us that one of his priorities was to ensure that the agency was using appropriately validated models and acknowledged the importance of ensuring its threat models are sufficiently representative. In April 2018, MDA subsequently began holding meetings with the DIBMAC to define the issues preventing the defense intelligence community from validating MDA's threat models. MDA and the defense intelligence community met five times in 2018 to identify actions that would facilitate working together to develop threat models the defense intelligence community would be comfortable validating. During these meetings, both organizations agreed on specific actions intended to increase the defense intelligence community's involvement in MDA's threat modeling process. To achieve threat model validation, an initial plan was developed that included a combination of (a) MDA directly using aspects of defense intelligence community threat models; and (b) MDA partnering with the defense intelligence community to build threat models. MDA and the defense intelligence community plan to hold follow-on meetings in 2019 to further discuss the plan and review actions.

Conclusions

MDA is reliant on threat assessments from the defense intelligence community, as they inform what weapon systems the agency pursues, the design of those systems, and how those systems are tested prior to being delivered to the warfighter for operational use. However, the defense intelligence community has been facing a variety of challenges that are affecting its ability to provide MDA the threat assessments it needs, when it needs them. If MDA does not have the threat assessments it needs, when needed, the agency's weapon systems are at risk of being designed or tested against irrelevant or outdated information, which could result in performance shortfalls and costly retrofits.
MDA has opportunities to mitigate these challenges and risks by collectively prioritizing its threat assessment requests and working through existing venues with the intelligence community to determine if and to what extent additional resources may be needed to secure the support that it needs. If MDA does not take advantage of these opportunities, the defense intelligence community’s challenges will likely continue, which will impact the availability of threat assessments and increase the likelihood that MDA’s weapon systems will not be designed or tested against the most up-to-date threat information. In addition, MDA faces a steep challenge in developing the BMDS and fielding capabilities at a rate that keeps pace with the threat. MDA was previously informed by expert panels and senior defense leaders that it needed to work more closely with the defense intelligence community to better prepare for future threats or risk falling behind the threat. Given these challenges, it is imperative for MDA to make the most out of its available resources. Aside from providing MDA with threat assessments, the defense intelligence community is a resource MDA has yet to fully tap into. The defense intelligence community is uniquely qualified to assist MDA on fundamental and critically important BMDS acquisition processes and decisions, such as establishing the BMDS threat space and the threat parameters and models it assigns to the BMDS elements. Moreover, after nearly 15 years of building numerous threat models, MDA has yet to fully implement a plan for DIA to validate these threat models, as generally required by DOD policy. However, MDA has recently begun laying the groundwork for more comprehensive engagement with the defense intelligence community through efforts which have the potential to address long-standing obstacles that have prevented DIA from validating MDA’s threat models. 
Resolving these issues would help MDA keep pace with emerging threats and improve the BMDS's viability to defend against the complex missile threats of the future.

Recommendations for Executive Action

We are making a total of three recommendations to DOD:

The Director, MDA should coordinate with the defense intelligence community on the agency's collective priorities for threat assessments and work with the defense intelligence community to determine if additional resources are needed to support the agency's threat assessment needs. (Recommendation 1)

The Director, MDA should fully engage the defense intelligence community on key threat-related missile defense acquisition processes and decisions, including providing insight into and obtaining input from the defense intelligence community on the threat space MDA establishes for the BMDS and the threat parameters and threat models MDA assigns to BMDS elements as design requirements and test cases. (Recommendation 2)

The Secretary of Defense should require the Director, MDA and the Director, DIA to coordinate on establishing a process for MDA to obtain validation of its threat models. (Recommendation 3)

Agency Comments and Our Evaluation

DOD provided written comments in response to the classified version of this report (GAO-19-92C), indicating that the department concurred with all three of our recommendations. An edited version of DOD's comments is reprinted in appendix II as some information had to be omitted due to classification. In addition, the summarized version of DOD's comments below is reflective of the content in the classified version. DOD provided us with technical comments and a significant amount of new information in response to the classified version of this report. We incorporated this information into our report, as appropriate, but the new information did not substantively change our findings and did not alter our recommendations.
Although DOD concurred with our third recommendation, DOD also raised concerns about statements in our report related to our third recommendation that the department believes are inaccurate. We do not believe DOD’s concerns are warranted because our findings are based on evidence we obtained during our review—evidence that we believe is sufficient and appropriate and provides a reasonable basis for our findings and conclusions. We address this in further detail below. DOD concurred with our first recommendation that the Director, MDA should coordinate with the defense intelligence community on the agency’s collective priorities for threat assessments and determine whether additional resources are needed. In its response, DOD stated that MDA will continue to follow established processes to identify threat assessment needs and to determine if additional resources are required. However, our review found that these established processes—prioritizing exclusively through distinct, individual threat assessment lanes—have not proven entirely effective. In addition, although MDA has participated in the department’s intelligence mission data review process since 2016, the agency has yet to provide the defense intelligence community with additional resources to address known funding and manpower shortages. Moreover, this review process is limited to intelligence mission data and does not cover all of the other types of threat assessments that MDA needs. As such, we maintain that MDA should take additional steps beyond continuing existing processes to address the challenges MDA currently faces in obtaining the threat assessments it needs, when it needs them. 
DOD also concurred with our second recommendation that the Director, MDA should provide insight into and obtain input from the defense intelligence community on the threat space MDA establishes for the BMDS and the threat parameters and threat models the agency assigns to BMDS elements as design requirements and test cases. DOD stated in its response that MDA has and will continue to fully engage the defense intelligence community on key threat-related missile defense acquisition processes and decisions. The efforts MDA has recently undertaken to expand its outreach to the defense intelligence community are positive steps. However, we have yet to see MDA provide the defense intelligence community with further insight into or input on the threat space the agency has established for the BMDS or the assignment of threat models and threat parameters to BMDS elements. We will continue to monitor MDA’s ongoing efforts to see whether it takes this next step toward more fully engaging the defense intelligence community. DOD concurred with our third recommendation that the Secretary of Defense should require the MDA and DIA Directors to coordinate on establishing a process for MDA to obtain validation of its threat models. In its response, DOD stated that the department will re-examine the most cost-effective approach to meet the intent of DIA validation to support development and fielding of effective BMDS elements. More specifically, DOD stated that MDA and the DIBMAC are currently having extensive discussions regarding how the defense intelligence community can best support MDA’s threat modeling requirements. As noted in our report, the discussions MDA has had with the defense intelligence community over the course of 2018 demonstrate that the department is beginning to consider substantive measures to address the long-standing issue of MDA not using DIA-validated threat models. 
However, MDA and defense intelligence community officials have also cautioned that obstacles remain and that alternative solutions may need to be explored. We will continue to monitor these ongoing discussions and any results that emerge. DOD also stated in its response that it was concerned that statements in our report pertaining to our third recommendation imply that MDA has not coordinated with DIA on validating its threat models and that our report could be interpreted as saying MDA does not internally conduct threat model validation. To be clear, our review did, in fact, find that, until recently, MDA did not sufficiently coordinate with DIA on establishing a process for creating valid threat models for use in MDA simulations. Furthermore, we explain in our report that MDA was told that the defense intelligence community can validate MDA's threat models if it has sufficient insight into how MDA builds its models—insight which MDA officials previously told us was unnecessary. Additionally, although MDA may internally validate its threat models for each ground test, the BMDS OTA was not able to certify many of those threat models, in part, because some models could not be traced back to the defense intelligence community's threat assessments. We therefore excluded MDA's internal threat model validation process from our report, as it is not a comparable substitute for DIA threat model validation. We are sending copies of this report to the appropriate congressional committees and the Secretary of Defense. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or chaplainc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.
Appendix I: Defense Intelligence Components Responsible for Assessing Foreign Ballistic Missiles

In its entirety, the intelligence community is a federation of 17 agencies and organizations that span the executive branch of the U.S. government. The defense intelligence components responsible for assessing foreign ballistic missile threats are headed by the Defense Intelligence Agency and overseen and coordinated by the Defense Intelligence Ballistic Missile Analysis Committee. Table 4 below identifies each component and its respective focus areas.

Appendix II: Comments from the Department of Defense

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, LaTonya Miller (Assistant Director), Rose Brister, Lori Fields, Laura Greifner, Kurt Gurka, Helena Johnson, Kevin O'Neill, Jay Tallon, Brian Tittle, Hai Tran, Alyssa Weir, and Robin Wilson made key contributions to this report.
Why GAO Did This Study

MDA is developing missile defense capabilities to defend the United States, deployed forces, and regional allies from missile attacks. However, missile threats continue to emerge, as adversaries continue to improve and expand their missile capabilities. The National Defense Authorization Act for Fiscal Year 2012 included a provision that GAO annually assess and report on the extent to which MDA has achieved its acquisition goals and objectives, and include any other findings and recommendations. This report is a public version of a classified report GAO issued in May 2019, which addresses (1) the challenges MDA and the defense intelligence community face in meeting the agency's threat assessment needs and (2) the extent to which MDA engages the defense intelligence community on missile defense acquisitions. GAO reviewed MDA's threat-related acquisition processes and interviewed relevant officials from the defense intelligence community, MDA, test community, and warfighters. Information deemed classified by DOD has been omitted.

What GAO Found

The Missile Defense Agency (MDA) is experiencing delays getting the threat assessments needed to inform its acquisition decisions. Officials from the defense intelligence community—intelligence organizations within the Department of Defense (DOD)—told GAO this is because they are currently overextended due to an increased demand for threat assessments from a recent upsurge in threat missile activity, as well as uncertainties related to their transition to new threat processes and products. The delays are exacerbated because MDA does not collectively prioritize the various types of threat assessment requests submitted to the defense intelligence community or provide resources for unique requests, as other major defense acquisition programs are generally required to do.
Without timely threat assessments, MDA risks making acquisition decisions for weapon systems using irrelevant or outdated threat information, which could result in performance shortfalls. MDA has increased its outreach to the defense intelligence community over the past few years, but opportunities remain for further engagement on key threat-related processes and decisions. Specifically, MDA provides the defense intelligence community with limited insight into how the agency uses threat assessments to inform its acquisition decisions. MDA is not required to obtain the defense intelligence community's input, and instead has discretion on the extent to which it engages the defense intelligence community. However, the defense intelligence community is uniquely positioned to assist MDA and its involvement is crucial for helping MDA keep pace with rapidly emerging threats. Moreover, this limited insight has, in part, prevented the defense intelligence community from validating the threat models MDA builds to test the performance of its weapon systems. Without validation, any flaws or bias in the threat models may go undetected, which can have significant implications on the performance of MDA's weapon systems. MDA and the defense intelligence community recently began discussing a more suitable level of involvement in the agency's acquisition processes and decisions.

What GAO Recommends

GAO is making three recommendations to improve how MDA: prioritizes and resources its threat assessment needs; obtains input from the defense intelligence community on key threat-related processes and decisions for missile defense acquisitions; and validates its threat models. DOD concurred with all three recommendations, citing actions it is already taking. While DOD has taken some positive steps, GAO believes more action is warranted.
Background

Land Management Agency Law Enforcement Divisions

Federal land management agencies have law enforcement divisions that protect their employees and secure their facilities across nearly 700 million acres of federal lands (see fig. 1). To do so, the four agencies' law enforcement divisions employ uniformed law enforcement officers who patrol federal lands, respond to illegal activities, conduct routine investigations, and, depending on the agency, may also provide expertise in assessing facilities' security. Each agency also maintains a law enforcement data system in which law enforcement officers record and track incidents of suspected illegal activity on federal lands. These systems can be used in conducting investigations, identifying trends in crime data, and assisting with decision making regarding staffing, resource allocations, and budgetary needs. BLM. BLM's Office of Law Enforcement and Security is charged with promoting the safety and security of employees and visitors, as well as environmental protection, across approximately 245 million acres of BLM lands in 12 states. At the end of fiscal year 2018, BLM had 194 field law enforcement officers engaged in such duties. According to agency documentation, these law enforcement officers also coordinate with state agencies and county law enforcement officers on large-scale recreational events, such as Burning Man. These field law enforcement officers may also be tasked with conducting facility security assessments. FWS. FWS's division of Refuge Law Enforcement helps ensure the safety and security of visitors, employees, government property, and wildlife and their habitats on approximately 150 million acres of land. At the end of fiscal year 2018, FWS had 231 field law enforcement officers on the agency's 567 wildlife refuges.
According to agency documents, FWS law enforcement officers serve as ambassadors by providing important services to the public beyond law enforcement, such as providing visitors with information and guidance regarding fishing, hunting, hiking, and wildlife viewing opportunities. These field law enforcement officers may also be tasked with conducting facility security assessments. Forest Service. The Forest Service's Law Enforcement and Investigations division is charged with protecting natural resources, employees, and visitors on approximately 193 million acres of National Forest System lands in 44 states. At the end of fiscal year 2018, the Forest Service had 417 field law enforcement officers. Additionally, law enforcement officers may be tasked with conducting facility security assessments. Park Service. The Park Service's division of Law Enforcement, Security, and Emergency Services is charged with protecting resources, managing public use, and promoting public safety and visitor enjoyment across the agency's 85 million acres, 418 park units, 23 national scenic and national historic trails, and 60 wild and scenic rivers. At the end of fiscal year 2018, the Park Service had 1,329 field law enforcement officers stationed at 240 of the Park Service's units. Field law enforcement officers may also be tasked with conducting facility security assessments.

ISC's Facility Security Assessment Requirements

The ISC Standard applies to all facilities in the United States occupied by federal employees for nonmilitary purposes, including federal land management agencies' facilities. This includes existing facilities, new construction, or major modernizations; facilities owned, to be purchased, or leased; stand-alone facilities; special-use facilities; and facilities on federal campuses. Among other things, the ISC Standard requires agencies to assess the risks faced by each of their facilities.
According to Department of Homeland Security officials, since 2010, executive departments and agencies responsible for protecting their own facilities have been required to conduct facility security risk assessments as part of the ISC Standard's risk management process. The ISC Standard states that risk is a measure of potential harm from an undesirable event that encompasses threat, vulnerability, and consequence. The ISC Standard then defines these terms as follows:

Undesirable event: An incident, such as vandalism, active shooters, and explosive devices, that has an adverse impact on the facility occupants or visitors, operation of the facility, or mission of the agency.

Threat: The intention and capability of an adversary to initiate an undesirable event.

Vulnerability: A weakness in the design or operation of a facility that an adversary can exploit.

Consequence: The level, duration, and nature of the loss resulting from an undesirable event.

Based on the assessed level of risk, the ISC Standard provides a method for agencies to identify which countermeasures, such as security cameras or security gates, should be implemented to protect the facility against each of the undesirable events. According to the ISC Standard, once an initial assessment is completed, facility security reassessments should be conducted at least once every 3 to 5 years, depending on the facility's security level, to reassess whether existing countermeasures remain adequate for mitigating risks. Beginning in fiscal year 2020, the ISC will require departments and agencies to report their compliance with the requirement to conduct facility security assessments on occupied facilities. Figure 2 shows the steps of the ISC Risk Management Process, and figure 3 shows some examples of facility countermeasures. Because facility security assessments are a key component of the ISC's risk management framework, the ISC Standard includes requirements for agencies' risk assessment methodologies.
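To make the rating-and-combination arithmetic concrete, the sketch below assumes each of the three risk factors is rated from 1 (very low) to 5 (very high) and the ratings are multiplied into an overall risk estimate per undesirable event, as in this report's hypothetical example. The event names, ratings, and function names are illustrative assumptions only, not actual ISC data or the ISC Standard's prescribed methodology:

```python
# Hypothetical sketch of facility risk scoring: each factor (threat,
# vulnerability, consequence) is rated 1 (very low) through 5 (very high),
# and the three ratings are multiplied into an overall risk estimate for
# each undesirable event. All events and ratings here are illustrative.

def risk_score(threat: int, vulnerability: int, consequence: int) -> int:
    """Combine the three 1-5 factor ratings into an overall risk estimate."""
    for rating in (threat, vulnerability, consequence):
        if not 1 <= rating <= 5:
            raise ValueError("each factor must be rated 1 (very low) to 5 (very high)")
    return threat * vulnerability * consequence

def rank_events(assessments):
    """Sort undesirable events from highest to lowest overall risk."""
    return sorted(
        ((event, risk_score(*ratings)) for event, ratings in assessments.items()),
        key=lambda item: item[1],
        reverse=True,
    )

# Illustrative assessment: event -> (threat, vulnerability, consequence).
facility = {
    "vandalism": (3, 4, 2),
    "active shooter": (2, 3, 5),
    "explosive device": (1, 2, 5),
}
for event, score in rank_events(facility):
    print(f"{event}: {score}")  # highest-risk events print first
```

Ranking events by the resulting score is one way an agency could decide where to direct countermeasure resources first; an actual ISC assessment involves additional defined criteria beyond this sketch.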
Specifically, among other things, the ISC Standard requires that agencies use facility security assessment methodologies that (1) consider all 33 of the undesirable events identified in the ISC Standard, and (2) evaluate the three factors of risk (threat, vulnerability, and consequence) for each undesirable event. During facility security assessments, ratings are assigned to the threat, vulnerability, and consequence of an undesirable event, and the combined ratings produce an overall measurement of risk. In our hypothetical facility security assessment example shown in figure 4, each component of risk is assigned a rating of between 1 (very low) and 5 (very high) based on the facility's conditions. These ratings are then multiplied to produce an overall estimate of risk for each undesirable event. Agencies can use this and other information resulting from a facility security assessment to make security-related decisions and direct resources to implement countermeasures to address unmitigated risk.

Available Data Show a Range of Threats and Assaults against Land Management Agency Employees, but Not All Incidents are Captured in the Data

Available federal law enforcement data show a range of threats and assaults against the four federal land management agencies' employees in fiscal years 2013 through 2017. For example, incidents ranged from threats conveyed by telephone to attempted murder against federal land management agency employees. Additionally, FBI data on its investigations into potential domestic terror threats to land management agencies show a wide variety of statutes and regulations that may have been violated. However, not all incidents are captured in the federal land management agencies' data because not all incidents are reported to the agencies' law enforcement officials. Additionally, some incidents are investigated by state or local law enforcement and recorded in their data systems rather than in land management agencies' systems.
As a result, the number of actual threats and assaults is unclear and may be higher than what is represented in available data. Our analysis of data from each of the four land management agencies and the FBI showed the following: BLM. BLM data for fiscal years 2013 through 2017 included 88 incidents of threats and assaults against BLM employees and cited eight different statutes or regulations. A federal law prohibiting people from assaulting, resisting, or impeding certain federal officers or employees, 18 U.S.C. § 111, was the statute most frequently cited in BLM’s data. Examples of incidents that identified this statute include an individual harassing a BLM law enforcement officer by repeatedly swerving and cutting off the officer on the highway, an individual making threats against a BLM employee on Facebook and YouTube, and an incident during which an employee was stabbed outside a federal building. Twenty-one of the 88 incidents occurred in fiscal year 2013, when BLM categorized incidents using uniform crime reporting codes rather than federal statutes, regulations, or state laws. These incidents include, for example, an incident in which an individual attempted to murder a law enforcement officer with a firearm. Table 1 provides additional information on threats and assaults against BLM employees for fiscal years 2013 through 2017. FWS. FWS data for fiscal years 2013 through 2017 included 66 incidents of threats and assaults against FWS employees and cited nine different statutes and regulations. A federal law prohibiting people from assaulting, resisting, or impeding certain federal officers or employees, 18 U.S.C. § 111, was the statute most frequently cited in FWS’s data and included a variety of incidents, such as a law enforcement officer who was assaulted with a tree branch during a suspected drug trafficking incident at the border. 
According to FWS officials, when law enforcement officers cite violations of state statutes, they enter the violation into the law enforcement data system under a generic description such as “Assault: simple, on officer,” and then manually enter the relevant state statute. Of the total FWS incidents, 26 were recorded under unspecified state statutes. These incidents included, for example, an officer who was assaulted while arresting an individual driving under the influence and an officer who received a death threat during an arrest. Table 2 provides additional information on threats and assaults against FWS employees for fiscal years 2013 through 2017. Forest Service. Forest Service data for fiscal years 2013 through 2017 included 177 incidents of threats and assaults against Forest Service employees and cited seven different statutes or regulations. Officials said that the data provided to us generally included only the most serious offense that occurred during an incident, due to limitations on linking records in Forest Service’s data system. For example, if both a verbal threat and physical assault occurred during an incident, only the physical assault would be included in the data. Therefore, potential violations of some statutes or regulations that occurred during incidents of threats and assaults may not be recorded in the data. About half of the Forest Service incidents involved potential violations of 36 C.F.R. § 261.3(a), which includes interfering with a forest officer, among other things. Such incidents included: an individual telling a Forest Service employee that his dog would “rip her head off” if she approached his camp; threatening graffiti written on a law enforcement officer’s personal residence; and a death threat to a law enforcement officer. Table 3 provides additional information on threats and assaults against Forest Service employees for fiscal years 2013 through 2017. Park Service. 
Park Service data for fiscal years 2013 through 2017 included 29 incidents of threats and assaults against Park Service employees and cited six different offense descriptions. According to a Park Service official, some incident records cite a statute or regulation. However, all agency incident records include offense codes that are unique to the Park Service and are associated with the type of violation, such as assault or disorderly conduct. Unlike with statutes and regulations, a perpetrator does not need to be identified for the law enforcement officer to cite an offense code. Three of the six Park Service offense codes relate to assault. Incidents that cited these codes included an individual ramming an employee’s patrol vehicle and a death threat left on an employee’s personal cell phone. Table 4 provides additional information on threats and assaults against Park Service employees for fiscal years 2013 through 2017. FBI. FBI data for fiscal years 2013 through 2017 show that the FBI initiated under 100 domestic terrorism investigations into potential threats to federal land management agencies, and that these investigations most frequently cited eight specific statutes. Investigations can either be initiated by the FBI or referred to the FBI by land management agencies. Land management agency officials said they refer only the most serious incidents to the FBI—such as the armed occupation of Malheur National Wildlife Refuge. The FBI receives information from a variety of sources, including from confidential human sources; public tips; and state, local, tribal, and federal partners. According to FBI officials, an investigation into a domestic terrorism threat may only be initiated if there is information indicating potential violent criminal activity committed in furtherance of ideology. 
Our analysis of FBI data showed that the majority of the domestic terrorism investigations involved BLM, and the majority involved individuals motivated by anti-government ideologies. Most of the domestic terrorism investigations cited more than one statute or regulation as having been potentially violated, and the severity of the threat varied. For example, some investigations involved written threats and threats conveyed by telephone to government officials. In one example, the investigation involved a subject posting a BLM law enforcement officer's personal information on Twitter, which resulted in over 500 harassing phone calls and several death threats. Table 5 provides information on the percentage of FBI investigations citing various statutes and regulations related to threats to federal land management agencies for fiscal years 2013 through 2017.

Employees do not always report incidents of threats. According to officials at all four agencies, employees do not always report threats to agency law enforcement. For example, some field unit employees said that in certain circumstances, they consider receiving threats a normal part of their job. Specifically, field unit employees we interviewed at three land management agencies cited incidents in which they were yelled at, for example, by hunters, permittees, or attendees of public planning meetings. While this behavior may be threatening, some employees told us it was "a part of the job," and they did not report such incidents. In addition, some officials described being threatened while off-duty, such as by being harassed in local stores or being monitored at their home, which officials said in some cases they did not report because it was a common occurrence. Additionally, according to agency officials, threats are subject to interpretation, so employees may be reluctant to report an incident unless it involves an explicit threat of physical harm or death.
During an incident, some threats and assaults may not be recorded in agency data systems by agency law enforcement officers. BLM and Forest Service officials told us that when a single incident involves multiple offenses, the less serious offenses are unlikely to be recorded in the data system. Therefore, the entirety of what occurred during the incident may not be captured in the data system. For example, according to one BLM official we interviewed, if an incident involved a verbal threat and a physical assault, it would likely be recorded into the data system as an assault.

Some officials also told us there were trucks regularly parked outside their homes, with individuals holding anti-government beliefs, who appeared to be monitoring them and their families. One official stated that "They were holding us hostage in our own homes."

Some incidents are investigated by state or local law enforcement and recorded in their data systems, rather than in land management agencies' systems. Some incidents of threats and assaults to federal employees may be investigated by state or local law enforcement entities. Specifically, during our site visits, officials from all four land management agencies stated that their employees are instructed to call 911 in the case of an emergency, such as a threat or assault, and that, generally, a local law enforcement officer—such as a county sheriff's deputy—will respond to the call. Land management agency officials said that when state or local law enforcement respond to an incident, even those that occur on federal lands, the incident would be recorded in those entities' data systems and may not be entered into the land management agency's law enforcement data system.
Additionally, according to agency officials at all four land management agencies, due to resource constraints, many of their field units do not have any law enforcement officers or have a limited law enforcement presence, which limits the agencies' ability to respond to and therefore record incidents of threats and assaults. For example, according to agency officials, as of October 2018, 178 of 418 Park Service units had no law enforcement presence. Furthermore, even when field units had dedicated law enforcement officers, the officers might not have been available to immediately respond to incidents, so employees might instead have contacted local law enforcement. Given these reasons, the actual number of incidents of threats and assaults is unclear and may be greater than the number reported and entered in the land management agencies' law enforcement data systems, according to federal land management agency officials.

Land Management Agencies Use Various Approaches to Protect Employees, but Several Factors May Affect Their Ability to Do So

Agencies Use Various Approaches to Protect Employees, Including Building Relationships with External Law Enforcement Entities and the Public

Federal land management agencies use various approaches to protect their employees from threats and assaults, including building relationships with external law enforcement entities and the public; receiving, collecting, and disseminating intelligence; and offering training to agency employees. Agency officials we interviewed cited four factors that can affect their ability to protect employees, including that employees often work in remote locations. Federal land management agencies use various approaches to protect their employees from threats and assaults. Specifically:

Agencies deploy their law enforcement officers to protect employees and resources.
All four federal land management agencies have their own law enforcement divisions with law enforcement officers who are tasked with protecting employees and resources in the field. According to agency officials we interviewed, where available, agency law enforcement officers respond to incidents, including threats and assaults against employees. When necessary, agencies also deploy additional law enforcement officers to assist local officers. For example, during the armed occupation of the Malheur National Wildlife Refuge, FWS officials said the agency deployed FWS law enforcement officers from around the country to field units in western states to provide additional security for FWS employees. Similarly, according to BLM documents, BLM officers are sometimes deployed from their home field units for various reasons, such as assisting with large-scale recreational events and supporting fire investigations and natural disaster recovery.

Agencies build relationships with local, state, and other federal agency law enforcement entities, as well as the public. Federal land management agencies build relationships with local, state, and other federal agency law enforcement entities to help protect employees and resources in the field and to assist with coordinating law enforcement responses, according to agency officials. These officials said such relationships are important because not all field units have a law enforcement officer, and those that do often rely on local law enforcement for assistance with incidents of threats or assaults against agency employees. For example, officials at one field unit in Nevada stated that during a high-profile court case involving the agency, the Las Vegas Metropolitan Police Department kept a patrol car outside the field unit for several days to help ensure the safety of the field unit's employees.
Agency field officials said that building relationships with the public—both visitors and local citizens—can help keep their employees safe by cultivating trust and reducing potential tension over federal land management practices. For example, officials at one field unit drafted talking points for employees in the event that visitors asked them about a high-profile incident of anti-government behavior directed at a federal land management agency. The talking points outlined the agency's responsibilities and authorities and, according to agency officials, were aimed at dispelling misunderstandings about federal land management policies. Additionally, officials at several field units we visited stated that their law enforcement officers are focused on educating, rather than policing, visitors.

Agencies receive, collect, and disseminate intelligence information. To varying degrees, federal land management agencies receive, collect, and disseminate intelligence information, which helps them anticipate, prepare for, and react to threats against employees and facilities. For example, officials we interviewed from all four agencies said that they receive intelligence information from various sources, including Interior's Office of Law Enforcement and Security, the Department of Homeland Security, FBI, Federal Protective Service, and Joint Terrorism Task Forces. Additionally, after the armed occupation of Malheur National Wildlife Refuge, FWS created a new risk and threat assessment coordination unit to collect intelligence, inform decision-making, and improve coordination with other Interior bureaus. Agency officials said they disseminate intelligence information about potential threats to their field units so that field personnel can respond appropriately to the threat—including encouraging employees to telework, directing employees to temporarily stop field work, or temporarily closing their field unit.
Agencies have developed plans and guidance to promote employee safety. Agency officials have developed a variety of written plans and guidance to promote employee safety. For example, agencies are required to develop occupant emergency plans for most occupied facilities. Occupant emergency plans we obtained covered employee safety, including what to do in the event of a bomb threat or active shooter event. Additionally, some field units developed other documents that outlined actions employees are to take to remain safe, such as plans to address critical incidents or protests at their field unit.

Agencies offer various types of safety training. All four federal land management agencies offer a variety of training to help protect employees and promote their safety, according to agency documents and officials. Examples of topics addressed in agencies' training include understanding anti-government ideologies, communicating techniques for de-escalating conflicts, and responding to an active shooter event.

Several Factors Can Affect Land Management Agencies' Efforts to Protect Their Employees from Threats and Assaults

Agency officials cited four factors that can affect agencies' efforts to protect their employees:

Agency employees work with the public and are often easily recognizable. Agency officials said their employees are required to interact with the public as part of their official duties, which can put them at risk of being threatened or assaulted. FWS officials said they temporarily closed field units in an adjacent state during the beginning of the armed occupation of the Malheur National Wildlife Refuge to reduce the likelihood that their employees would interact with members of the public who were traveling to Malheur to participate in the occupation.
FWS and Park Service officials stated that their employees are easily recognizable because they typically wear uniforms, which may put them at greater risk of being harassed or threatened by individuals who hold anti-government beliefs. (See figure 5 for examples of uniforms.) In response, on certain occasions, some agency officials direct their employees to wear street clothes instead of their uniforms. Officials we interviewed indicated that whenever they are concerned about a potential safety issue at their field unit, such as a protest, they may encourage eligible employees to telework from home instead of reporting to their work station.

Employees often work in remote locations to fulfill agency missions. Agency officials stated that it can be difficult to protect employees because, as part of their field work, employees may be dispersed across hundreds of miles of federal lands and may be located hours or days away from the nearest agency law enforcement officer. (See figure 6 for an example of a remote location.) As a result, some agency officials said they sometimes direct employees to postpone fieldwork if there is a known or anticipated risk of threats or assaults. In addition, according to officials, various field units have developed check-in and check-out procedures to keep track of employees when they are in the field and to help verify that they report back to the office after concluding their fieldwork. Additionally, some field units have purchased satellite communication devices that operate when cell or radio signals are not available, so that employees conducting remote field work can call for help if needed.

The number of agency field law enforcement officers has declined. As of the end of fiscal year 2018, the overall number of field law enforcement officers at each of the four land management agencies had declined from fiscal year 2013, which agency officials noted as a factor straining their efforts to protect employees.
For example, the Park Service had the smallest decrease, at 7 percent, whereas the Forest Service had the greatest decrease, at 22 percent. (See table 6.) Figure 7 shows the total number of acres for which federal land management agencies are responsible, the number of field law enforcement officers they had as of the end of fiscal year 2018, and the ratio of officers to acres of federal land. In addition, field officials from the three Interior agencies stated that as a result of various requirements to send law enforcement officers to support border protection efforts, their law enforcement officers are occasionally absent from their field units when deployed 14 days or more to the border. To help address the effects of border deployments, some agency officials told us that they seek opportunities to share law enforcement resources among field units and with other land management agencies and that they typically deploy law enforcement officers from field offices across the agency to minimize the effects on any one unit.

Anti-government sentiment can be unpredictable, difficult to respond to, and disruptive. Agency officials we interviewed said that the risk to employee safety posed by individuals holding anti-government sentiments can be unpredictable and that incidents of threats and assaults against employees by such individuals are generally sporadic. For example, BLM, FWS, and Forest Service officials said it would have been difficult to predict that armed individuals would occupy FWS's Malheur National Wildlife Refuge, since they were protesting BLM actions. BLM and FWS agency officials said they believed that the occupiers chose Malheur National Wildlife Refuge because it was an easier target.
In addition, some agency field unit officials told us that incidents of threats and assaults from individuals holding anti-government beliefs generally occur when agency personnel are conducting normal operating activities, such as during routine traffic stops or when they are collecting park entrance fees, making them difficult to predict. Officials from one field unit also noted that while their agency wants to ensure employee safety, it is contrary to their mission to close a field unit every time there is a potential anti-government threat—such as threats made on social media. However, during the armed occupation of the Malheur National Wildlife Refuge, refuges in an adjacent state were closed out of caution, and FWS employees turned away visitors who had driven hundreds of miles to view wildlife, according to FWS officials. To help address the potential disruption posed by unpredictable anti-government threats, some agencies and field units developed plans and guidance that prescribed various actions field units and their employees could take to help ensure employees' safety while also counteracting the disruptive effects of threats and attacks on a facility's operations.

Land Management Agencies Have Not Met Certain Facility Security Assessment Requirements

The four federal land management agencies have completed some but not all of the facility security assessments on their occupied federal facilities as required by the ISC Standard, and three do not have a plan for doing so. Furthermore, the Forest Service has a facility security assessment methodology that complies with key requirements described in the ISC Standard, but BLM, FWS, and the Park Service do not.
The Four Land Management Agencies Have Not Completed All Facility Security Assessments, and Three of the Four Agencies Do Not Have Plans for Doing So

The ISC Standard requires that agencies complete facility security assessments on all occupied facilities and suggests that agencies establish annual objectives for conducting assessments. As suggested in the ISC Standard, to do so, agencies may need to consider several things, such as: the number and locations of needed facility security assessments, by establishing which facilities in the agency's inventory are occupied and grouping them into campuses, if desired; the agency's organizational structure, to determine entities responsible for conducting the assessments; training needs of entities responsible for conducting the assessments; which facilities or campuses should be prioritized for assessments; and a schedule for completing the assessments, given the agency's available resources and priorities.

The four land management agencies have not completed facility security assessments on all occupied facilities, and agency officials cited various reasons for not doing so. FWS has a plan to complete its assessments, but BLM, the Forest Service, and the Park Service do not. Specifically:

FWS. FWS has conducted five facility security assessments on its approximately 465 occupied facilities and has a plan for completing the remaining assessments. According to FWS headquarters officials, FWS employees have limited physical security expertise to conduct facility security assessments; therefore, the agency has developed a plan to meet the ISC Standard's requirement using contractors. Specifically, in May 2019, FWS hired a project manager to implement a new facility security assessment program and, according to agency documentation, the new program will, among other things, employ contracted assessors to conduct facility security assessments agency-wide.
Agency officials said FWS will hire the assessors after the project manager and other agency officials complete preliminary tasks such as developing ISC-compliant policies and procedures, establishing the number and locations of facility security assessments needed, and developing an electronic tracking system for the assessors to use while conducting assessments. Once these tasks are completed—which could take up to 1 year, according to officials—FWS is to develop a schedule for assessors to complete the remaining assessments. BLM. BLM has conducted 21 facility security assessments on its approximately 280 occupied facilities, but officials do not know when they will complete the remaining assessments and do not have a plan to do so. BLM headquarters officials we interviewed said that the agency is decentralized and its state offices are responsible for the security of facilities in their states, including scheduling and conducting facility security assessments. However, some BLM state and field officials we interviewed said they do not have the resources or expertise to conduct the assessments, and BLM does not offer relevant training. In June 2019, the agency issued a hiring announcement for a headquarters-level security manager. According to officials, once hired, the security manager is to establish training for field employees to conduct facility security assessments and monitor state offices’ compliance with the requirement to conduct assessments. Headquarters officials noted that state offices will remain responsible for scheduling and conducting their own assessments. However, as of June 2019, the agency had not developed a plan for how the security manager would implement agency-wide training given available resources, or ensure state offices’ compliance with the requirement to conduct assessments. Forest Service. 
The Forest Service has conducted at least 135 facility security assessments on its approximately 1,135 occupied facilities, but officials do not know when they will complete the remaining assessments and do not have a plan for doing so. Forest Service headquarters officials we interviewed said that the agency is decentralized and its regional offices are responsible for the security of facilities in their regions, including scheduling and conducting facility security assessments. However, some regional officials we interviewed said they do not have resources or sufficient staff expertise to conduct the assessments. Forest Service headquarters officials stated that they have partnered with the U.S. Department of Agriculture’s Office of Homeland Security to offer facility security assessment training to Forest Service regional employees. Additionally, Forest Service headquarters officials stated that with the assistance of the U.S. Department of Agriculture’s Office of Homeland Security, they were restructuring their physical security program. Under the new structure, headquarters will oversee compliance at a national level and each region will have a team responsible for facility security assessments in their region, which agency officials said will establish lines of authority to account for the agency’s decentralized structure. However, the Forest Service headquarters official responsible for leading this effort said that, due in part to staff turnover, restructuring the physical security program has been difficult. As of June 2019, the Forest Service does not have a documented plan for how the restructured program will operate, how to ensure sufficient staff are trained to complete the assessments given available resources, or how and when regions will complete all of their assessments. Park Service. 
The Park Service has conducted at least 148 facility security assessments on its approximately 1,505 occupied facilities, but officials do not know when they will complete the remaining assessments and do not have a plan to do so. Park Service headquarters officials we interviewed said that the agency is decentralized and the superintendents of its 418 park units are responsible for the security of facilities within their parks, including scheduling and conducting facility security assessments. However, some park unit officials we interviewed said they do not have the resources or sufficient staff with expertise to conduct the assessments. Park Service headquarters officials stated that they have developed a program to offer facility security assessment training to park employees. In February 2019, according to agency officials, the Park Service hired a security manager who will standardize the agency’s facility security assessment practices, expand facility security assessment training opportunities, and monitor parks’ compliance with the requirement to conduct assessments. Headquarters officials noted that park units will remain responsible for scheduling and conducting their own assessments. However, as of June 2019, the agency had not developed a documented plan for how to ensure sufficient staff are trained to complete the assessments given available resources, or how the security manager would ensure park units’ compliance with the requirement to conduct assessments. Not complying with the ISC Standard’s requirement to complete facility security assessments on all occupied facilities could leave federal agencies exposed to risks in protecting their employees and facilities. Specifically, without conducting all of the required assessments, agencies may not identify the degree to which undesirable events can impact their facilities or identify the countermeasures they could implement to mitigate the risks of those events. 
Officials from BLM, the Forest Service, and the Park Service acknowledged that completing the remaining facility security assessments is important and that developing an agency-wide plan to do so may help them as they work towards compliance with this ISC Standard requirement. In the process of developing their plans, the agencies could take into consideration their organizational structure, available resources, and training needs, all of which may affect how quickly they can complete their assessments. Furthermore, developing a plan for completing facility security assessments will require agencies to identify the number and locations of their required assessments, which may help them fulfill the fiscal year 2020 ISC compliance reporting requirement.

BLM, FWS, and the Park Service Do Not Have Facility Security Assessment Methodologies that Fully Comply with Two Key Requirements in the ISC Standard

Three of the four federal land management agencies have not developed a facility security assessment methodology that complies with two key requirements in the ISC Standard. Specifically, according to the ISC Standard, methodologies must, among other things, (1) consider all 33 of the undesirable events identified in the Standard, such as active shooters, vandalism, and explosive devices; and (2) evaluate the three factors of risk—threat, vulnerability, and consequence—for each undesirable event. According to our analysis of agency documentation and interviews with agency officials, the extent to which each agency's facility security assessment methodology complied with the two key ISC Standard requirements we evaluated varied. As of June 2019, the Forest Service's facility security assessment methodology met the two key ISC Standard requirements we evaluated, and the Park Service's methodology partially met the requirements. BLM and FWS did not have established facility security assessment methodologies as of June 2019. Specifically:

Forest Service.
The Forest Service utilizes an ISC-compliant facility security assessment methodology developed by the U.S. Department of Agriculture. The methodology adheres to the two key ISC Standard requirements that we evaluated. Park Service. The Park Service developed a risk assessment methodology, but it only partially adheres to the two key ISC Standard requirements we evaluated. Specifically, the Park Service’s risk assessment methodology does not include a step to assess the consequences of specific undesirable events, as required by the ISC Standard. Park Service officials indicated the agency’s commitment to conducting facility security assessments using an ISC-compliant methodology and said that they plan to submit the Park Service’s risk assessment methodology to the ISC to be certified as compliant with requirements in the ISC Standard. A Park Service official acknowledged, however, that the agency needs to update its methodology to include a step to assess the consequences of specific undesirable events, and the official stated that the agency does not plan to submit the methodology to the ISC until those changes have been made. As of June 2019, officials did not have a timeframe for doing so. BLM. BLM officials said that, as of June 2019, the agency did not have an established methodology for conducting facility security assessments. Officials told us that, once hired, the new BLM security manager will develop an assessment methodology and that the agency intends to employ a methodology that complies with the ISC Standard. However, BLM officials do not know when the security manager will be hired, and the agency has not documented requirements for the security manager to adhere to the ISC Standard’s requirements. FWS. 
FWS officials said that, as of June 2019, the agency did not have an established methodology for conducting facility security assessments. Officials told us that the agency intends to employ a methodology that complies with the ISC Standard and provided a high-level description of what they expect the methodology to include. However, this description did not indicate that the agency would evaluate consequences of specific undesirable events, as required by the ISC Standard. According to FWS officials, because staff do not have the expertise to conduct facility security assessments, in 2011, the agency developed physical security survey checklists as an interim solution for assessing facilities. These checklists allowed staff to document the presence or absence of countermeasures identified in the ISC Standard. However, FWS headquarters officials acknowledged that these checklists were not an ISC-compliant risk assessment methodology since they do not consider undesirable events or measure risk, as required by the ISC Standard. By not using a methodology that fully complies with the ISC Standard, agencies could face adverse effects, such as an inability to make informed resource allocation decisions for their physical security needs and providing facilities—and the facilities' occupants—with an inappropriate or insufficient level of protection. Specifically, according to the ISC Standard, when agencies do not use methodologies that comply with risk assessment requirements in the ISC Standard, facilities may have either less protection than needed, resulting in unmitigated risks, or more protection than needed, resulting in wasted resources.
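The risk-rating arithmetic the ISC Standard requires—rating threat, vulnerability, and consequence for each undesirable event and combining the ratings, as in the hypothetical figure 4 example—can be sketched in a few lines. This is a minimal illustration only, not any agency's actual methodology: the event names and ratings below are invented, and the 1-to-5 scales and multiplication mirror the hypothetical example discussed earlier.

```python
# Hypothetical sketch of ISC-style risk scoring: each undesirable event is
# rated from 1 (very low) to 5 (very high) for threat, vulnerability, and
# consequence, and the three ratings are multiplied into an overall risk score.
# Event names and ratings are invented for illustration.

def risk_score(threat: int, vulnerability: int, consequence: int) -> int:
    """Combine the three risk-factor ratings (each 1-5) into one score."""
    for rating in (threat, vulnerability, consequence):
        if not 1 <= rating <= 5:
            raise ValueError("each rating must be between 1 and 5")
    return threat * vulnerability * consequence

# Two of the 33 undesirable events, with hypothetical ratings for one facility:
events = {
    "active shooter": (2, 4, 5),  # low threat, high vulnerability, very high consequence
    "vandalism": (4, 3, 2),
}
scores = {name: risk_score(*ratings) for name, ratings in events.items()}
# Higher scores indicate more unmitigated risk; an assessor could use them to
# prioritize countermeasures across events.
```

A compliant methodology would of course cover all 33 undesirable events, not two, and would feed the resulting scores into countermeasure and resource-allocation decisions as the ISC Standard describes.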
Conclusions In carrying out their critical missions to manage the resources on over 700 million acres of federal lands, BLM, FWS, Forest Service, and Park Service employees and facilities are often the most visible and vulnerable representatives of the federal government in remote areas, and they have been subject to a range of threats and assaults. One way for these agencies to address the safety risks posed by unpredictable anti-government sentiment or other threats is to follow the ISC Standard requirements for conducting facility security assessments. However, BLM, FWS, the Forest Service, and the Park Service have not conducted all required facility security assessments, and BLM, the Forest Service, and the Park Service do not have a plan for doing so. Agency officials stated that this is due, in part, to decentralized organizational structures, limited available resources, and insufficient training. Without a plan for conducting all of the remaining assessments, agencies may not identify the degree to which undesirable events can affect their facilities or identify countermeasures they could implement to mitigate the risks of those events. In addition, as of June 2019, BLM, FWS, and the Park Service did not have facility security assessment methodologies that fully comply with two key requirements in the ISC Standard—namely, to consider the 33 undesirable events identified in the Standard and to evaluate risk factors for each of these events. Without using a methodology that complies with the ISC Standard, the agencies could face adverse effects, including an inability to make informed resource allocation decisions for their physical security needs and providing facilities—and the facilities' occupants—with an inappropriate or insufficient level of protection. Recommendations for Executive Action We are making a total of six recommendations: two to BLM, one to FWS, one to the Forest Service, and two to the Park Service. 
Specifically:

The Director of BLM should develop a plan to conduct all required facility security assessments agency-wide, taking into consideration the agency's organizational structure, available resources, and training needs. (Recommendation 1)

The Chief of the Forest Service should develop a plan to conduct all required facility security assessments agency-wide, taking into consideration the agency's organizational structure, available resources, and training needs. (Recommendation 2)

The Director of the Park Service should develop a plan to conduct all required facility security assessments agency-wide, taking into consideration the agency's organizational structure, available resources, and training needs. (Recommendation 3)

The Director of the Park Service should update the agency's facility security assessment methodology to comply with requirements in the ISC Standard, including a step to consider the consequence of each undesirable event. (Recommendation 4)

The Director of BLM should develop a facility security assessment methodology that complies with requirements in the ISC Standard to assess all undesirable events and consider all three factors of risk for each undesirable event. (Recommendation 5)

The Director of FWS should develop a facility security assessment methodology that complies with requirements in the ISC Standard to assess all undesirable events and consider all three factors of risk for each undesirable event. (Recommendation 6)

Agency Comments and Our Evaluation We provided a draft of this report to the Departments of Agriculture, Homeland Security, Interior, and Justice for their review and comment. The Forest Service, responding on behalf of the U.S. Department of Agriculture, generally agreed with the report and our recommendation and cited its efforts to develop a plan to complete required facility security assessments. The Forest Service's written comments are reproduced in appendix III. 
Interior, responding on behalf of BLM, FWS, and the Park Service, concurred with our recommendations and provided examples of actions the three agencies planned to take. Specifically, regarding our recommendation that BLM and the Park Service develop a plan to conduct facility security assessments agency-wide, BLM intends to revise its policy and develop such a plan, and the Park Service intends to develop a plan that includes training and tools so that park unit staff can conduct the required assessments. Regarding our recommendation that BLM, FWS, and the Park Service develop methodologies that comply with requirements in the ISC Standard, the agencies cited various efforts to do so, including revising policies and developing new tools, training, and data system modules. Interior's written comments are reproduced in appendix IV. The Department of Homeland Security provided a technical comment that we incorporated. The Department of Justice told us that it had no comments. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees; the Attorney General; and the Secretaries of Agriculture, Homeland Security, and the Interior. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or fennella@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made key contributions to this report are listed in appendix V. 
Appendix I: Objectives, Scope, and Methodology Our objectives were to examine, for the four federal land management agencies, (1) what is known about the number of threats and assaults against their employees, (2) the approaches the agencies used to protect their employees from threats and assaults and any factors affecting their ability to do so, and (3) the extent to which agencies met facility security assessment requirements. For the first objective, we obtained and analyzed data on threats and assaults against land management agency employees from the law enforcement databases of the Forest Service within the U.S. Department of Agriculture and the Bureau of Land Management (BLM), Fish and Wildlife Service (FWS), and National Park Service (Park Service) within the Department of the Interior for fiscal years 2013 through 2017. These data were the most recent available at the time we began our review. We also obtained and analyzed data from the Federal Bureau of Investigation (FBI) regarding its investigations into potential domestic terror threats to land management agencies. Each land management agency's law enforcement division records data on threats and assaults against employees, as part of its broader mission to enforce laws that safeguard employees and protect resources. The data systems, however, were not specifically designed for reporting threats and assaults against employees, and they do not include the suspect's motivation for a crime, such as anti-government extremist ideologies. Since each agency collects and maintains data in a different data system and has agency-specific reporting requirements for incidents, the data differ in how they were originally recorded by field law enforcement officers and how they were queried and reported by headquarters officials responding to our request for data. As such, if data were not entered, or not entered correctly, they would not have been captured in agency queries. 
Officials at the four land management agencies said that they queried their data systems to identify records of incidents that pertained to threats and assaults against employees. BLM, Forest Service, and Park Service officials then conducted record-level reviews and removed records that they determined were not threats or assaults, contained errors, were duplicative, or did not contain sufficient information to make a conclusive determination. We did not systematically review the records they removed. Information about each agency's data system and related data limitations is as follows: BLM. BLM maintains its data in the Incident Management, Analysis, and Reporting System (IMARS). IMARS is used by most Interior bureaus for incident management and reporting and to prevent, detect, and investigate known and suspected criminal activity. Each bureau uses a different, customized version of IMARS. BLM officials said that beginning in fiscal year 2014, BLM began collecting data on violations of federal statutes, regulations, and state laws during incidents. Prior to that, BLM used a generic description of each offense. Officials also said that when multiple offenses occur during an incident, the less serious offenses are unlikely to be entered into the system. Therefore, some offenses that occurred during incidents of threats and assaults may be excluded from these data. FWS. FWS maintains its data in the agency's Law Enforcement Management Information System (LEMIS). According to FWS documents, LEMIS is used to process and store investigations, intelligence, and other records. FWS officials said the agency changed data systems during our reporting time frame. Specifically, FWS originally stored fiscal year 2013 and 2014 data in the Law Enforcement-Information Management and Gathering System and imported the data into LEMIS in July 2014. 
We assessed the data across the two systems by comparing incidents per year and types of violations that occurred, and we found that the data were comparable. According to agency officials, they did not review the incidents before providing them to us; therefore, some incidents may not have been actual threats or assaults. Forest Service. The Forest Service maintains its data in the Law Enforcement and Investigations Management Attainment Reporting System (LEIMARS). Forest Service officials said LEIMARS is used to record criminal and claims activity in the national forests, which includes verified violations of criminal statutes and agency policy, as well as incidents that may result in civil claims for or against the government. Incidents are recorded in LEIMARS in one of three types of law enforcement report categories: (1) an incident report, which records when an offense occurred but the perpetrator was unknown; (2) a warning notice, which is issued when an offense occurred but the law enforcement officer determined that the offense was inadvertent or committed due to lack of understanding or misinformation; and (3) a violation notice, which is issued for an offense that violates the U.S. Code or Forest Service regulations and the perpetrator was known. We present these three types of reports as incidents. A Forest Service official identified 125 incidents for which the agency could not determine whether a threat or assault against an employee occurred. We excluded these 125 incidents from our analysis. Officials told us that they only provided data on the most serious offense occurring during an incident due to limitations on linking records in the Forest Service's data system; they also told us that there may be a small amount of overlap between violation notices and incident reports. Park Service. As with BLM, Park Service data are maintained in the IMARS data system. 
According to a Park Service official, some but not all Park Service incident records cite a federal statute or regulation. However, all Park Service incident records include offense codes—which are unique to the Park Service—that are associated with the type of violation, such as assault or disorderly conduct. Unlike with the statutes and regulations, a perpetrator does not need to be identified for the law enforcement officer to cite an offense code. Therefore, the Park Service provided data to us by offense code, and we were not able to present the data by the statute or regulation that was potentially violated. We also obtained data from the FBI on investigations into potential domestic terror threats to land management agencies. FBI investigation data are maintained in Sentinel, the FBI's case management system. FBI officials provided data from the FBI's domestic terrorism program on three types of investigations: assessments, preliminary investigations, and full investigations. We reported data on the full investigations because of the limited information available on assessments and preliminary investigations. Before providing the data to us, an FBI official reviewed the record of each domestic terrorism investigation initiated in fiscal years 2013 through 2017 to determine whether the investigation was relevant to threats to BLM, FWS, the Forest Service, or the Park Service. These data represent all potential violations known at the time the FBI agent first opened the case and therefore include various potential violations beyond threats and assaults against federal employees. According to agency officials, in some cases, the FBI agent opening the case may not have been able to fully identify all relevant subsections of the statute or regulation that was potentially violated. To account for this, we report the FBI's data at the statute or regulation level. 
Since we relied on the professional judgment of agency officials to review and interpret incident data, we may be unable to replicate the final data selection drawn from each agency's database, even if we retrieved the data using the same method and search criteria. We independently assessed the reliability of each agency's data by (1) reviewing related documentation about the data system; (2) conducting manual reviews of the data for missing data, outliers, and obvious errors; (3) reviewing related internal controls; and (4) interviewing agency officials knowledgeable about the data, among other things. In our interviews, we asked agency officials about data entry practices, data system capabilities and limitations, and circumstances whereby incidents of threats and assaults might not appear in the database. Based on our review, we determined that the data were sufficiently reliable for the purposes of reporting descriptive summary information on the number of threats and assaults against federal land management employees during fiscal years 2013 through 2017. To address our second objective, we examined policies and requirements regarding federal land management agencies' responsibilities for protecting employees against threats and assaults. We also interviewed headquarters and selected field unit officials about the agencies' approaches to protecting their employees from threats and assaults and factors that may affect their ability to do so, and we obtained supporting documentation where available. We conducted site visits from March through July 2018 to a nongeneralizable sample of 11 of the 35 regional or state offices and 14 field units across the federal land management agencies. We selected sites in Colorado, Nevada, Oregon, and Utah, since the majority of federal lands are located in the West and some field units in these states had been affected by actions of individuals motivated by anti-government ideologies. 
Specifically, we conducted site visits to five BLM field units, nine FWS field units, seven Forest Service field units, and four Park Service field units. The number of field units we interviewed depended on several factors, including how many field units regional and state offices invited to the meeting. Findings from the interviews we conducted at our site visits provide useful insights but cannot be generalized to those units we did not include in our review. Based on our site visit interviews, we identified four primary factors affecting agencies' abilities to protect their employees from threats and assaults. We also collected information from each agency on the number of field law enforcement officers they had at the end of fiscal years 2013 and 2018—the most recent year for which data were available—to analyze any changes in resources. We took steps to assess the reliability of these data, including comparing the data to agency budget justifications and interviewing agency officials, and found them to be sufficiently reliable for the purpose of reporting the number of field law enforcement officers agencies had in fiscal years 2013 and 2018. For the third objective, we examined government-wide requirements promulgated by the Interagency Security Committee (ISC) and documented in ISC's Risk Management Process for Federal Facilities, which we refer to in this report as the ISC Standard, and its related appendixes. We interviewed ISC officials to learn more about the development of the requirements in the ISC Standard and variations, if any, in how agencies are expected to implement them. To determine whether agencies met the requirement to conduct facility security assessments on all of their occupied facilities, we obtained documents on the agencies' inventories of occupied facilities and assessed whether the agencies had conducted security assessments on those facilities. 
We interviewed headquarters and field officials about their inventories and their plans, if any, for completing the remaining assessments. We also examined the extent to which agencies' facility security risk assessment methodologies complied with two key requirements in the ISC Standard. These requirements specify that methodologies must (1) consider all 33 of the undesirable events identified in the Standard and (2) evaluate the three factors of risk—threat, vulnerability, and consequence—for each undesirable event. We analyzed the agencies' methodologies and compared them against requirements in the ISC Standard. We also interviewed agency officials about the methodologies. We conducted this performance audit from November 2017 to September 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: The Interagency Security Committee's 33 Undesirable Events, as of June 2019

Appendix III: Comments from the U.S. Department of Agriculture

Appendix IV: Comments from the Department of the Interior

Appendix V: GAO Contact and Staff Acknowledgments

GAO Contact Anne-Marie Fennell, (202) 512-3841 or fennella@gao.gov. Staff Acknowledgments In addition to the individual named above, Casey L. Brown (Assistant Director), Tanya Doriss (Analyst in Charge), Charles W. Bausell, Charles A. Culverwell, John W. Delicath, Emily E. Eischen, Cindy K. Gilbert, Richard P. Johnson, Vanessa E. Obetz, Dan C. Royer, and Breanna M. Trexler made key contributions to this report.
Why GAO Did This Study A 2014 government report predicted that the rate of violent domestic extremist incidents would increase. In recent years, some high-profile incidents have occurred on federal lands, such as the armed occupation of an FWS wildlife refuge in 2016. Federal land management agencies manage nearly 700 million acres of federal lands and have law enforcement divisions that protect their employees and secure their facilities. GAO was asked to review how land management agencies protect their employees and secure their facilities. For the four federal land management agencies, this report examines, among other things, (1) what is known about the number of threats and assaults against their employees and (2) the extent to which agencies met federal facility security assessment requirements. GAO analyzed available government data on threats and assaults; examined agencies' policies, procedures, and documentation on facility security assessments; compared the agencies' methodologies against ISC requirements; and interviewed land management agency, ISC, and FBI officials. What GAO Found Data from the four federal land management agencies—the Forest Service within the U.S. Department of Agriculture and the Bureau of Land Management (BLM), Fish and Wildlife Service (FWS), and National Park Service (Park Service) within the Department of the Interior—showed a range of threats and assaults against agency employees in fiscal years 2013 through 2017. For example, incidents ranged from telephone threats to attempted murder against federal land management employees. However, the number of actual threats and assaults is unclear and may be higher than what is captured in available data for various reasons. For example, employees may not always report threats because they consider them a part of the job. 
Federal Bureau of Investigation (FBI) data for fiscal years 2013 through 2017 also showed that the FBI initiated fewer than 100 domestic terrorism investigations into potential threats against federal land management agencies. The majority of these investigations involved BLM and individuals motivated by anti-government ideologies. The four federal land management agencies have completed some but not all of the facility security assessments on their occupied federal facilities as required by the Interagency Security Committee (ISC). Officials at the four agencies said that they do not have the resources, expertise, or training to conduct assessments agency-wide. FWS has a plan to complete its assessments, but BLM, the Forest Service, and the Park Service do not. Such a plan could help these agencies address the factors that have affected their ability to complete assessments. The ISC also requires that agencies conduct assessments using a methodology that meets, among other things, two key requirements: (1) consider all of the undesirable events (e.g., arson and vandalism) identified as possible risks to facilities, and (2) assess the threat, vulnerability, and consequence for each of these events. The Forest Service's methodology meets these two requirements and the Park Service's methodology partially meets the requirements, but BLM and FWS have not yet established methodologies for conducting facility security assessments. Without developing a plan for conducting all of the remaining facility security assessments and using a methodology that complies with ISC requirements, agencies may not identify the risks their facilities face or identify the countermeasures—such as security cameras or security gates—they could implement to mitigate those risks. 
What GAO Recommends GAO is making six recommendations: that BLM, the Forest Service, and the Park Service develop a plan for completing facility security assessments and that BLM, FWS, and the Park Service take action to ensure their facility security assessment methodologies comply with ISC requirements. The agencies generally concurred with the recommendations.
Background OMB’s ERM Requirements and Guidance OMB provides guidance to federal managers on how to improve accountability and effectiveness of federal programs and operations by identifying and managing risks. OMB updated its Circular No. A-123 in July 2016 to establish management’s responsibilities for ERM. As part of the overall governance process, ERM calls for the consideration of a risk across the entire organization and how it may interact with other identified risks. When used appropriately, ERM is a decision-making tool that allows agency leadership to view risks across an organization and helps management understand an organization’s portfolio of top risk exposures, which could affect achievement of the agency’s goals and objectives. In December 2016, we issued a report that provided an overall framework for agencies to build an effective ERM program. In July 2016, OMB also updated Circular No. A-11, Preparation, Submission, and Execution of the Budget. In Circular No. A-11, OMB referred agencies to Circular No. A-123 for requirements related to ERM implementation, including for developing a risk profile as a component of the agency’s annual strategic review. A risk profile is a prioritized inventory of the most significant risks identified and assessed through the risk assessment process. It considers risks from a portfolio perspective, identifies sources of uncertainty that are both positive (opportunities) and negative (threats), and facilitates the review and regular monitoring of risks. Together, these two OMB circulars constitute the ERM policy framework for executive agencies by integrating and operationalizing specific ERM activities and helping to modernize existing risk management efforts. 
Internal Control Requirements and Guidance Standards for Internal Control in the Federal Government describes internal control as a process put in place by an entity’s oversight body, management, and other personnel that provides reasonable assurance that objectives related to performing operations effectively and efficiently, producing reliable internal and external reports, and complying with applicable laws and regulations will be achieved. Internal control serves as the first line of defense in safeguarding assets. Its importance to federal agencies is further reflected in permanent requirements enacted into law. The internal control processes required by FMFIA and the Standards for Internal Control in the Federal Government help to form an integrated governance structure designed to improve mission delivery, reduce costs, and focus corrective actions toward key risks. OMB Circular No. A-123 precludes agencies from concluding that their internal control is effective if there are one or more material weaknesses identified from its assessment. Air Force’s Annual Statement of Assurance and Financial Audit As a component of DOD, the Air Force is required to (1) identify and manage risks, (2) establish and operate an effective system of internal control, (3) assess and correct control deficiencies, and (4) report on the effectiveness of internal control through an annual Statement of Assurance. In addition, the Chief Financial Officers Act of 1990 (CFO Act), as amended by the Government Management Reform Act of 1994 and implemented by guidance in OMB Bulletin No. 19-03, Audit Requirements for Federal Financial Statements (August 27, 2019), requires the Air Force to annually undergo a financial statement audit. However, since 1990, the Air Force has continued to be unable to demonstrate basic internal control that would allow it to pass a financial statement audit, which has contributed to DOD’s financial management remaining on the GAO High-Risk List since 1995. 
For fiscal year 2018, the Air Force reported 11 material weaknesses in internal control over operations and 14 material weaknesses in internal control over reporting in its Statement of Assurance. For fiscal year 2019, it reported the same number of operations-related material weaknesses, and its reporting-related material weaknesses increased to 25. During the Air Force's fiscal years 2018 and 2019 financial statement audits, independent auditors considered the Air Force's internal control over financial reporting to determine the audit procedures appropriate for expressing an opinion on the financial statements. The independent auditors disclaimed an opinion on the Air Force's fiscal years 2018 and 2019 financial statements, stating that the Air Force continued to have unresolved accounting issues, and for each year, the auditors reported 23 material weaknesses in internal control over financial reporting. These material weaknesses included control deficiencies in processes related to the Air Force's mission-critical assets and involved a lack of policies and procedures, inadequate financial information systems and reporting, and inaccurate and incomplete information in its accountability records and financial reports. Air Force Has Not Fully Integrated ERM into Its Management Practices The Air Force's efforts to implement ERM are in the early stages, and accordingly, it has not fully incorporated ERM into its management practices. Since the July 2016 update to OMB Circular No. A-123 required agencies to implement ERM, the Air Force has been leveraging and relying on its existing risk management practices. To date, these practices have focused on the organizational unit level and not on the entity level, as required by OMB Circular No. A-123. The Air Force plans to integrate ERM increasingly into its management practices over the next several years, with expectations of a fully developed ERM approach after fiscal year 2023. 
The Air Force has taken the initial steps to establish an ERM governance structure, define risk classifications, and develop its ERM framework. For instance, the Air Force has drafted charters updating responsibilities for two senior management advisory councils—(1) the Enterprise Productivity Improvement Council (EPIC) and (2) the Executive Steering Committee (ESC)—to implement OMB Circular No. A-123. EPIC will oversee the agency's risk management function, with a specific emphasis on the regular assessment of risk and the approval of risk responses and the Air Force's risk profile. ESC will lead the implementation, assessment, and documentation of risk management over financial reporting, financial systems, all associated activities, and oversight with respect to the Air Force's internal control program. EPIC is designed to focus exclusively on potential operational material weaknesses, and ESC will focus on potential financial reporting and financial systems material weaknesses. Air Force officials informed us that both councils would share responsibility for compliance objectives and resulting material weaknesses. During our audit, we analyzed the Air Force's financial reports beginning with those for fiscal year 1999 and noted that the agency and the external auditors have generally reported material weaknesses each year involving the tracking, reporting, location, accountability, and cost of certain mission-critical assets. These weaknesses identified risks that decreased the Air Force's ability to perform operations efficiently, prepare reliable financial reports, and comply with applicable laws and regulations. EPIC and ESC currently assess proposed material weaknesses that the primary reporting elements (PRE) submit and determine whether to recommend them to the Secretary of the Air Force for reporting in the annual Statement of Assurance. 
However, the Air Force’s governance structure does not include a mechanism for EPIC or ESC to oversee the management of risk associated with material weaknesses and consider its effect across the entire agency. Based on our review of the draft charters and documentation from governance meetings, the Air Force included provisions for ESC to identify material weaknesses related to financial reporting and financial systems and EPIC to identify material weaknesses related to operations objectives. However, there were no charter provisions for either council to identify, assess, respond to, and report on the risks associated with those material weaknesses or material weaknesses identified through external audits. A material weakness, reported by either the agency or an external auditor, by definition indicates a significant decrease in an agency’s ability, during the normal course of operations, to achieve objectives and address related risks. Under OMB Circular No. A-123, an agency’s risk management governance structure helps ensure that the agency identifies risks that have the most significant effect on the mission outcomes of the agency. Without a thorough and integrated ERM governance structure that includes oversight responsibilities managing risks associated with material weaknesses in internal control, there is an increased risk that the Air Force will not properly identify, assess, and respond to significant entity-level risks. Air Force Has Not Designed a Comprehensive Approach for Assessing Internal Control, Including Processes Related to Mission-Critical Assets The Air Force’s current internal control assessment process is not designed to facilitate the timely identification and correction of internal control deficiencies or to be used to support the Air Force’s annual Statement of Assurance. Specifically, Air Force management has not designed an adequate process for assessing internal control. 
Further, the process does not focus on areas with the greatest risk, such as mission- critical assets. In addition, the reviews of mission-critical assets in fiscal years 2018 and 2019 in support of the financial statement audit did not result in adequate assessments of internal control. The Air Force’s policy for assessing the effectiveness of its internal control system and for preparing the agency’s annual Statement of Assurance is based on DOD Instruction 5010.40, Managers’ Internal Control Program Procedures, dated May 2013. The Air Force’s policy is outlined in Air Force Policy Directive 65-2, Managers Internal Control Program. This policy is supported by the procedures outlined in Air Force Instruction (AFI) 65-201, Managers Internal Control Program Procedures, dated February 2016, which the Air Force currently is revising to address the July 2016 OMB Circular No. A-123 update. The Air Force provides additional guidance to supplement AFI 65-201 in its Statement of Assurance Handbook and its Internal Control Playbook. The Air Force’s OMB Circular No. A-123 program comprises 17 designated PREs, including the Secretariat and Air Force staff offices, major commands, the Army and Air Force Exchange Service, and direct- reporting units. The Air Force subdivides each PRE along organizational lines into more than 6,500 organizational assessable units (organizational units), such as a squadron or wing, and other specific programs and functions, where it evaluates internal controls per AFI 65-201. Each of the organizational units has an assessable unit manager (unit manager) who has authority over the unit’s internal control, including continual monitoring, testing, and improvement. Figure 1 illustrates how the Air Force’s organizational structure informs its overall annual Statement of Assurance. 
The Air Force requires each unit manager to submit an annual supporting statement of assurance providing the manager’s opinion on whether the unit has reasonable assurance that its internal controls are effective. The units submit the statements to the Assistant Secretary of the Air Force, Financial Management and Comptroller (SAF/FM), the office responsible for OMB Circular No. A-123 implementation and compilation of the annual Statement of Assurance. Based on discussions with Air Force officials, SAF/FM uses the unit managers’ supporting statements of assurance to develop the overall Air Force annual Statement of Assurance.

Air Force Has Not Designed an Adequate Process for Assessing Internal Control

The Air Force’s internal control assessment process does not require (1) an assessment of all required elements of an effective internal control system; (2) test plans that specify the nature, scope, and timing of procedures to conduct; and (3) management validation of results. In addition, existing policies and procedures that staff follow to perform the assessments do not fully implement OMB Circular No. A-123. Further, the Air Force provided inadequate training to those responsible for conducting and concluding on the internal control assessments.

Assessment of Internal Control Not Designed to Evaluate All Required Elements

Although not required by policy, the Air Force performed its first assessment of the five components of internal control during fiscal year 2019 through an SAF/FM review of entity-level controls, which are controls that have a pervasive effect on an entity’s internal control system and may pertain to multiple components. Based on this assessment, SAF/FM concluded in the Air Force’s Statement of Assurance for fiscal year 2019 that three components of internal control (i.e., risk assessment, control activities, and information and communication) were not designed, implemented, or operating effectively.
Although SAF/FM performed this assessment in 2019, the assessment did not include a determination of whether each internal control principle was designed, implemented, and operating effectively. Also, there was no indication that the Air Force designed the assessment of entity-level controls to be pertinent to all Air Force objectives, such as those related to operations, reporting, or compliance. In addition, SAF/FM did not provide the assessment results to the unit managers for input or consideration in their unit-specific control assessments and supporting statements of assurance. The Air Force’s Internal Control Playbook directs unit managers to assess the design and operating effectiveness of the relevant entity-level controls within their purview. However, for fiscal year 2019, SAF/FM performed this assessment, and officials informed us that it was not their intent for unit managers to assess entity-level controls. According to OMB Circular No. A-123, management must summarize its determination of whether each of the five components and 17 principles from Standards for Internal Control in the Federal Government are designed, implemented, and operating effectively and components are operating together in an integrated manner. The determination must be a “yes/no” response. If one or more of the five components are not designed, implemented, and operating effectively, or if they are not operating together in an integrated manner, then an internal control system is ineffective. AFI 65-201 states, as part of its discussion on assessing internal control over financial reporting, that OMB Circular No. A-123 prescribes a process to evaluate controls at the entity level for the five components of internal control (i.e., control environment, risk assessment, control activities, information and communication, and monitoring). 
The Air Force’s assessment lacked required determinations related to internal control principles because the Air Force lacked policies or procedures for the following:

- Clearly delineating who within the Air Force (e.g., unit managers or SAF/FM) is responsible for assessing the components and principles of internal control, how often assessments are performed, at what level (e.g., entity or transactional) components and principles are to be evaluated, what objectives are covered in the assessment of entity-level controls, to whom to communicate the results if the results are relevant to others performing assessments of internal control, and what Air Force guidance to follow.

- Documenting management’s summary, whether performed by the unit managers as outlined in the guidance or by SAF/FM as performed during fiscal year 2019, of its determination of whether each component and principle is designed, implemented, and operating effectively and whether components are operating together in an integrated manner.

By not ensuring that management is assessing whether each internal control component and principle is designed, implemented, and operating effectively, the Air Force cannot determine whether internal control is effective at reducing the risk of not achieving its stated mission and objectives to an acceptable level. Moreover, given the entity-wide relevance of SAF/FM’s conclusions, unit managers may not be aware of all the necessary information with which to draw conclusions about the effectiveness of their organizational units’ internal control. Further, management’s assurances on internal control effectiveness, as reported in the Statement of Assurance, may not appropriately represent the effectiveness of the Air Force’s internal control.
Assessment of Internal Control Not Designed to Use Consistent Test Plans

The Air Force did not have a process in place to base its annual assessment of internal control and Statement of Assurance preparation on uniform testing performed across its agency. Although the Air Force had standard test plans for reviews associated with financial reporting objectives, SAF/FM could not demonstrate what procedures are performed to support its assessment of internal control over its operational, internal reporting, and compliance objectives. Specifically, for these objectives, the Air Force did not develop guidance for those responsible for assessing internal controls on

- which tests to conduct to obtain the best evidence of whether controls are designed, implemented, and operating effectively;
- how much testing is needed in each area;
- when to conduct the tests;
- how to ensure that current year conclusions are based on current year testing; and
- how assessment procedures are to be adjusted or amended to reflect a consideration of prior year self-identified control deficiencies and internal and external audit results.

Additionally, standard test plans for the reviews conducted as part of the Air Force’s financial statement audit remediation efforts did not include guidance on how to consider prior year self-identified control deficiencies and internal and external audit results in determining the nature, timing, and extent of procedures to be conducted for the current year. Further, although the Air Force outlines 20 overall objectives in its 2019 through 2021 Business Operations Plan (dated January 2019), it did not document the specific procedures the Air Force planned and performed to support an evaluation of its internal control over these 20 objectives.
According to Standards for Internal Control in the Federal Government, management should establish and operate activities to monitor the internal control system and evaluate the results and should remediate identified internal control deficiencies on a timely basis. For example, as part of its monitoring activities, agency management responsible for the OMB Circular No. A-123 program could design a test plan or establish a baseline to monitor the current state of the internal control system and compare that baseline to the results of its internal control tests. The Air Force’s assessment of internal control and Statement of Assurance are not clearly supported by completed test plans or other documented monitoring activities because SAF/FM does not have a policy or procedures for conducting internal control assessments that require documented test plans that (1) tie back to specific objectives included in the Business Operations Plan; (2) specify the nature, scope, and timing of procedures to conduct under the OMB Circular No. A-123 assessment process; and (3) reflect a consideration of prior year self- identified control deficiencies and results of other internal and external audits. By not ensuring that its more than 6,500 unit managers are evaluating internal control based on the agency’s established baseline, the Air Force cannot ensure that it is consistently and effectively assessing its internal control in order to timely identify and correct deficiencies or that its design of internal control reduces, to an acceptable level, the risk of not achieving agency operational, reporting, and compliance objectives. As a result, Air Force management’s assurances on internal control, as reported in the overall agency Statement of Assurance, may not appropriately represent its internal control effectiveness. 
Assessment of Internal Control Not Designed to Include Management Validation of Results

Air Force management did not have a process to validate whether its unit managers appropriately performed and documented their internal control assessments. During our review, Air Force management was uncertain about how many internal control assessments were being performed or by whom. SAF/FM officials initially stated that there were 5,567 organizational units responsible for assessing internal control, but officials later informed us that the actual number was more than 6,500. Furthermore, Air Force officials were unable to provide information on how many organizational unit managers failed to report on their specific internal control assessments or received waivers from performing such assessments. Finally, management lacked a process to ensure that results used to compile the current year Statement of Assurance are based upon current fiscal year assessments. The Air Force requires unit managers to assess internal control and submit results to SAF/FM through the automated statement of assurance submission system. SAF/FM then compiles the supporting statements of assurance submissions and prepares the Air Force’s annual Statement of Assurance. However, we found that the automated system that collects the annual assessments from more than 6,500 unit managers allows these managers to import internal control testing activities from the prior fiscal year. Air Force officials were unable to provide information about how they ensure that unit managers were not importing prior year results without performing current year testing. OMB Circular No. A-123 requires documentation to demonstrate and support conclusions about the design, implementation, and operating effectiveness of an entity’s internal control system, and requires agencies to consider carefully whether systemic weaknesses exist that adversely affect internal control across organizational or program lines.
The Air Force’s process lacks management validation of results because it has not developed a documented policy or procedures to ensure that management can readily review and validate the results of its internal control testing. The Air Force has not required SAF/FM to validate (1) the number of organizational units reporting for its overall internal control assessment; (2) how it tested control procedures, what results it achieved, and how it derived conclusions from those results; and (3) whether it based the results used to compile the current year Statement of Assurance on current fiscal year assessments. Additionally, when PRE management waives assessments, SAF/FM does not have a process to track waivers and assess how they affect the current year assessment of internal control, determination of systemic weaknesses, and compilation of the Air Force’s overall Statement of Assurance. By not validating the internal control assessment results, Air Force management cannot ensure that the assessment was performed as expected to support related conclusions and timely identify internal control deficiencies. Further, management’s assurance on internal control, as reported in the overall Statement of Assurance, may not appropriately represent the internal control effectiveness.

Guidance for Assessment of Internal Control Does Not Properly Define Material Weaknesses and Internal Control

Air Force guidance for its assessment of internal control neither accurately nor completely reflects definitions included in OMB Circular No. A-123.
For example, AFI 65-201 and the Statement of Assurance Handbook provided to unit managers for conducting internal control assessments, and the Internal Control Playbook that the Air Force developed in August 2019 to address internal control over reporting objectives, do not include the complete definitions of the four material weakness categories for deficiencies related to (1) operations, (2) reporting, (3) external financial reporting, and (4) compliance objectives, consistent with guidance in OMB Circular No. A-123. Additionally, the handbook does not define internal control as a process that provides reasonable assurance that objectives will be achieved or an internal control system as a continuous built-in component of operations, affected by people, that provides reasonable assurance that an entity’s objectives will be achieved. Although the playbook does adequately define internal control and a system of internal control, the Air Force developed this guidance after we initiated our review, and the guidance only addresses internal control over reporting objectives and not operational and compliance objectives. These inaccuracies and incomplete descriptions occurred because the Air Force did not provide its internal control assessment guidance preparers or reviewers with training to assist them in writing and reviewing the guidance to ensure proper application of the fundamental concepts of internal control and OMB Circular No. A-123, such as those related to definitions of internal control and material weakness. By not ensuring that Air Force guidance reflects accurate and complete definitions included in OMB Circular No. A-123, the Air Force is at increased risk that its officials performing internal control assessments will not properly conclude on the results; therefore, management’s assurances on internal control, as reported in the Statement of Assurance, may not appropriately represent the effectiveness of internal control. 
Air Force Lacks Adequate Training for Employees on How to Perform Assessments of Internal Control

Among other things, OMB Circular No. A-123 requires staff to identify objectives, assess related risks, document internal controls, evaluate the design of controls, conduct appropriate tests of the operating effectiveness of controls, report on the results of these tests, and appropriately document the assessment procedures. However, the Air Force’s training provided to unit managers responsible for assessing internal control lacks sufficient instructions on how to perform such assessments. Specifically, the current annual training provided by SAF/FM

- lacks instruction on how to prepare documentation to adequately support conclusions, identify and test the key internal controls, and evaluate and document test results;
- limits discussion of OMB Circular No. A-123 internal control assessments to internal control over external financial reporting objectives and does not cover internal control over operational, compliance, and internal reporting objectives;
- lacks adequate definitions of material weaknesses included in OMB Circular No. A-123;
- lacks instruction on how to interpret, respond to, and correct self-identified deficiencies (control deficiencies, significant deficiencies, and material weaknesses); and
- is not required for individuals performing reviews related to external financial reporting.

SAF/FM officials informed us that the definitions of material weakness and instructions on how to interpret, respond to, and correct deficiencies were included in other guidance documents, such as the newly created Internal Control Playbook. However, the Air Force did not provide the playbook to PREs during the fiscal year 2019 training, and it is not officially named as guidance in the Air Force’s policy for assessments of internal control.
Although the Air Force has described the playbook as supplemental guidance, it does not refer to the playbook as such in its policy for assessing the effectiveness of its system of internal control to provide reasonable assurance that operational, reporting, and compliance objectives are achieved. These inadequacies occurred because SAF/FM has not fully evaluated and incorporated the requirements for assessing an internal control system into its training and has not designed training that (1) enhances skills in evaluating an internal control system and documenting the results; (2) reflects all OMB Circular No. A-123 requirements, such as those related to assessing controls for all objectives and determining material weaknesses; and (3) is provided to all who are responsible for performing internal control assessments. According to federal internal control standards, management should demonstrate a commitment to developing competent individuals. For example, management could provide training for employees to develop skills and competencies needed for key roles and responsibilities in assessing internal control. Without appropriate training, those responsible for assessing internal control may not do so adequately enough to identify internal control deficiencies timely and support the agency’s internal control assessments with appropriate documentation and summarization of the results.

Air Force Has Not Designed a Process for Assessing Internal Control Based on Risk

OMB Circular No. A-123 requires an agency to evaluate whether a system of internal control reduces the risk of not achieving the entity’s objectives using a risk-based assessment approach. However, the Air Force’s current AFI 65-201 approach calls for assessing internal control at more than 6,500 organizational units without regard to quantitative or qualitative risks.
As previously discussed, the Air Force lacks procedures to verify whether its unit managers are performing internal control assessments as intended and does not provide guidance for uniform testing across the organization. Therefore, the Air Force’s current approach for assessing internal control does not ensure that areas of greatest risk are addressed, such as mission-critical assets, and instead may unnecessarily focus on areas of lower risk. As a result, the Air Force may not be using resources efficiently. The Air Force’s current design of assessing internal control does not ensure, at a minimum, the evaluation of internal control over areas key to meeting its mission. Specifically, the Air Force does not have a policy requiring evaluation of whether its internal control over processes related to areas of highest risk—such as processes related to mission-critical assets, including equipment, government-furnished equipment, and weapons-system spare parts managed and held by contractors and working capital fund inventory—reduces the risk of not achieving specific operation, reporting, or compliance objectives to an acceptable level. The Acting Secretary of Defense, during fiscal year 2019, emphasized two of these areas—government property in the possession of contractors, which includes government-furnished equipment, and working capital fund inventory—as high priority for corrective actions related to financial statement audit remediation. The Air Force’s current approach for assessing internal control calls for more than 6,500 organizational units to perform assessments without regard to risk because the Air Force has not developed a policy or procedures providing guidance on how to perform the assessment using a risk-based approach. 
A risk-based approach provides a methodology for Air Force management to focus and prioritize its internal control assessments on areas and activities of greater risk and importance to accomplishing mission and strategic objectives. By not evaluating internal control with a risk-based approach, Air Force management lacks the assurance that resources are used efficiently to assess key controls associated with achieving Air Force objectives subject to the highest risks along with those designated as high priority by agency management, such as controls over accounting for, managing, and reporting on mission-critical assets.

Current Reviews Do Not Adequately Assess Internal Control over Processes Related to Mission-Critical Assets

Although the Air Force has not designed a process for performing OMB Circular No. A-123 internal control assessments based on risk, it did review certain business process assessable units, such as mission-critical assets, as part of its financial statement audit remediation efforts. However, the Air Force’s reviews of internal control over processes related to mission-critical assets did not meet OMB Circular No. A-123 requirements or federal internal control standards for evaluating a system of internal control. During fiscal years 2018 and 2019, the Air Force engaged the Air Force Audit Agency (AFAA) to review control activities for five processes related to mission-critical assets and instructed business process assessable unit leads to conduct additional internal control reviews for select mission-critical asset areas during fiscal year 2019. However, the organizational unit managers did not formally consider the results of these reviews when concluding on their assessments of internal control.
For fiscal year 2018, AFAA performed certain agreed-upon procedures to confirm current transactional processes and related internal control over external financial reporting for five mission-critical asset areas as documented in the related business process cycle memorandums. In order to perform the procedures, AFAA used SAF/FM-prepared templates to confirm certain processes and key controls included in the respective process cycle memorandums. However, the procedures SAF/FM instructed AFAA to perform in 2018 did not meet the requirements of an assessment of an internal control system as prescribed in OMB Circular No. A-123. Specifically:

- Procedures to test design of controls did not include steps for evaluating whether the controls individually or in combination with other controls would achieve objectives or address related risks. Instead, SAF/FM instructed AFAA to confirm whether the process cycle memorandums accurately reflected the controls and processes in place.

- Procedures to test operating effectiveness of controls were conducted even though there was no determination of whether the controls were designed to achieve objectives or address related risks.

- Procedures performed involved the use of process cycle memorandums as a baseline, which, as noted by the Air Force’s auditor, did not always reflect the current process, and there was no process in place for management to assess whether the differences related to an inaccurate cycle memorandum or improper implementation of the process.

For fiscal year 2019, tests continued to (1) address operating effectiveness without first determining if the controls were designed to meet objectives and reduce risks and (2) involve the use of process cycle memorandums as a baseline that did not always reflect the current business process.
For fiscal year 2019, business process assessable unit leads conducted the additional internal control reviews for select processes related to mission-critical assets based on the templates for tests of design and tests of operating effectiveness in Internal Control Playbook appendixes. Similar to the procedures developed for AFAA, the Air Force did not devise the fiscal year 2019 playbook’s template procedures to support conclusions on the design, implementation, and operating effectiveness of internal control over processes that are key to achieving Air Force operational, internal reporting, and compliance objectives. For example, the procedures that the Air Force used to assess the design of internal control over a process related to spare engines at one air base only considered controls related to external financial reporting objectives. The Air Force did not provide evidence that it tested additional controls key to achieving internal reporting, operating, and compliance objectives, such as improving and strengthening business operations and harnessing the power of data for timely decision-making and mission success, or evidence that the Air Force would test such controls during future reviews. Additionally, the Air Force lacked a process for the organizational unit managers or PREs to consider the results of internal control reviews performed at the business process assessable unit level in assessing internal control when they assess and report on the status of internal control for the overall Air Force Statement of Assurance (see fig. 2). Specifically, the current and draft AFI 65-201 and Statement of Assurance Handbook do not include procedures for how information gathered from AFAA agreed-upon procedures or business process unit leads’ testing of internal control over processes related to mission-critical assets is considered in the conclusions reported through the organizational unit managers’ supporting statements of assurance. OMB Circular No. 
A-123 requires that management, in accordance with federal standards for internal control, evaluate whether a system of internal control reduces the risk of not achieving the entity’s objectives related to operations, reporting, or compliance to an acceptable level. According to the federal internal control standards, when evaluating the design of internal control, management determines if controls individually and in combination with other controls are capable of achieving an objective and addressing related risks. A control cannot be effectively operating if it was not properly designed and implemented. Further, management should establish and operate monitoring activities to monitor the internal control system and evaluate the results. For example, once established, management can use the baseline, or current state of the internal control system, as criteria in evaluating the internal control system and make changes to reduce the difference between the criteria (what is expected) and the condition (what actually occurred). Also, per OMB Circular No. A-123, an agency may document its assessment of internal control using a variety of information sources, such as management reviews conducted expressly for the purpose of assessing internal control (e.g., AFAA agreed-upon procedures and Internal Control Playbook procedures). Air Force reviews of internal control over processes related to mission-critical assets were inadequate because SAF/FM did not include in the agreed-upon procedures or the Internal Control Playbook (1) tests of design to determine if controls individually and in combination with other controls are capable of achieving an objective and addressing related risks; (2) tests of implementation and operating effectiveness conducted only after a favorable assessment of the design of controls; and (3) a baseline that has accurate descriptions of business processes and identifies key internal controls as designed by management to respond to risks.
Further, SAF/FM did not document its approach for using results from the AFAA agreed-upon procedures in assessing the Air Force’s internal control over processes related to mission-critical assets because the Air Force did not provide guidance establishing the process and reporting lines of all the sources of information that it considered in preparing its overall Statement of Assurance. Also, SAF/FM did not have a documented process for integrating the results of internal control reviews performed at the business process assessable unit level into the organizational units’ assessment of internal control. Moreover, the Air Force did not have guidance describing how often, through which conduit, or when the results from the business process internal control reviews were to be provided to relevant organizational units, or how this information would affect conclusions made in a unit’s respective assurance statement. By not comprehensively evaluating internal control over processes related to mission-critical assets, the Air Force is at increased risk that it may not timely identify internal control deficiencies and may lack reasonable assurance over the effectiveness of internal control over processes accounting for mission-critical assets. In addition, without performing internal control assessments in accordance with requirements or having a formal process to consider the results of the AFAA agreed-upon procedures and the Internal Control Playbook procedures in the organizational unit managers’ assessment process, the Air Force increases the risk that its assessment of internal control and related Statement of Assurance may not appropriately represent the effectiveness of internal control.

Conclusions

Air Force senior leaders work to achieve complex and inherently risky objectives across the agency, while managing over $230 billion in mission-critical assets available to carry out its mission.
To reduce the risk of not achieving its objectives or efficiently managing its resources, the Air Force needs to implement an ERM capability that is integrated with an effective system of internal control, as outlined in OMB Circular No. A-123 and federal standards for internal control. Although the Air Force has been working to improve its risk management and internal control practices, including remediation of deficiencies in its internal control over financial reporting related to mission-critical assets, it still faces significant challenges. For example, the agency continues to have difficulties with tracking and reporting, with reasonable accuracy, financial information about its mission-critical assets that directly affect its ability to efficiently support the warfighter, achieve its objectives, and accomplish its mission through reliable, useful, and readily available information. Without an effective ERM governance structure, there is an increased risk that the Air Force will not properly identify, assess, and respond to significant entity-level risks. In addition, by not comprehensively implementing and evaluating its internal control system, the Air Force cannot ensure that it is timely identifying and correcting internal control deficiencies or effectively reducing, to an acceptable level, the risk of not achieving its objectives. Further, Air Force management’s assurances on internal control, as reported in the overall agency Statement of Assurance, may not appropriately represent its internal control effectiveness.

Recommendations for Executive Action

We are making the following 12 recommendations to the Air Force:

The Secretary of the Air Force should develop and implement procedures for an ERM governance structure that includes oversight responsibilities for identifying, assessing, responding to, and reporting on the risks associated with agency material weaknesses from all relevant sources.
These procedures should clearly demonstrate that risks associated with material weaknesses are considered by Air Force governance, as a whole, and are mitigated appropriately to achieve goals and objectives. (Recommendation 1)

The Secretary of the Air Force should develop policies or procedures for assessing internal control to require (1) clearly delineating who within the Air Force is responsible for evaluating the internal control components and principles, how often they are to perform the evaluation, the level (e.g., entity or transactional) of the evaluation, what objectives are covered in the assessment, to whom to communicate the results if they are relevant to others performing assessments of internal control, and what guidance to follow; (2) documenting management's determination of whether each component and principle is designed, implemented, and operating effectively; and (3) documenting management's determination of whether components are operating together in an integrated manner. (Recommendation 2)

The Secretary of the Air Force should develop policies or procedures for assessing internal control to require the use of test plans that (1) tie back to specific objectives to be achieved as included in the Business Operations Plan; (2) specify the nature, scope, and timing of procedures to conduct under the OMB Circular No. A-123 assessment process; and (3) reflect a consideration of prior year self-identified control deficiencies and results of internal and external audits. (Recommendation 3)

The Secretary of the Air Force should develop policies or procedures for assessing internal control to require SAF/FM to validate (1) the number of organizational units reporting for its overall internal control assessment; (2) how control procedures were tested, what results were achieved, and how conclusions were derived from those results; and (3) whether the results used to compile the current year report are based on current fiscal year's assessments.
(Recommendation 4)

The Secretary of the Air Force should develop policies or procedures for assessing internal control to require SAF/FM to assess how waivers affect the current year assessment of internal control, the determination of systemic weaknesses, and the compilation of the Air Force's overall Statement of Assurance. (Recommendation 5)

The Secretary of the Air Force should require that developers of the policy and related guidance associated with designing the procedures for conducting OMB Circular No. A-123 assessments receive recurring training and are appropriately skilled in conducting internal control assessments and are familiar with Standards for Internal Control in the Federal Government. (Recommendation 6)

The Secretary of the Air Force should analyze all definitions included in Air Force ERM and internal control assessment policy and related guidance to ensure that all definitions and concepts are defined correctly. (Recommendation 7)

The Secretary of the Air Force should require SAF/FM to design recurring training for those who will assess internal control that (1) includes enhancing their skills in evaluating the internal control system and documenting results; (2) reflects all OMB Circular No. A-123 requirements, such as those related to identifying objectives, evaluating deficiencies, and determining material weaknesses; and (3) is provided to all who are responsible for performing internal control assessments. (Recommendation 8)

The Secretary of the Air Force should develop policy or procedures consistent with OMB Circular No. A-123 to assess the system of internal control using a risk-based approach.
(Recommendation 9)

The Secretary of the Air Force should develop procedures to assess internal control over processes related to mission-critical assets, including (1) tests of design that evaluate whether controls are capable of achieving objectives, (2) tests of effectiveness only after a favorable assessment of the design of the control, and (3) a baseline that has accurate descriptions of business processes and identifies key internal controls as designed by management to respond to risks. (Recommendation 10)

The Secretary of the Air Force should establish a process and reporting lines of all the sources of information, including reviews performed of internal control processes related to mission-critical assets, that will be considered in the Secretary's Statement of Assurance. (Recommendation 11)

The Secretary of the Air Force should develop procedures to require coordination between business process leads and the Air Force's unit managers to ensure that mission-critical asset–related internal control deficiencies are considered in the unit managers' assessments of internal control and related supporting statements of assurance. These procedures should include how, when, and with what frequency the results from the business process internal control reviews should be provided to relevant organizational units for consideration in their respective assurance statements. (Recommendation 12)

Agency Comments

We provided a draft of this report to the Air Force for review and comment. In written comments, the Air Force concurred with all 12 of our recommendations and cited actions to address them. The Air Force's comments are reproduced in appendix I. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Under Secretary of Defense (Comptroller)/Chief Financial Officer, the Secretary of the Air Force, the Assistant Secretary of the Air Force (Financial Management and Comptroller), and other interested parties.
In addition, the report is available at no charge on the GAO website at https://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2989 or kociolekk@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II.

Appendix I: Comments from the Department of the Air Force

Appendix II: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, John Sawyer (Assistant Director), Russell Brown, Anthony Clark, Oliver Culley, Eric Essig, Patrick Frey, Jason Kelly, Aaron Ruiz, and Vanessa Taja made key contributions to this report.
Why GAO Did This Study

OMB Circular No. A-123 requires agencies to provide an annual assurance statement that represents the agency head's informed judgment as to the overall adequacy and effectiveness of internal controls related to operations, reporting, and compliance objectives. Although the Air Force is required annually to assess and report on its control effectiveness and to correct known deficiencies, it has been unable to demonstrate basic internal control, as identified in previous audits, that would allow it to report, with reasonable assurance, the reliability of internal controls, including those designed to account for mission-critical assets. This report, developed in connection with fulfilling GAO's mandate to audit the U.S. government's consolidated financial statements, examines the extent to which the Air Force has incorporated ERM into its management practices and designed a process for assessing internal control, including processes related to mission-critical assets. GAO reviewed Air Force policies and procedures and interviewed Air Force officials on their process for fulfilling ERM and internal control assessments.

What GAO Found

The Air Force's efforts to implement Enterprise Risk Management (ERM) are in the early stages, and accordingly, it has not fully incorporated ERM into its management practices as outlined in Office of Management and Budget (OMB) Circular No. A-123. As a result, the Air Force is not fully managing its challenges and opportunities from an enterprise-wide view. Until it fully incorporates ERM—planned for some time after 2023—the Air Force will continue to leverage its current governance and reporting structures as well as its existing internal control reviews. The Air Force has not designed a comprehensive process for assessing internal control, including processes related to mission-critical assets.
GAO found that existing policies and procedures that Air Force staff follow to perform internal control assessments do not accurately capture the requirements of OMB Circular No. A-123. For example, the Air Force does not require (1) an assessment of each internal control element; (2) test plans that specify the nature, scope, and timing of procedures to conduct; and (3) validation that the results of internal control tests are sufficiently clear and complete to explain how units tested control procedures, what results they achieved, and how they derived conclusions from those results. Also, Air Force guidance and training were not adequate for conducting internal control assessments. In addition, GAO found that the Air Force did not design its assessment of internal control to evaluate all key areas that are critical to meeting its mission objectives as part of its annual Statement of Assurance process. Furthermore, GAO found that procedures the Air Force used to review mission-critical assets did not (1) evaluate whether the control design would serve to achieve objectives or address risks; (2) test operating effectiveness after first determining if controls were adequately designed; (3) use process cycle memorandums that accurately reflected the current business process; and (4) evaluate controls it put in place to achieve operational, internal reporting, and compliance objectives. GAO also found that the results of reviews of mission-critical assets are not formally considered in the Air Force's assessment of internal control. Without performing internal control reviews in accordance with requirements, the Air Force increases the risk that its assessment of internal control and related Statement of Assurance may not appropriately represent the effectiveness of internal control, particularly over processes related to its mission-critical assets.
What GAO Recommends

GAO is making 12 recommendations to the Air Force, which include improving its risk management practices and internal control assessments. The Air Force agreed with all 12 recommendations and cited actions to address them.
gao_GAO-19-637T
Background

Overview of Board Directors' Roles and Responsibilities

Our previous work on board diversity describes some of the different roles and responsibilities of corporate and FHLBank boards and their directors.

Public Company Corporate Boards

Generally, a public company's board of directors is responsible for managing the business and affairs of the corporation, including representing shareholders and protecting their interests. Corporate boards vary in size. According to a 2018 report that includes board characteristics of large public companies, the average board has about 11 directors. Corporate boards are responsible for overseeing management performance and selecting and overseeing the company's CEO, among other duties. Directors are compensated for their work. The board generally establishes committees to enhance the effectiveness of its oversight and focus on matters of particular concern, such as an audit committee and a nominating committee to recommend potential directors to the full board.

FHLBank Boards

Our previous reports on board diversity include a recent report on the FHLBank System. Each of its 11 federally chartered banks has a board of directors and is cooperatively owned by its member institutions, including commercial and community banks, thrifts, credit unions, and insurance companies. Each bank's board of directors is made up of directors from member institutions and independent directors (who cannot be affiliated with the bank's member institutions or recipients of loans). As of October 2018, each FHLBank board had 14-24 directors, for a total of 194 directors. The Federal Home Loan Bank Act, as amended by the Housing and Economic Recovery Act of 2008, and its regulations set forth a number of requirements for FHLBank directors, including skills, term length, and the percentage who are member and independent directors.
Benefits of Board Diversity

Research we reviewed for our prior work cited several benefits associated with board diversity. For example, academic and business research has shown that the broader range of perspectives represented in diverse groups requires individuals to work harder to come to a consensus, which can lead to better decisions. In addition, research has shown that diverse boards make good business sense because they may better reflect a company's employee and customer base, and can tap into the skills of a broader talent pool. Some research has found that diverse boards that include women may have a positive impact on a company's financial performance, but other research has not. These mixed results depend, in part, on differences in how financial performance was defined and what methodologies were used.

Our Prior Work Found Women and Minorities Were Underrepresented on Boards

Our prior work found the number of women on corporate boards and the number of women and minorities on FHLBank boards had increased, but their representation generally continued to lag behind men and whites, respectively. While the data sources, methodologies, and time frames for our analyses were different for each report, the trends were fairly consistent. In our 2015 report, we analyzed companies in the S&P 1500 and found that women's representation on corporate boards increased steadily from about 8 percent in 1997 to about 16 percent in 2014. However, despite the increase in women's representation on boards, we estimated that it could still take decades for women to achieve balance with men. When we projected the representation of women on boards into the future assuming that women join boards in equal proportion to men—a proportion more than twice what we had observed—we estimated it could take about 10 years from 2014 for women to comprise 30 percent of board directors and more than 40 years for the number of women directors to match the number of men directors (see fig.
1). Similarly, in our 2019 report on FHLBank board diversity, we found that the share of women board directors increased from 2015 to October 2018 but that women still comprised less than 25 percent of FHLBank board directors as of 2018 (see fig. 2). Our 2019 FHLBank board report also showed an increase in FHLBank directors from 2015 to 2017 for some minority groups, including African-American, Hispanic, and Asian, but they still reflected a small portion of these boards. Further, the size of the increases in minority directors on FHLBank boards was less clear than for women directors due to incomplete data on directors' race and ethnicity (see fig. 3).

Various Factors May Hinder Board Diversity

In 2015 and 2019, we identified similar factors that contributed to lower numbers of women and minorities on corporate and FHLBank boards. Notably, stakeholders, board members, and others we interviewed said three key factors generally limited greater board diversity: (1) not prioritizing diversity in recruitment efforts, (2) limitations of the traditional board candidate pipeline, and (3) low turnover of board seats.

Not Prioritizing Diversity in Recruitment Efforts

In our reports on corporate and FHLBank board diversity, we found that not prioritizing diversity in recruiting efforts was contributing to a lack of women and minority candidates represented on these boards. For example, stakeholders told us board directors frequently relied on their personal networks to identify potential board candidates. Some stakeholders said that given most current board members are men, and people's professional networks often resemble themselves, relying on their own networks is not likely to identify as many women board candidates. In our 2019 report on FHLBank board diversity, stakeholders we interviewed raised similar challenges to prioritizing diversity in recruitment efforts.
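The slow pace of change behind the figure 1 projection can be illustrated with a simple turnover model: if a fixed fraction of board seats turns over each year and women fill half of all vacancies, the share of women directors converges only gradually toward parity. The sketch below is a hypothetical illustration, not GAO's actual methodology; in particular, the 5 percent annual seat-turnover rate is an assumed parameter chosen for illustration.

```python
# Illustrative sketch (not GAO's model): project the share of women board
# directors assuming women fill half of all newly vacated seats each year.
# The 5 percent annual seat-turnover rate is a hypothetical assumption.

def project_share(start_share, new_hire_share, turnover, years):
    """Return the projected share after `years` of constant turnover."""
    share = start_share
    for _ in range(years):
        # Each year, a `turnover` fraction of seats is refilled, and
        # `new_hire_share` of those new directors are women.
        share = share * (1 - turnover) + new_hire_share * turnover
    return share

start = 0.16          # ~16 percent women directors in 2014 (S&P 1500)
parity_hiring = 0.50  # women join boards in equal proportion to men
turnover = 0.05       # assumed annual board-seat turnover (hypothetical)

for years in (10, 40):
    share = project_share(start, parity_hiring, turnover, years)
    print(f"After {years} years: {share:.1%} women directors")
```

Under these assumptions the model lands near 30 percent at 10 years and remains well short of parity even after 40 years, consistent with the pattern the report describes: because only a small fraction of seats opens up each year, even parity in new appointments moves the overall share slowly.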
Some FHLBank representatives said that member institutions—which nominate and/or vote on director candidates—may prioritize other considerations over diversity, such as a candidate's name recognition. Stakeholders we interviewed for our 2015 report suggested other recruitment challenges that may hinder women's representation on corporate boards. For example, stakeholders said that boards need to prioritize diversity during the recruiting process because unconscious biases—attitudes and stereotypes that affect our actions and decisions in an unconscious manner—can limit diversity. One stakeholder observed that board directors may have a tendency to seek out individuals who look or sound like they do, further limiting board diversity. In addition, our 2015 report found some indication that board appointments of women slow down once one or two women are on a board. A few stakeholders expressed some concern over boards that might add a woman to appear as though they are interested in board diversity without really making diversity a priority, sometimes referred to as "tokenism."

Limitations of the Traditional Board Candidate Pipeline

Our reports on corporate and FHLBank board diversity also identified challenges related to relying on traditional career pipelines to identify potential board candidates—pipelines in which women and minorities are also underrepresented. Our 2015 report found that boards often appoint current or former CEOs to board positions, and that women held less than 5 percent of CEO positions in the S&P 1500 in 2014. One CEO we interviewed said that as long as boards limit their searches for directors to women executives in the traditional pipeline, boards will have a difficult time finding women. Expanding board searches beyond the traditional sources, such as CEOs, could increase qualified candidates to include those in other senior-level positions such as chief financial officers or chief human resources officers.
In 2019 we reported that FHLBank board members said they also experienced challenges identifying diverse board candidates within the traditional CEO talent pipeline. Stakeholders we interviewed cited overall low levels of diversity in the financial services sector, for example, as a challenge to improving board diversity. Some bank representatives said the pipeline of eligible women and minority board candidates is small. Several FHLBank directors said the requirements to identify candidates from within corresponding geographic areas may exacerbate challenges to finding diverse, qualified board candidates in certain areas of the country. By statute, candidates for a given FHLBank board must come from member institutions in the geographic area represented by the vacant board seat. Similarly, in 2011 we reported on Federal Reserve Bank directors and found they tended to be senior executives, a subset of management that is also less diverse. Our report also found that diversity varied among Federal Reserve districts, and candidates for specific board vacancies must reside in specific districts. The challenge of recruiting board candidates from within specific professional backgrounds or geographic regions is further compounded by competition for talented women and minority board candidates, according to some stakeholders. In 2019, board directors from several FHLBanks described this kind of competition. For example, a director from one bank said his board encouraged a woman to run for a director seat, but the candidate felt she could not because of her existing responsibilities on the boards of two publicly traded companies. We heard of similar competition from Federal Reserve Bank officials in 2011, who said organizations were looking to diversify their boards but were competing with private corporations for the same small pipeline of qualified individuals.
Low Turnover of Board Seats Each Year

The relatively small number of board seats that become available each year also contributes to the slow increase in women's and minorities' representation on boards. Several stakeholders we interviewed for our 2015 report on corporate boards cited low board turnover, in large part due to the long tenure of most board directors, as a barrier to increasing women's representation. In addition, with respect to FHLBank board diversity, Federal Housing Finance Agency staff acknowledged that low turnover and term lengths were challenges. Several stakeholders we interviewed for our 2019 report on FHLBank boards said balancing the need for board diversity with retaining institutional knowledge creates some challenges to increasing diversity. One director said new board directors face a steep learning curve, so it can take some time for board members to be most effective. As a result, the directors at some banks will recruit new directors only after allowing incumbent directors to reach their maximum terms, which can be several years.

Potential Strategies for Increasing Board Diversity

Just as our 2015 and 2019 reports found similar challenges to increasing the number of women and minorities on corporate and FHLBank boards, they also describe similar strategies to increase board diversity. While the stakeholders, researchers, and officials from organizations knowledgeable about corporate governance and FHLBank board diversity we interviewed generally agreed on the importance of diverse boards and many of the strategies to achieve diversity, many noted that there is no one-size-fits-all solution to increasing diversity on boards, and in some cases highlighted advantages and disadvantages of various strategies.
Based on the themes identified in our prior work, strategies for increasing board diversity generally fall into three main categories—making diversity a priority; enlarging the pipeline of potential candidates; and addressing the low rate of turnover (see fig. 4).

Making Diversity a Priority

Setting voluntary targets. Several strategies we identified in our 2015 report encouraged or incentivized boards to prioritize diversity. These strategies include setting voluntary targets for the number or proportion of women or minorities to have on the board. Many stakeholders we interviewed for our prior work supported boards setting voluntary targets for a specific number or percentage of women and minority candidates rather than externally imposed targets or quotas.

Requiring a diverse slate of candidates. Many stakeholders we interviewed supported a requirement by corporate boards that a slate of candidates be diverse. A couple of stakeholders specifically suggested that boards should aim for slates that are half women and half men; two other stakeholders said boards should include more than one woman on a slate of candidates so as to avoid tokenism. Tokenism was also a concern for a few of the stakeholders who were not supportive of defining the composition of slates.

Filling interim board seats with women or minority candidates. Our 2019 report included strategies for making diversity a priority for FHLBank boards. For example, some FHLBank directors and Federal Housing Finance Agency staff said filling interim board seats with women and minority candidates could increase diversity. By regulation, when a FHLBank director leaves the board mid-term, the directors may elect a replacement for the remainder of his or her term. One director we interviewed said that when a woman or minority director fills an interim term, the likelihood increases that he or she will be elected by the member institutions for a subsequent full term.
Emphasizing the importance of diversity and diverse candidates. Our 2015 report found that emphasizing the importance of diversity and diverse candidates was important for promoting board diversity. Almost all of the stakeholders we interviewed indicated that CEOs or investors and shareholders play an important role in promoting diversity on corporate boards. For example, one stakeholder said CEOs can "set the tone at the top" by encouraging boards to prioritize diversity efforts and acknowledging the benefits of diversity. As we reported in 2019, FHLBanks have taken several steps to emphasize the importance of board diversity. For example, all 11 FHLBanks included statements in their 2017 election announcements that encouraged voting member institutions to consider diversity during the board election process. Six of the 11 banks expressly addressed gender, racial, and ethnic diversity in their announcements. In addition, we found that FHLBanks had developed and implemented strategies that target board diversity in general and member directors specifically. For example, the banks created a task force to develop recommendations for advancing board diversity and to enhance collaboration and information sharing across FHLBank boards. Each bank is represented on the task force. Directors we interviewed from all 11 FHLBanks said their banks conducted or planned to conduct diversity training for board directors, which included topics such as unconscious bias.

Mentoring women and minority board candidates. In addition, several stakeholders we interviewed about corporate and FHLBank boards noted the importance of CEOs serving as mentors for women and minority candidates and sponsoring them for board seats. For example, conducting mentoring and outreach was included as a strategy in our 2019 report for increasing diversity on FHLBank boards, including current directors pledging to identify and encourage potential women and minority candidates to run for the board.
One director we interviewed said he personally contacted qualified diverse candidates and asked them to run. Another director emphasized the importance of outreach by member directors to member institutions to increase diversity on FHLBank boards. Member directors have the most interaction with the leadership of member institutions and can engage and educate them on the importance of nominating and electing diverse directors to FHLBank boards.

Improving information on board diversity. As we reported in 2015, several large investors and many stakeholders we interviewed supported improving federal disclosure requirements on board diversity. In addition to increasing transparency, some organization officials and researchers we interviewed said disclosing information on board diversity could cause companies to think about diversity more. While the SEC aims to ensure that companies provide material information to investors that they need to make informed investment and voting decisions, we found information companies disclose on board diversity is not always useful to investors who value this information. SEC leaves it up to companies to define diversity. As a result, there is variation in how much and the type of information companies provide publicly. Some companies choose to define diversity as including characteristics such as relevant knowledge, skills, and experience. Others define diversity as including demographic characteristics such as gender, race, or ethnicity (see fig. 5). In February 2019, SEC issued new guidance on its diversity disclosure requirements, which aims to clarify the agency's expectations for what information companies include in their disclosures. Nearly all of the stakeholders we interviewed for our 2015 report said investors also play an important role in promoting diversity on corporate boards.
For example, almost all of the board directors and CEOs we interviewed said investors or shareholders can influence board diversity by exerting pressure on the companies they invest in to prioritize diversity when recruiting new directors. One board director we interviewed said boards listen to investors more than anyone else. For example, there have been recent news reports of investor groups voting against all candidates for board positions when the slate of candidates is not diverse. In addition, in 2019 we recommended that the Federal Housing Finance Agency, which has regulatory authority over FHLBanks, review FHLBanks' data collection processes for demographic information on their boards. By obtaining a better understanding of the different processes FHLBanks use to collect board demographic data, the Federal Housing Finance Agency and the banks could better determine which processes or practices contribute to more complete data. More complete data could ultimately help increase transparency on board diversity and would allow them to effectively analyze data trends over time and demonstrate the banks' efforts to maintain or increase board diversity. The Federal Housing Finance Agency agreed with this recommendation and said it intends to engage with FHLBanks' leadership to discuss board data collection issues. The agency also stated that it plans to request that the FHLBank Board Diversity Task Force explore the feasibility and practicability for FHLBanks to adopt processes that can lead to more complete data on board director demographics.

Enlarging the Pipeline of Potential Board Candidates

Expanding board searches beyond CEOs. Expanding searches for potential board members is yet another strategy for increasing board diversity, as we reported in 2015 and 2019. Almost all the stakeholders we interviewed supported expanding board searches beyond the traditional pipeline of CEO candidates to increase representation of women.
Several stakeholders suggested that boards recruit high-performing women in other senior-level positions or look to candidates in academia or the nonprofit and government sectors. Our 2015 analysis found that if boards expanded their director searches beyond CEOs to include senior-level managers, more women might be included in the candidate pool. Our 2019 report on FHLBank board diversity also included looking beyond CEOs as a strategy for increasing diversity. For example, we reported that FHLBanks can search for women and minority candidates by looking beyond member bank CEOs. By regulation, member directors can be any officer or director of a member institution, but there is a tendency to favor CEOs for board positions, according to board directors, representatives of corporate governance organizations, and academic researchers we interviewed for the report. Similar to the findings from our 2015 report, the 2019 report found that the likelihood of identifying a woman or minority candidate increases when member institutions look beyond CEOs to other officers, such as chief human resources officers. Several directors of FHLBanks also reported hiring a search firm or consultant to help them identify women and minority candidates, which is a strategy that can be used to enlarge the typical pool of applicants.

Addressing the Low Rate of Turnover

Adopting term limits or age limits. Several stakeholders discussed adopting term or age limits to address low turnover of board members. Most stakeholders we interviewed for our 2015 report were not in favor of adopting term limits or age limits, and several pointed out trade-offs. For example, one CEO we interviewed said directors with longer tenure often possess invaluable knowledge about a company that newer board directors do not have. Many of the stakeholders who opposed these strategies noted that term and age limits seem arbitrary and could result in the loss of high-performing directors.

Expanding board size.
Several stakeholders we interviewed supported expanding board size either permanently or temporarily so as to include more women. Some stakeholders noted that expanding board size might make sense when a board is smaller, but expressed concern about challenges associated with managing large boards.

Evaluating board performance. Another strategy we identified in our 2015 report to potentially help address low board turnover and in turn increase board diversity was conducting board evaluations. Many stakeholders we interviewed generally agreed it is good practice to conduct evaluations of the full board or of individual directors, or to use a skills matrix to identify skills gaps. However, a few thought evaluation processes could be more robust. Others said that board dynamics and culture can make it difficult to use evaluations as a tool to increase turnover by removing under-performing directors from boards. Several stakeholders we interviewed discussed how it is important for boards to identify skills gaps and strategically address them when a board vacancy occurs, and one stakeholder said identifying such gaps could help boards think more proactively about finding diverse candidates. The National Association of Corporate Directors has encouraged boards to use evaluations not only as a tool for assessing board director performance, but also as a means to assess board composition and gaps in skill sets. Chairwoman Waters, Ranking Member McHenry, and Members of the Committee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time.

GAO Contact and Staff Acknowledgments

If you or your staff have any questions about this testimony, please contact Chelsa Gurkin, Acting Director of Education, Workforce, and Income Security, at (202) 512-7215 or GurkinC@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement.
GAO staff who made key contributions to this testimony include Betty Ward-Zukerman (Assistant Director), Meredith Moore (Analyst-in-Charge), Ellie Klein, and Chris Woika. In addition, key support was provided by Susan Aschoff, James Bennett, Ben Bolitzer, Ted Burik, Michael Erb, Daniel Garcia-Diaz, Monika Gomez, Kay Kuhlman, Sheila McCoy, Anna Maria Ortiz, James Rebbe, Karen Tremba, and Walter Vance.

Enclosure I: Related GAO Products

Financial Services Industry: Representation of Minorities and Women in Management and Practices to Promote Diversity, 2007–2015. GAO-19-398T. Washington, D.C.: February 27, 2019.

Federal Home Loan Banks: Steps Have Been Taken to Promote Board Diversity, but Challenges Remain. GAO-19-252. Washington, D.C.: February 14, 2019.

Diversity in the Technology Sector: Federal Agencies Could Improve Oversight of Equal Employment Opportunity Requirements. GAO-18-69. Washington, D.C.: November 16, 2017.

Financial Services Industry: Trends in Management Representation of Minorities and Women and Diversity Practices, 2007–2015. GAO-18-64. Washington, D.C.: November 8, 2017.

Corporate Boards: Strategies to Address Representation of Women Include Federal Disclosure Requirements. GAO-16-30. Washington, D.C.: December 3, 2015.

Federal Home Loan Banks: Information on Governance Changes, Board Diversity, and Community Lending. GAO-15-435. Washington, D.C.: May 12, 2015.

Diversity Management: Trends and Practices in the Financial Services Industry and Agencies after the Recent Financial Crisis. GAO-13-238. Washington, D.C.: April 16, 2013.

Federal Reserve Bank Governance: Opportunities Exist to Broaden Director Recruitment Efforts and Increase Transparency. GAO-12-18. Washington, D.C.: October 19, 2011.

Women in Management: Female Managers’ Representation, Characteristics, and Pay. GAO-10-1064T. Washington, D.C.: September 28, 2010.

Financial Services Industry: Overall Trends in Management-Level Diversity and Diversity Initiatives, 1993–2008. GAO-10-736T.
Washington, D.C.: May 12, 2010.

Financial Services Industry: Overall Trends in Management-Level Diversity and Diversity Initiatives, 1993–2004. GAO-06-617. Washington, D.C.: June 1, 2006.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study

Corporate boards take actions and make decisions that not only affect the lives of millions of employees and consumers, but also influence the policies and practices of the global marketplace. Many organizations and businesses have recognized the importance of recruiting and retaining women and minorities for key positions to improve performance and better meet the needs of a diverse customer base. Academic researchers and others have highlighted how diversity among board directors increases the range of perspectives for better decision making, among other benefits. Prior GAO reports have found challenges to increasing diversity on boards and underscored the need to identify strategies that can improve or accelerate efforts to boost representation of women and minorities. These include reports examining the diversity of publicly-traded company boards and the boards of federally chartered banks, such as the FHLBanks. This statement is based on two GAO reports, issued in December 2015 and February 2019, on the representation of women on corporate boards and the representation of women and minorities on the boards of FHLBanks, respectively. Information about the scope and methodologies used can be found in the original reports. This statement focuses on (1) the extent of diversity on such boards, (2) factors that hinder diversity on these boards, and (3) strategies to promote board diversity on corporate and FHLBank boards.

What GAO Found

Prior GAO reports found limited diversity on both publicly-traded company boards (corporate boards) of directors and Federal Home Loan Bank (FHLBank) boards. For example, GAO's 2019 report on FHLBank boards found women's board representation was at 23 percent in 2018; in 2015 it had been 18 percent.
In a 2015 report on corporate boards, GAO projected the representation of women into the future—assuming that women join boards in equal proportion to men—and estimated it could take more than 40 years for the number of women directors to match the number of men directors. GAO's report on FHLBank boards also showed an increase since 2015 in FHLBank directors from some minority groups, including African-American, Hispanic, and Asian directors, but these groups still made up a small portion of these boards. The size of the increases in minority directors on FHLBank boards was less clear than for women directors due to incomplete board member demographic data. Similar factors may limit corporate and FHLBank boards' efforts to increase diversity, according to stakeholders, board members, and others GAO interviewed. These factors include not prioritizing diversity in board recruitment efforts, limitations of the traditional board candidate pipeline, and low turnover of board seats. GAO identified a number of strategies for increasing the representation of women and minorities on corporate and FHLBank boards based on a review of relevant literature and discussions with researchers and corporate and government officials (see figure).
Background

Signed into law on May 9, 2014, the DATA Act required OMB, or an agency it designated, to establish a pilot program to facilitate the development of recommendations for (1) standardized reporting elements across the federal government, (2) elimination of unnecessary duplication in financial reporting, and (3) reduction of compliance costs for recipients of federal awards. To meet these requirements, OMB established a pilot program with two components—one that focused on federal grants and another on federal contracts (procurement). OMB designated HHS as the executing agency of the grants portion of the Section 5 Pilot with oversight from OFFM. OFPP was responsible for designing and leading the procurement portion of the pilot focusing on reporting of Federal Acquisition Regulation (FAR) procurement requirements. OFPP collaborated with the Chief Acquisition Officers Council and GSA on specific aspects of implementation, including the development of the Central Reporting Portal, a reporting tool which is intended to centralize FAR reporting. See figure 1 for a timeline of the activities undertaken by the grants and procurement portions of the pilot as well as deadlines required by the act. As part of our ongoing oversight of the DATA Act’s implementation, we have monitored OMB’s efforts to meet its statutory requirements related to the Section 5 Pilot. In April 2016, we reported on the design plans for the Section 5 Pilot. We found that HHS’s design for the grants portion of the pilot was generally on track to meet statutory requirements and partially adhered to leading pilot design practices. However, we also reported that the procurement portion was not on track to meet requirements, and that its plans did not follow leading pilot design practices. In response to a recommendation in our report, OMB revised its plan for the procurement portion to better reflect leading practices for pilot design identified in our April 2016 report.
These changes included more fully documenting its data collection plans and including a sampling plan to meet diversity requirements for pilot participants. According to OMB staff, the ongoing work and related grants guidance resulting from the Section 5 Pilot reflects a broader strategy for reducing federal recipient reporting burden that is outlined in the President’s Management Agenda (PMA). Released in March 2018, and led by the Executive Office of the President and the President’s Management Council, PMA is a strategy to modernize how federal agencies deliver mission outcomes and provide services in three key areas: (1) modern information technology; (2) data, accountability, and transparency; and (3) the workforce for the 21st Century. Several Cross-Agency Priority (CAP) goals include PMA’s milestones and activities. These CAP goals identify opportunities for multiple agencies to collaborate on government-wide efforts and report on goal progress quarterly. Two of these, CAP Goals 5 and 8, include strategies for reducing federal award recipient reporting burden. OMB staff told us that some of the findings from the Section 5 Pilot and recommendations from their subsequent report to Congress informed the focus of these CAP goals. For example, according to OMB staff, the grants portion of the Section 5 Pilot focused on identifying how changes in grants data collection and grant management may reduce federal recipient reporting burden. PMA CAP Goal 8 is described as building on these efforts by shifting the focus toward the life cycle of grants management and standardizing grants management activities using agile technology.

Section 5 Pilot Met Many but Not All Statutory Requirements

We determined that the Section 5 Pilot fully met three of the DATA Act’s statutory requirements, substantively met one, and partially met two others.
The Section 5 Pilot fully met the following statutory requirements: (1) that pilot data collection cover a 12-month reporting cycle; (2) timely issuance of OMB’s report to Congress in August of 2017 to select congressional committees; and (3) that the report to Congress contain a discussion of any needed legislative actions as well as recommendations related to automating and streamlining aspects of federal financial reporting to reduce the reporting burden of federal award recipients. We found that the pilot also substantively met the requirement that the pilot program include a combination of federal award recipients and an aggregate value of awards of not less than $1 billion but not more than $2 billion. Although the $122 billion in grants included in the pilot greatly exceeded the upper bound, this was principally a result of the decisions by OFFM and HHS to pilot different test models for reducing reporting burden, and to include a wide range of different types of grants. The total value of grant awards exceeded the amount envisioned by the act. OMB’s August 2017 report stated that the decision to go beyond the minimum requirement of testing one approach was made in the interest of achieving the DATA Act’s objective to identify ways to reduce reporting burden as well as the effect this decision would have on the aggregate value of grants sampled. We believe that the pilot substantively met this requirement and did not identify any negative effects related to the larger aggregate value of grants, contracts, and subawards included in the grants portion of the pilot. We found that the approach followed by OMB and HHS furthered the broader objective identified by this section of the act. In addition, we determined that the pilot partially met two of the act’s requirements. 
The first concerns the act’s requirement that OMB’s report to Congress include a description of the data collected, the usefulness of the data provided, and the cost to collect pilot data from participants. The report that OMB issued to Congress in August 2017 included information on the first two of these but only partly addressed the third. Specifically, it contained cost information for only the grants portion of the pilot, stating that the cost associated with executing this portion during fiscal years 2015 through 2017 was more than $5.5 million. The report did not contain any cost information on the procurement portion of the pilot. The DATA Act also required that OMB issue guidance to agencies for reducing reporting burden for federal award recipients—including both grantees and contractors—but the guidance subsequently issued only pertained to the grants community. We determined that OMB only partially met this requirement. On September 5, 2018, OMB issued M-18-24: Strategies to Reduce Grant Recipient Reporting Burden. Among other things, this memorandum contained guidance to federal agencies making the SF-424B form optional based on findings from the grants portion of the pilot. Form SF-424B is used by grantees to document assurances regarding their compliance with a wide range of rules and regulations. Figure 2 summarizes our assessment.

The Grants Portion of the Pilot Identified Several Ways to Reduce Reporting Burden and Provided Support for Government-Wide Streamlining Efforts

All Six Grant Test Models Reported Evidence of Reducing Burden, Increasing Accuracy, or Both

As the agency designated by OMB to execute the grants portion of the Section 5 Pilot, HHS developed and analyzed six “test models” to determine if adopting the proposed changes would contribute to the pilot program’s objectives of reducing reporting burden and duplication.
These test models examined a variety of grant reporting issues that HHS had identified as presenting challenges. All but one of the test models, the Common Data Element Repository (CDER) Library 2, based their findings on data collected from grantees. The text box below provides high-level summaries of each of the six models. Additional details on the approach followed for each model, as well as reported results, can be found in appendix II.

OMB Used Findings from the Grants Portion of the Pilot to Support Recommendations and Government-wide Guidance for Reducing Grantee Reporting Burden

OMB’s August 2017 report to Congress on the findings of the Section 5 Pilot contained three broad recommendations and stated that OMB plans to take action on these recommendations. These recommendations covered (1) standardizing core data elements, (2) eliminating duplication through auto-population of data, and (3) leveraging information technology open data standards to develop new tools across the federal government. We found that evidence from the grant test models supported all three recommendations for streamlining federal reporting discussed in the report. For example, OMB recommended that its staff standardize core data elements used for managing federal financial assistance awards based on reductions in administrative burden experienced in the CDER Library 1 test model. In another example, four test models supported OMB’s recommendation for increased use of data auto-population from existing federal data sources as a way to reduce duplication in reporting. Findings from the grants portion of the Section 5 Pilot also provided support for government-wide efforts to streamline reporting and reduce recipient reporting burden. These include OMB’s memorandum M-18-24: Strategies to Reduce Grant Recipient Reporting Burden, which discusses efforts to automate and centralize grant management processes.
Among other things, M-18-24 requires that federal agencies evaluate the systems and methods currently used to collect information from grant recipients to eliminate duplicative data requests. OMB staff confirmed that M-18-24 incorporates findings from some of the test models of the grants portion of the pilot, such as the Single Audit test model, which examined reducing duplicative reporting of grant recipients’ data. The efforts to reduce duplicative reporting in M-18-24 also align with OMB’s recommendation in its August 2017 report to Congress to eliminate unnecessary duplication in reporting by leveraging information technology that can auto-populate from existing data sources. In addition, OMB staff told us that findings from the grants portion of the pilot contributed to broader, government-wide initiatives related to federal reporting. For example, according to OMB staff, the three recommendations from the August 2017 report to Congress are reflected in CAP Goal 8 of the President’s Management Agenda, which focuses on results-oriented accountability for grants. These OMB staff also told us that findings from the grants portion of the pilot informed two CAP Goal 8 strategies. For example, the CAP Goal 8 grants management strategy focuses on standardizing grants management business processes and data. OMB developed a comprehensive taxonomy for core grants management data standards that is currently available for public comment. In addition, a second strategy focuses on incorporating a risk-based performance management approach to metrics in grant award operations to determine low-risk and high-value federal awards. CAP Goal 8 also states plans to streamline the 2019 Single Audit Compliance Supplement to focus on requirements that inform grant award performance.
Procurement Portion of Pilot Did Not Result in Sufficient or Appropriate Data to Assess Changes in Contractors’ Burden Reduction

Lack of Contractor Participation and the Absence of Iterative and Ongoing Stakeholder Engagement Limited the Ability of Procurement Pilot to Achieve its Objectives

Unlike the grants portion of the pilot, the procurement portion did not result in data collection that could be used for an evidence-based assessment of ways to reduce reporting burden. OMB’s Office of Federal Procurement Policy (OFPP) sought to assess five test models that, according to the report to Congress, were essential to centralized procurement reporting. However, the pilot did not fully test any of the hypotheses associated with those test models. The reasons for not testing the hypotheses included a lack of contractor participation and a lack of iterative and ongoing stakeholder participation and engagement throughout the course of the pilot. See appendix III for additional information regarding the various procurement test models, associated hypotheses, and additional details regarding our assessment. The procurement portion of the pilot focused entirely on the development and testing of a central reporting portal to consolidate FAR reporting requirements. According to OFPP staff, the pilot intended to eventually identify ways to centralize a wide range of reporting requirements that contractors currently meet through decentralized methods. Contractors must report many types of information depending on the contract. Toward that end, OFPP, with the assistance of GSA, created a procurement reporting website called the Central Reporting Portal. To test the efficacy of this portal for reducing burden, OFPP initially decided to examine how well it handled a specific FAR reporting requirement—the reporting of payroll data in accordance with the Davis-Bacon Act.
According to pilot plans, Davis-Bacon reporting requirements were selected because they were identified by contractors as “pain points” during initial stakeholder outreach conducted in 2014 and 2015. OFPP planned to collect and analyze 1 year of weekly Davis-Bacon wage reporting data from at least 180 contractors through the Central Reporting Portal to identify how centralized reporting might reduce contractor reporting burden. However, during the 12-month procurement data collection period, no contractors agreed to submit their Davis-Bacon data as part of the pilot. Consequently, OFPP did not collect any wage data. Despite OFPP stating in its plans and reiterating to us as late as September 2017 that it expected to be able to secure at least 180 pilot participants, only one contractor expressed interest in reporting its Davis-Bacon information using the portal. This contractor withdrew from the pilot before submitting any data through the Central Reporting Portal. OFPP staff told us they were aware of the potential for low pilot participation for Davis-Bacon reporting when pilot testing began in February 2017 because contractors already had established processes for fulfilling the highly complex Davis-Bacon reporting requirements, and pilot participation was optional. According to GSA contracting staff, the one contractor who initially expressed interest ultimately decided not to participate because the format in which the contractor tracked and reported payroll data was incompatible with that used by the pilot portal, resulting in additional burden. However, it was not until August 2017—approximately 7 months into its year-long data collection period—that specific steps were taken to address the fact that the procurement portion of the pilot had not collected any data from Davis-Bacon contractors. During this period OFPP did not conduct pilot outreach activities with the contractors, who were key to successful implementation of the pilot.
OFPP staff told us that at the time of the pilot launch they learned that contractors were interested in having the Central Reporting Portal be able to communicate with third-party payroll reporting systems to automate reporting. OFPP staff said that although they are exploring this possibility, it was not a capability that was included as part of the pilot. Had this type of feedback on stakeholder needs been obtained sooner, OMB could have explored the feasibility of adding this capability to the portal or engaged in communication with stakeholders to develop alternate approaches that might have persuaded more contractors to participate. The usefulness of iterative and ongoing communication is recognized by the Standards for Internal Control in the Federal Government. Those standards state that management should use quality information to achieve its objectives, and that management should collect quality information by engaging with stakeholders through iterative and ongoing processes and in a timely manner. In this case, key stakeholders include relevant agencies, contracting officials, and contractors using the system. OFPP’s plan for the procurement portion of the pilot recognized the importance of stakeholder engagement, stating that, to include a diverse group of recipients in the pilot, they should identify eligible participants for the pilot, conduct outreach to identify participants, and repeat this process as necessary until they achieved the sample necessary to test the Central Reporting Portal. However, as previously stated, no contractors agreed to submit their Davis-Bacon data as part of the pilot, and OFPP did not repeat this outreach process to obtain the necessary sample size. Such interactions could have provided important information on contractors’ needs and concerns that OFPP could have used to inform their decisions regarding the pilot’s implementation.
Expansion of Procurement Pilot to Include Hydrofluorocarbon Reporting Had Limitations

In November 2017, OFPP expanded the type of data accepted by the pilot to include hydrofluorocarbon (HFC) reporting, a new FAR reporting requirement. However, this choice had limitations in its suitability for providing useful data for testing the hypotheses of the five procurement test models. Unlike Davis-Bacon reporting, where contractors submit weekly reports, HFC is an annual reporting requirement for contractors that emit HFC gases over a certain threshold. The Central Reporting Portal is the only location where contractors can submit HFC reporting. For the purposes of the pilot, the Central Reporting Portal accepted HFC submissions from November 2017 through February 2018. During the pilot, 11 HFC annual reports were submitted to the portal (see figure 3). As a result of the small number of reports collected, OMB collected much less data than it had initially expected to receive to test the capabilities of the Central Reporting Portal. If the procurement portion of the pilot had been executed as planned, it could have theoretically resulted in 9,360 Davis-Bacon submissions for analysis. A larger data set of contractors’ experiences using the Central Reporting Portal could have informed OMB’s decision-making process through analysis of more, and potentially more varied, data. In addition to the small number of submitted HFC annual reports, the decision to switch to using HFC data had another limitation. These data could not be used to examine changes in reporting burden as a result of using the Central Reporting Portal. This is because HFC reporting was a new reporting requirement, and as such, it did not have an established reporting process to use as a point of comparison to assess changes in reporting burden. The objective of the procurement pilot was to assess how centralized reporting can reduce reporting burden.
This objective could not be achieved without data on the existing reporting burden.

OMB’s Recommendations for Streamlining Reporting Were Not Supported by Findings from the Procurement Portion of the Pilot

Evidence from the procurement portion of the pilot did not support OMB’s government-wide recommendations for reducing reporting burden in its August 2017 report to Congress. As previously stated, OMB’s report to Congress included three recommendations that focused on (1) standardizing core data elements, (2) eliminating duplication by using data auto-population, and (3) leveraging information technology open standards to develop new tools. As support for the first recommendation, the report stated that results from the procurement pilot test models demonstrated that standard data elements—coupled with uniform data adoption—and the ability to centrally collect and share information reduces administrative burden. Since the procurement portion of the pilot did not gather or analyze any pilot data from the Davis-Bacon participants, OMB did not assess the extent to which the ability to centrally collect data actually reduces burden. Recommendation two stated that support from the procurement test model demonstrated that recipient burden is reduced when identical data can be entered once in one place and reused. However, the HFC data collection process did not reuse data when capturing information and did not have the ability to auto-populate data. HFC data collection was the only part of the procurement portion of the pilot that collected information that could have been used to inform this recommendation. According to OFPP staff, the Davis-Bacon portion of the portal had the capability to auto-populate data. However, no Davis-Bacon data were collected that would have allowed quantification of the effects of reusing data on reporting burden. OMB stated that support for the third recommendation included data and information collected from the pilot.
Although there was some consultation with stakeholders during initial planning and design of the procurement portion of the pilot and the early development of the portal, the pilot did not collect any data related to this recommendation during its data gathering and analysis phase, either from Davis-Bacon contractors or through the HFC portion of the pilot.

OMB Plans to Expand Use of the Central Reporting Portal to Streamline Reporting of FAR Requirements

In August 2018, OMB announced plans to expand the use of the Central Reporting Portal for FAR reporting, stating that the portal allows contractors to report data to one central location. OFPP staff told us that they are considering centralizing a third FAR requirement using the portal in the future but have not yet determined what that will be. As discussed above, the procurement portion of the pilot did not collect sufficient data to test the effect of the portal on reporting burden. In addition, the plan for the procurement portion states that OFPP intended to analyze feedback on pilot data collection and, depending on that feedback, decide whether to expand the pilot to other FAR reporting requirements. However, the pilot did not collect any such feedback to inform its determination to expand the Central Reporting Portal in the future. As a result, OFPP has limited information regarding issues that could affect expanded use of the Central Reporting Portal. In the absence of such information, it is difficult for OFPP to determine whether continued or expanded use of the Central Reporting Portal will reduce reporting burden, and which additional FAR requirements, if any, to include.

Conclusions

To reduce the burden and cost of reporting for recipients of federal funds, Congress included specific provisions in the DATA Act to encourage OMB to take a deliberate and evidence-based approach toward developing guidance for federal agencies in this area.
The Section 5 Pilot offered OMB a valuable opportunity—namely, to test a variety of methods and techniques at a small scale before applying them more widely. Such a process may enhance the quality, credibility, and usefulness of evaluations in addition to helping to ensure that time and resources are used more effectively. Similar to what we found when we analyzed the design of the Section 5 Pilot in 2016, our review of its implementation and the results it produced found differences between the grant and procurement portions. OMB and HHS designed and executed a robust grants portion of the pilot that tested several different approaches for reducing the reporting burden experienced by federal grant recipients. The resulting findings were used to develop OMB’s government-wide recommendations, and to inform two subsequent goals in the 2018 President’s Management Agenda related to reducing recipient reporting burden. In contrast, OMB did not fully implement the procurement portion of the pilot consistent with its plans. The procurement portion did not collect data to test the hypotheses associated with any of its five test models, and therefore could not provide empirical support for either OMB’s government-wide recommendations or guidance related to reducing reporting burden. Among the factors responsible for this were the lack of Davis-Bacon contractor participation and OMB’s inability to find a suitable alternative. OMB has announced its intention to expand centralized reporting for FAR requirements across government. In the absence of timely information regarding the needs and concerns of stakeholders, OMB faces the risk of experiencing implementation challenges similar to those it experienced during the pilot. 
Although the use of a centralized reporting portal could ultimately prove useful for reducing burden, the lack of information from stakeholders—including the contractors who would use it—raises concerns about the future success of plans for expanding the Central Reporting Portal.

Recommendation for Executive Action

The Director of OMB should ensure that information is collected regarding how centralized reporting of procurement requirements might reduce recipient reporting burden—including input from stakeholders such as contractors through an iterative and ongoing process—to inform OMB’s planned expansion of the Central Reporting Portal.

Agency Comments and Our Evaluation

We provided a draft of this report to OMB, HHS, and GSA for review and comment. HHS and GSA informed us that they had no comments. OMB provided technical comments, which we incorporated as appropriate. OMB neither agreed nor disagreed with our recommendation. We are sending copies of this report to the appropriate congressional committees, the Secretary of Health and Human Services, the Acting Director of OMB, the Administrator of GSA, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at 202-512-6806 or sagerm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of our report. GAO staff who made key contributions to this report are listed in appendix IV.

Appendix I: Objectives, Scope, and Methodology

This report assesses the extent to which (1) the Section 5 Pilot met the statutory requirements of the act, (2) the grants portion of the Section 5 Pilot demonstrated changes in federal award recipients’ reporting burden, and (3) the procurement portion of the Section 5 Pilot demonstrated changes in federal award recipients’ reporting burden.
To assess the extent to which the pilot met statutory requirements, we reviewed section 5 of the Federal Funding Accountability and Transparency Act of 2006, as amended by the Digital Accountability and Transparency Act of 2014, to determine the legal requirements set forth in the act pertaining to establishing, designing, and executing the Section 5 Pilot. We compared these requirements to documents from the Office of Management and Budget (OMB) and designated agencies. These documents included pilot plans for the grants and procurement portions of the pilot, OMB’s August 2017 report to Congress, M-18-23: Shifting from Low-Value to High-Value Work, and M-18-24: Strategies to Reduce Grant Recipient Reporting Burden. We also interviewed staff from agencies involved in administering and executing the pilot on how they carried out their responsibilities. These agencies included the Department of Health and Human Services (HHS), OMB’s Offices of Federal Financial Management (OFFM) and Federal Procurement Policy (OFPP), and the General Services Administration (GSA). To assess the extent to which the grants portion of the Section 5 Pilot demonstrated changes in federal award recipients’ reporting burden, we reviewed HHS’s plans. We compared the plans to information collected from the various test models throughout the pilot. The data we assessed included survey data and analyses. We also assessed whether statements on changes in grantees’ reporting burden made in OMB’s August 2017 report to Congress were supported by documentation. We did this by verifying the statements against supporting information. We determined that the pilot data we reviewed were reliable for the purposes of our work by reviewing the data, tracing them back to underlying agency source documents, and interviewing relevant agency staff. We also interviewed OFFM staff and HHS officials on how the grants portion of the pilot was executed. 
To assess the extent to which the procurement portion of the pilot demonstrated changes in reporting burden, we reviewed OMB’s plans and compared them to actions OMB took to execute the pilot. We compared OMB’s actions to execute the procurement portion of the pilot against criteria identified in Standards for Internal Control in the Federal Government. We viewed a demonstration of the Central Reporting Portal tool for reporting Davis-Bacon and hydrofluorocarbon (HFC) submissions. GSA developed the portal and OFPP provided oversight for the portal’s development. We also reviewed documentation including HFC reporting submissions made through the portal. In addition, we interviewed OFPP staff, GSA officials responsible for administering the portal, and three contracting officials from GSA who were assigned to participate in the Davis-Bacon component of the procurement portion of the pilot regarding their actions related to implementing the procurement portion of the pilot. We conducted this performance audit from November 2017 to April 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Description of Test Models from Grants Portion of the Section 5 Pilot This appendix provides detailed information regarding the test models from the grants portion of the Section 5 Pilot. The Common Data Element Repository Library 1 Test Model The Common Data Element Repository (CDER) Library is an online repository for federal grants-related data standards, definitions, and context. 
The library is intended to be an authoritative source for data elements and definitions for use by the federal government and for recipients reporting grant information. Hypothesis: If grant recipients are provided with definitions of data elements through the CDER Library, then they will be able to accurately complete forms in a timely manner. Methodology: The Department of Health and Human Services (HHS) divided test model participants into two groups to read a scenario based on the grants life cycle and complete a data collection tool. The first group used the CDER Library to complete the data collection tool, while the second group used all other available sources to complete the data collection tool. After completion of the data collection tool, test model participants filled out a survey about their experiences using the CDER Library. Test Model Metrics: Accuracy and completeness of captured data within a period of time and survey results. Example of Test Model Results: On average, test model participants who completed a data collection tool using the CDER Library scored 11 percent higher in the accuracy of information requested and, on average, spent 6 fewer minutes when completing the tool. Number of Test Model Participants: Fifty-nine. The Common Data Element Repository Library 2 Test Model The CDER Library 2 Test Model focused on identifying duplication in grant forms and data elements across the federal government based on the data standards, definitions, and context within the CDER Library 1. Hypothesis: If duplication across forms can be identified using the CDER Library, then agencies can update or reduce forms to reduce grant recipient burden. Methodology: HHS conducted an internal analysis of SF-424 form families, using the CDER Library, to identify duplication in data elements to determine which forms could be consolidated. 
Test Model Metrics: Number of duplicative fields within form families and across forms for selected federal entities. Example of Test Model Results: The internal analysis conducted by HHS identified 371 instances of data element duplication across 10 agency grant funding applications when using standardized data elements from the CDER Library 1. Number of Test Model Participants: Not Applicable; the CDER Library 2 Test Model did not collect information from test model participants because the test model was an internal document review. The CDER Library 2 test model tested the utility of the data element definitions within the CDER Library 1. The Consolidated Federal Financial Report Test Model The Consolidated Federal Financial Report (CFFR) Test Model focused on examining the potential early validation of consolidated CFFR data and potential future streamlining of the close-out process by allowing the submission of Federal Financial Report (FFR) data in one system, rather than in multiple entry systems. Hypothesis: If grant recipients can enter complete FFR information systematically through one entry point instead of multiple different avenues and that information could be shared electronically from that point forward, then grant recipient burden will be reduced and data accuracy will be improved. Methodology: HHS surveyed Administration for Children and Families grant recipients on their experience submitting a consolidated FFR via HHS’s Payment Management System and, through facilitated discussions, gathered grantees’ perceptions of the process for using a consolidated FFR. Test Model Metrics: Survey results. Example of Test Model Results: Sixty-four percent of the CFFR test model participants reported that submitting their FFR through a single system would result in reduced reporting time. 
In addition, 65 percent of the CFFR test model participants believed using the Payment Management System for submitting FFR data would improve the accuracy of the information they submitted. Number of Test Model Participants: One-hundred fifteen tested the pilot environment and 30 participated in the facilitated discussions. The Single Audit Test Model The Single Audit Test Model consisted of (1) an audit and opinions on the fair presentation of the financial statements and the Schedule of Expenditures of Federal Awards (SEFA); (2) gaining an understanding of and testing internal control over financial reporting and the entity’s compliance with laws, regulations, and contract or grant provisions that have a direct and material effect on certain federal programs (i.e., the program requirements); and (3) an audit and an opinion on compliance with applicable program requirements for certain federal programs. The Single Audit Test Model focused on reducing reporting of data on duplicative forms. Hypothesis: If grant recipients do not have to report the same information on duplicative forms—for example, the SEFA compared to the Single Audit Report Package and Data Collection Form—then grant recipients’ burden will be reduced. Methodology: HHS collaborated with the Office of Management and Budget’s Office of Federal Financial Management and the Department of Commerce Federal Audit Clearinghouse (FAC) to create a pilot environment for test model participants to submit key portions of a modified Standard Form—Single Audit Collection. HHS conducted two focus groups with test model participants subject to the Single Audit. The first focus group discussed and completed a survey on the new form. The second group, a sample of test model participants subject to the Single Audit, submitted the existing form in the FAC pilot environment, completed a separate data collection form similar to the new form, and completed a survey on the effectiveness and burden of the new form. 
Test Model Metrics: Focus group feedback and survey results. Example of Test Model Results: All test model participants with access to the Single Audit’s pilot environment believed the upload feature for reporting requirements could decrease duplication in required grant reporting. Number of Test Model Participants: Thirteen tested the pilot environment and 123 participated in facilitated discussions. The Notice of Award Test Model This model focused on the feasibility of developing a standardized Notice of Award (NOA) to reduce reporting burden and facilitate access to standardized data needed to populate Single Audit information collection. Hypothesis: If grant recipients have a standardized NOA for federal awards, then grant-reporting burden may be reduced for recipients by standardizing access to data needed to populate information collections. Methodology: HHS divided test model participants into two groups, each of which completed a data collection tool. The first group completed the data collection tool using three standardized NOAs, while the second group completed the data collection tool using three non-standardized NOAs. After completion of the data collection tool, test model participants self-reported their respective times to complete the data collection tool. They also filled out a survey about the standardized NOA’s impact on reporting burden and provided input on elements to include in a standardized NOA. Test Model Metrics: Self-reported form completion time, accuracy, and survey results. Example of Test Model Results: Test model participants with access to the standardized NOA coversheets spent an average of 3 minutes less when completing the test model’s data collection tool. Number of Test Model Participants: One-hundred four. 
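Several of the grants test models described above (the CDER Library 1 and standardized NOA models among them) reduce to comparing average completion time and accuracy between a group using the tested aid and a group without it. The following Python sketch illustrates that calculation with invented figures, not actual pilot data; the function and record names are hypothetical.

```python
# Hypothetical sketch of the two-group comparison used in several grants
# test models: one group had the aid being tested, the other used existing
# resources, and the averages were compared. All numbers are invented.

def summarize_groups(with_aid, without_aid):
    """Return (average minutes saved, average accuracy points gained)."""
    def mean(values):
        return sum(values) / len(values)

    time_saved = mean([p["minutes"] for p in without_aid]) - mean(
        [p["minutes"] for p in with_aid]
    )
    accuracy_gain = mean([p["accuracy"] for p in with_aid]) - mean(
        [p["accuracy"] for p in without_aid]
    )
    return round(time_saved, 1), round(accuracy_gain, 1)

# Invented example records: minutes to complete the data collection tool
# and accuracy score (percent correct).
group_with_aid = [
    {"minutes": 12, "accuracy": 90},
    {"minutes": 14, "accuracy": 88},
]
group_without_aid = [
    {"minutes": 16, "accuracy": 80},
    {"minutes": 16, "accuracy": 82},
]

print(summarize_groups(group_with_aid, group_without_aid))  # (3.0, 8.0)
```

Reported results such as “an average of 3 minutes less” are differences of group means of this kind; the pilot supplemented such timings with survey feedback rather than relying on a single metric.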
The Learn Grants Test Model The Learn Grants Test Model is a website on Grants.gov that summarizes and provides links to new and important grants information such as policies, processes, funding, and other information needed throughout the grants life cycle. The website is intended to make it easier for stakeholders to find, learn about, and apply for federal grants and promote the standardization of grants terminology and data. Hypothesis: If grant recipients are supplied with grants life cycle information in one website, then they will have increased access to grants resources and knowledge of the grants life cycle process. Methodology: HHS developed a grants knowledge quiz from information on the Learn Grants website. HHS administered the knowledge quiz to test model participants in two phases. First, test model participants completed the knowledge quiz using existing knowledge and without the Learn Grants website. Next, test model participants completed the knowledge quiz with access to the Learn Grants website. HHS compared the results from both knowledge quizzes. After completion of the knowledge quiz, test model participants completed a survey on the usefulness of the Learn Grants website and its impact on increasing knowledge quiz scores. Test Model Metrics: Knowledge quiz accuracy and survey results on the usefulness of Learn Grants website. Example of Test Model Results: Test model participants experienced an average 10 percent (one quiz point) increase in their grant knowledge quiz scores when using the Learn Grants website. New grantees who participated in the test model also reported that the Learn Grants website provided useful grants information. Number of Test Model Participants: Fifty-seven. Appendix III: Assessment of Test Models in the Procurement Portion of the Section 5 Pilot Hypothesis not tested. 
Hypothesis: Verification of FAR standards for post award reporting will confirm the value of existing data standards and reduce variations that will, in turn, reduce contractor burden and cost.

Original plan (Davis-Bacon): OFPP planned to execute this test model through focus groups. According to OFPP, no focus groups were conducted.

Revised strategy (HFC): This hypothesis could not be tested through HFC reporting because it was a reporting requirement without an existing reporting method with which to compare reporting burden.

3. Prepopulate data into the Central Reporting Portal

GAO’s assessment: Hypothesis not tested.

Original plan (Davis-Bacon): OFPP planned to test this hypothesis by gathering data on the time it takes to submit reporting data through the Central Reporting Portal and outside of the portal, with self-reported data from contractors. According to OFPP, data were not collected due to a lack of participation in the Davis-Bacon portion of the pilot.

Revised strategy (HFC): This hypothesis could not be tested through HFC reporting because it was a reporting requirement without an existing reporting method with which to compare reporting burden.

4. Consolidate data collection and access (proof of concept)

GAO’s assessment: Hypothesis not tested.

Hypothesis: If contractors can enter FAR-required reporting data systematically through one entry point instead of multiple different avenues, and that information can be shared electronically with appropriate individuals, then contractor burden will be reduced and data access improved.

Original plan (Davis-Bacon): OFPP planned to test this hypothesis by gathering data on the time it takes to submit reporting data through the Central Reporting Portal and outside of the portal, with self-reported data from contractors. OMB also planned to conduct guided discussions. According to OFPP, data were not collected due to a lack of participation in the Davis-Bacon portion of the pilot.

Revised strategy (HFC): This hypothesis could not be tested through HFC reporting because it was a reporting requirement without an existing reporting method with which to compare reporting burden.

5. Central Reporting Portal can interface with other reporting systems

GAO’s assessment: Hypothesis not tested, but the metric associated with the test model was met.

Hypothesis: If interfaces can be built to support access to other reporting systems, contractor burden will be reduced.

Original plan (Davis-Bacon): According to OFPP staff, the Davis-Bacon part of the Central Reporting Portal was able to prepopulate data by interfacing with other reporting systems or to provide drop-down menus for all reporting fields. However, it could not demonstrate that such prepopulation resulted in a reduction of contractor burden.

Revised strategy (HFC): This is not applicable for HFC reporting, which is reported through open fields. Although OFPP did not actually test the hypothesis associated with this test model, it did meet the metric it had associated with the test model in its pilot plan: developing prepopulating capabilities in the Central Reporting Portal by interfacing with other reporting systems.

Appendix IV: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Peter Del Toro, Assistant Director; Silvia Porres-Hernandez, Analyst-in-Charge; Jazzmin Cooper; and Jimmy Nunnally made major contributions to this report. Also contributing to this report in their areas of expertise were Michael Bechetti, Jenny Chanley, Mike LaForge, Carl Ramirez, Stewart Small, Andrew J. Stephens, James Sweetman Jr., and Tatiana Winger.
Why GAO Did This Study The DATA Act required OMB or a designated federal agency to establish a pilot program to develop recommendations for reducing recipient reporting burden for federal grantees and contractors. The grants portion of the pilot tested six ways to reduce recipient reporting burden, while the procurement portion focused on testing a centralized reporting portal for submitting reporting requirements. This report follows a 2016 GAO review on the design of the pilot. This report assesses the extent to which (1) the pilot met the statutory requirements set out in the DATA Act, (2) the grants portion of the pilot demonstrated changes in reporting burden, and (3) the procurement portion demonstrated changes in reporting burden. GAO reviewed statutory requirements, pilot plans, and agency data and reports, and interviewed OMB staff and officials from HHS and GSA. What GAO Found In response to requirements of the Digital Accountability and Transparency Act of 2014 (DATA Act), the Office of Management and Budget (OMB) led implementation of a pilot program, known as the Section 5 Pilot, aimed at developing recommendations for reducing recipient reporting burden for federal grantees and contractors. The pilot program met many, but not all, of its statutory requirements. For example, the act required OMB to issue guidance to agencies for reducing reporting burden for federal award recipients (including both grantees and contractors) based on the pilot's findings. OMB partially met this requirement because the guidance it issued only applied to grants. The pilot program consisted of two parts, which differed considerably in both design and results: The grants portion, administered by the Department of Health and Human Services (HHS), examined six approaches for reducing grantee reporting burden and found positive results related to reductions in reporting time as well as reduced duplication. 
HHS incorporated ongoing stakeholder input during the pilot, and its findings contributed to government-wide initiatives related to federal reporting and reducing grantee-reporting burden. The procurement (contracts) portion of the pilot, led by OMB with assistance from the General Services Administration (GSA), did not collect sufficient evidence to determine whether centralizing procurement reporting through a single web-based portal would reduce contractor reporting burden—a key objective of the pilot. The pilot planned to test the portal by collecting weekly Davis-Bacon wage data from a minimum of 180 contractors, potentially resulting in thousands of submissions over a year. However, in the end, the pilot did not result in any Davis-Bacon data due to lack of contractor participation and the absence of iterative and ongoing stakeholder engagement. Subsequently, OMB expanded the pilot to include hydrofluorocarbon (HFC) reporting but received only 11 HFC submissions. (See figure.) In addition, HFC reporting was not suited for assessing changes in reporting burden because it was a new requirement and thus no comparative data existed. OMB plans to expand its use of the portal for additional procurement reporting requirements but still does not have information from stakeholders that could help inform the expansion. What GAO Recommends GAO recommends that the Director of OMB ensure that information is collected regarding how centralized reporting of procurement requirements might reduce recipient reporting burden—including input from stakeholders such as contractors through an iterative and ongoing process—to inform OMB's planned expansion of the Central Reporting Portal. OMB neither agreed nor disagreed with the recommendation but provided technical comments, which GAO incorporated as appropriate.
Background Federal and Other Stakeholder Roles in Surface Transportation The Aviation and Transportation Security Act designated TSA as the primary federal agency responsible for security in all modes of transportation. Public and private transportation entities have the principal responsibility to carry out safety and security measures for their services. As such, TSA coordinates with these entities to identify vulnerabilities, share intelligence information, and work to mitigate security risks to the transportation modes. See table 1 for examples of the entities TSA works with to secure the various surface transportation modes. TSA’s Surface Programs Account TSA’s Surface Programs’ Program, Project, or Activity (Surface Programs account) supports TSA programs that are to protect the surface transportation system. According to DHS’s Congressional Budget Justifications, this account received about $113 million on average annually from fiscal years 2009 through 2018, about 1.5 percent of TSA’s average annual appropriation of more than $7 billion. During that time, the appropriations directed to the Surface Programs account ranged from about $63 million to nearly $135 million annually. For example, in fiscal year 2018, TSA’s Surface Programs account received about $129 million, which was less than 2 percent of TSA’s appropriation (see figure 1). In addition, the Surface Programs account staff (full-time equivalents) ranged from 353 to 843 annually from fiscal years 2009 through 2018, consistently representing between 0.68 and 1.53 percent of TSA’s total staff. TSA’s Intermodal Security Training and Exercise Program I-STEP was created in response to provisions in the Implementing Recommendations of the 9/11 Commission Act of 2007. 
According to PPE, the I-STEP program offers three main services: Exercise Management Services assist transportation operators, emergency responders, local law enforcement, and government officials in enhancing security preparedness and resilience; Training Support Services help partners improve security awareness, training gaps, security plans, emergency procedures, and incident management skills; and Security Planning Tools and Services help partners gain an understanding of transportation security lessons learned and best practices to inform risk-based decision-making. The program conducts multi-agency, multi-jurisdictional activities ranging from seminars to full-scale exercises. Seminars provide a starting point for industry stakeholders developing or making major changes to their plans and procedures. Full-scale exercises deploy personnel and resources for real-time scripted events that focus on implementing and analyzing plans, policies, and procedures. The voluntary exercises are conducted across surface transportation modes including mass transit, passenger and freight rail, highway, and pipeline. TSA Allocated Most Surface Program Resources to Three Offices, and Some Were Used for Non- Surface Activities in Fiscal Years 2017 and 2018 TSA’s Surface Programs account received $123 million in fiscal year 2017 and $129 million in fiscal year 2018, according to DHS. Surface activities are primarily carried out by three TSA offices—Security Operations; Law Enforcement/Federal Air Marshal Service; and Policy, Plans, and Engagement. TSA reported that these offices were collectively allocated about 99 percent of the funding in TSA’s Surface Programs account in fiscal year 2017 and 93 percent in fiscal year 2018. Security Operations (SO). This office is to provide risk-based security that includes regulatory compliance and other programs designed to secure transportation. 
Within SO, surface transportation security inspectors, known as surface inspectors, conduct a variety of activities to implement TSA’s surface transportation security mission. These activities are to include (1) regulatory inspections for freight and passenger rail systems, (2) regulatory Transportation Worker Identification Credential inspections, and (3) non-regulatory security assessments and training which surface transportation entities participate in on a voluntary basis. Law Enforcement/Federal Air Marshal Service (LE/FAMS). This office is to conduct protection, response, detection, and assessment activities in transportation systems. For example, LE/FAMS administers the Visible Intermodal Prevention and Response (VIPR) program. Since late 2005, TSA has deployed teams to conduct VIPR operations as a way to augment security of and promote confidence in surface transportation systems. These capabilities can include random bag searches and law enforcement patrols at mass transit and passenger rail systems to deter potential terrorist threats. Policy, Plans, and Engagement (PPE). This office is to develop and coordinate both domestic and international multimodal transportation security policies, programs, directives, strategies and initiatives, while overseeing engagement with industry stakeholders and associations. For example, each modal section within PPE—mass transit, passenger and freight rail, highway, pipeline, and maritime—is to be responsible for outreach to their respective industry and with federal security partners. Their primary role is to align industry interests and actions with the TSA mission. The modes are to share intelligence and information with the industry to develop a shared understanding of risks, conduct vulnerability gap analysis, develop security policy, share best practices, provide risk mitigation and training tools, and conduct drills and exercises. 
These TSA offices further allocate surface program resources within their respective offices to carry out surface transportation activities (see table 2). Within PPE’s Surface Division, PPE reported allocating six Surface Programs account staff to each surface transportation mode office—mass transit and passenger rail, freight rail, highway and motor carrier, and pipeline—in fiscal years 2017 and 2018. TSA may realign funds within an appropriation account through reprogramming and also has limited authority to realign funds between appropriation accounts through transfers, pursuant to its appropriations acts and subject to notification provisions. According to TSA officials, TSA reprogrammed or transferred the following surface transportation resources enacted from fiscal years 2017 through 2019: In fiscal year 2018, TSA reprogrammed $5 million from Surface Programs to Mission Support activities to address security requirements and increase hiring of transportation security officers. Transportation security officers conduct security screening of passengers, baggage, and cargo at airports to prevent any deadly or dangerous objects from being transported onto an aircraft. In fiscal year 2018, DHS transferred $100,000 from the Surface Programs account to (1) the Immigration and Customs Enforcement’s Custody Operations account to provide adequate funding for detention beds, (2) Immigration and Customs Enforcement’s Transportation Removal Program account to support transportation and removal activities for migrants, and (3) the U.S. Secret Service’s Protection of Persons and Facilities account to support upgrading protections for the White House. In fiscal year 2019, DHS transferred over $6 million to the Immigration and Customs Enforcement’s Custody Operations and Transportation Removal Program accounts for the same purposes. 
In fiscal year 2019, TSA reprogrammed $200,000 from Mission Support and Secure Flight to Surface Programs to ensure sufficient funds were available to make payroll payments to employees during the fiscal year 2019 government shutdown. Staff funded from the Surface Programs account may be used for aviation-related activities. For example: TSA funds VIPR teams from the Surface Programs account; however, VIPR teams are often used for aviation security activities. TSA’s program guidance states that TSA uses a risk-based approach to prioritize and schedule VIPR program operations. According to TSA, in fiscal year 2017, 41 percent of VIPR program operations were conducted in surface modes and 59 percent were conducted in aviation security. In fiscal year 2018, TSA reported that 61 percent of VIPR program operations were conducted in surface modes and 39 percent were conducted in aviation security. TSA also funds surface inspectors and their supervisors from the Surface Programs account; however, surface inspectors can assist with aviation-related activities, as we reported in 2017. At that time, we found that TSA had incomplete information on the total time surface inspectors spent on those activities because of limitations in TSA’s data system. Since then, TSA updated its system to include a field indicating whether the activity was conducted in the surface or aviation mode, demonstrating that TSA has visibility over all activities surface inspectors conduct. In fiscal year 2018, TSA reported that surface inspectors spent about 16 percent of hours on aviation-related activities. 
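Once each activity record carries a surface/aviation mode field, as TSA's updated data system now does, computing the share of inspector hours spent on aviation work is a straightforward aggregation. The Python sketch below uses invented records and field names (not TSA's actual schema) to illustrate the kind of calculation behind a figure like the reported 16 percent.

```python
# Hypothetical sketch: compute the percent of surface inspectors' hours
# logged against aviation-mode activities. Records, field names, and
# hour values are invented for illustration only.

def aviation_share(activities):
    """Percent of total logged hours spent on aviation-mode activities."""
    total = sum(a["hours"] for a in activities)
    aviation = sum(a["hours"] for a in activities if a["mode"] == "aviation")
    return round(100 * aviation / total, 1)

records = [
    {"mode": "surface", "hours": 70},
    {"mode": "aviation", "hours": 10},
    {"mode": "surface", "hours": 14},
    {"mode": "aviation", "hours": 6},
]

print(aviation_share(records))  # 16.0
```

With the mode field present on every record, the same aggregation can be run per inspector, per office, or per fiscal year without further changes to the data system.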
TSA’s Guidance for Its Training and Exercise Program Does Not Fully Establish Coordination Procedures and Time Frames TSA’s 2016 Surface Division Internal Operating Procedure details the planning and implementation process of I-STEP, but does not fully identify the roles and responsibilities for key TSA offices or time frames for when those offices should coordinate to support training and exercise planning. PPE has primary responsibility for planning and implementing I-STEP under the procedure and coordinates with other TSA offices to facilitate exercises and accomplish the program’s goals. Specifically, PPE officials stated that SO and the Intelligence and Analysis (I&A) offices have important roles in helping PPE to plan and conduct tabletop exercises using I-STEP’s online exercise tool to facilitate planning in the field. For example, PPE officials stated that SO conducts external outreach to surface transportation stakeholders to identify participants and exercise locations, and I&A provides intelligence briefings that give background context to participants. The roles and responsibilities of SO and I&A are not captured in the operating procedure in part because program responsibilities have changed since the procedure was issued in 2016. For example, the operating procedure describes PPE’s primary responsibility for industry engagement, but does not discuss SO’s surface inspectors’ role in stakeholder and industry outreach for I-STEP. Specifically, surface inspectors reach out to industry stakeholders to identify participants interested in conducting an exercise. Surface inspectors also help handle logistics, such as coordinating with local responders and stakeholders. However, the operating procedure has not been updated since 2016 to capture this transition of SO responsibilities. 
In the absence of a policy that clearly defines all current offices that should coordinate and when, PPE may also be missing consistent input and important information from relevant offices across TSA. For example, PPE officials indicated that I&A officials can support I-STEP exercises by providing intelligence briefings, when requested, and can assist at or before initial PPE planning meetings. However, I&A officials stated that they do not typically participate in the PPE planning meetings that help identify and prioritize exercises based on risk-based intelligence documents, because they are not consistently invited to attend. Further, according to I&A officials, they sometimes receive a few weeks’ notice, or no notice at all, to prepare intelligence briefings for upcoming exercises. I&A officials explained that while they have supported exercise planning, there is no formal role for the office in the procedure or expected time frames for providing information. Our Standards for Internal Control in the Federal Government states that management should establish an organizational structure, assign responsibility, and delegate authority to achieve the entity’s objectives. Management then assigns the responsibilities, derived from the entity’s objectives, that enable the entity to achieve those objectives. TSA officials stated that they plan to revise the 2016 Surface Division’s Internal Operating Procedure. This planned revision presents an opportunity to identify and clarify roles and responsibilities for all offices involved in the coordination of exercises, including when they should coordinate. Conclusion TSA allocates resources for surface transportation activities, including I-STEP voluntary training and exercises with system operators and governmental security partners. While PPE coordinates with several offices across TSA to accomplish the program’s goals, coordination guidance could be improved. 
Although PPE has discussed the roles and responsibilities for offices outside of PPE, how and when these offices should coordinate has not been clearly defined in its sole guidance document. As a result, TSA may be missing input and information from relevant offices. Formalizing planning responsibilities, specifically with I&A, would allow for consistent involvement in the planning process and give analysts more time to prepare intelligence briefings for exercises. Also, with surface inspectors performing stakeholder outreach in addition to PPE's primary role for industry engagement, formalizing planning and external outreach roles and responsibilities for SO would ensure consistent outreach in the field.

Recommendation for Executive Action

We are making the following recommendation to TSA: The TSA Administrator should clarify roles and responsibilities for all offices involved in the coordination of surface transportation exercises, including when these offices are to coordinate, as part of the planned revision of the Surface Division's Internal Operating Procedure for I-STEP. (Recommendation 1)

Agency Comments and Our Evaluation

We provided a draft of this report for review and comment to DHS. DHS provided written comments, which are reproduced in Appendix I. In their comments, DHS concurred with the recommendation and described actions planned to address it, including an estimated time frame for completion. If fully implemented, these actions should address the intent of the recommendation and better position TSA's offices to execute roles and responsibilities for planning and implementing I-STEP. TSA also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees and the Acting Secretary of Homeland Security. In addition, the report is available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions concerning this report, please contact me at (202) 512-8777 or RussellW@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made significant contributions to this report are listed in Appendix II.

Appendix I: Comments from the Department of Homeland Security

Appendix II: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Ellen Wolfe (Assistant Director), Amber Edwards (Analyst-in-Charge), Lilia Chaidez, Dominick Dale, Tracey King, Leah Nash, Natasha Oliver, and Michael Silver made key contributions to this report.
Why GAO Did This Study

The global terrorist threat to surface transportation (freight and passenger rail, mass transit, highway, maritime, and pipeline systems) has increased in recent years, as demonstrated by a 2016 thwarted attack on mass transit in New Jersey and the 2017 London vehicle attacks. TSA is the primary federal agency responsible for securing surface transportation in the United States. The FAA Reauthorization Act of 2018 includes a provision that GAO review resources provided to TSA surface transportation programs and the coordination between relevant entities related to surface transportation security. This report addresses TSA's: (1) allocation of resources to surface transportation programs for fiscal years 2017 and 2018; and (2) coordination within TSA to implement the Intermodal Security Training and Exercise Program. GAO analyzed TSA data on surface program resources for fiscal years 2017 and 2018, reviewed TSA program guidance, and interviewed TSA officials responsible for implementing the Intermodal Security Training and Exercise Program. This program is intended to assist transportation operators and others in enhancing security through exercises and training.

What GAO Found

The Transportation Security Administration (TSA) reported allocating most of its surface transportation program account, which was $123 million in fiscal year 2017 and $129 million in fiscal year 2018, to three offices (see figure). The surface program account represented about 1.6 percent of the agency's appropriation in both fiscal years, according to Department of Homeland Security data. Security Operations is to conduct regulatory inspections for freight and passenger rail systems, non-regulatory security assessments, and voluntary training. Law Enforcement/Federal Air Marshal Service is to administer the Visible Intermodal Prevention and Response (VIPR) Program to augment the security of and promote confidence in surface transportation systems.
Policy, Plans, and Engagement (PPE) is to develop and coordinate security policies, programs, directives, strategies, and initiatives, while overseeing industry engagement. In fiscal years 2017 through 2019, TSA reported using surface program resources for non-surface activities. For example, in fiscal year 2018, TSA reprogrammed $5 million from the Surface Programs account to the Mission Support account to address security requirements and increase hiring of transportation security officers. In that same year, about 39 percent of VIPR operations were conducted in aviation security. TSA has not fully identified coordination roles and responsibilities for its training and exercise program for offices outside of PPE, the office with primary responsibility for the program. PPE coordinates with several other offices to accomplish the program's goals, including the Intelligence and Analysis (I&A) office, which provides intelligence briefings that give background context during program exercises. I&A officials explained that while they have supported exercise planning, there is no formal role for the office in the procedure or expected time frames for providing information. I&A officials also stated that they do not typically participate in the PPE planning meetings because they are not consistently invited to attend. In the absence of a policy that clearly defines all current offices that should coordinate and when, PPE may be missing consistent input and important information from relevant offices across TSA.

What GAO Recommends

GAO recommends that TSA clarify roles and responsibilities for all offices involved in the coordination of surface transportation exercises, including when these offices are to coordinate. DHS concurred with the recommendation.
Background

Throughout the course of a construction project, small and large contract changes can be expected after the contract is awarded. These changes are made through modifications to a contract. There are two types of contract changes discussed in this report: bilateral and unilateral.

Bilateral change. A bilateral change (also called a supplemental agreement) is a contract modification that is signed by the contractor and the contracting officer. In these cases, the contractor and contracting officer come to an agreement on the price of a contract change prior to the execution of work.

Unilateral change. The contracting officer may direct a unilateral change, executed through a change order, without the contractor's agreement on the terms and conditions of the change. A unilateral contract modification is signed only by the contracting officer. The contractor is generally required to perform the related work. When change orders do not include an agreed-upon price for the work, they may also be referred to as unpriced changes.

If a contract change causes an increase or decrease to the cost of performing the work or the scheduled time for performing the work, the contractor will communicate these price and schedule changes to the contracting officer. For there to be an adjustment to the contract's price, the contractor must submit a specific request or proposal seeking reimbursement for the change. If the contract change has been ordered unilaterally by the government, the contractor may submit a request for equitable adjustment (REA) that reflects these cost and schedule changes and requests reimbursement. In other circumstances, the contractor may submit a proposal in response to a request by the agency that similarly reflects the contractor's estimate for that increased or decreased cost and the schedule changes.
Bilateral and unilateral contract changes typically begin with a similar set of activities, but then the processes diverge once the bilateral or unilateral determination is made. Initial process steps include: identifying the need for a change; determining that the change is within the scope of the existing contract; receiving a cost estimate; and verifying that funds are available for the change. It is generally after this point that the contracting officer determines the type of change, unilateral or bilateral, required. See figure 1 for a notional representation of a change process. Individual contract changes may involve circumstances and process steps that are not outlined below. Agency regulations and policies provide additional direction for managing the construction contract change process (see table 1).

Prior GAO Work, Industry Concerns, and Recent Congressional Action

In prior work at the Department of Veterans Affairs (VA), we identified challenges and made several recommendations related to the time required for the construction contract modification process. In 2013, we found that VA had not developed guidance to ensure that change orders were approved in a prompt manner, and recommended that officials implement guidance on streamlining the change-order process. VA agreed with our recommendations and has implemented them. In 2017, we found that VA did not collect sufficient information to determine if new guidelines intended to ensure the timely processing of change orders were being followed. We also found that it did not have a mechanism in place to evaluate data on time frames to process change orders. Without such a mechanism, VA could not determine how processing time frames and design changes affect costs and schedules, and thus was at risk for unexpected cost increases and schedule delays.
We recommended that VA establish a mechanism to monitor the extent to which major facilities projects were following guidelines on change orders' time frames and design changes. VA has also addressed this recommendation. In 2018, we found that the Veterans Health Administration, a component of VA, had not established time frames for processing contract changes, and did not have a way to monitor the length of time or the reason contract changes occur. We recommended that officials collect information on contract modifications, establish target time frames that trigger a higher-level review of contract modifications, and centrally establish a mechanism to monitor and review certain contract modifications that were taking longer than the established target time frame. To date, the Veterans Health Administration has not yet fully implemented the recommendations. At a May 2017 congressional hearing before two subcommittees of the House Committee on Small Business, witnesses from the construction industry identified the contract change process as a challenge. They stated that the change process negatively affects cash flows, increases administrative and legal costs, and creates a risk of not receiving reimbursement for completed work. Industry representatives we spoke with reiterated these concerns. Industry representatives also explained that while contract changes were a challenge for businesses of all sizes, small businesses were likely to be more susceptible to challenges because they have fewer financial and administrative resources. One resource for small businesses is an agency's Office of Small and Disadvantaged Business Utilization or Office of Small Business Programs. These offices are responsible for working with agency officials to facilitate participation of small businesses in procurement. However, the small business advocates at GSA and USACE told us that their offices had a limited role in the construction contract change process.
According to small business advocates at GSA, for example, their office may get involved in a limited manner, such as by providing guidance on how to make a claim when a small business contractor is having difficulty receiving payment. Congress recently took action that will prompt agencies to gather information on the time it takes to make certain contract changes. Section 855 of the Fiscal Year 2019 National Defense Authorization Act includes a provision that requires agencies to make available information about the agency's past performance in finalizing, or "definitizing," REAs with certain construction solicitations. The provision also requires agencies to provide information about their policies and practices for complying with Federal Acquisition Regulation requirements to definitize REAs in a timely manner. Agencies must start including this information no later than August 13, 2019.

Multiple Factors Affect Time Frames for Finalizing Contract Changes

A variety of factors affect how long it takes to process a contract change. The factors include the time needed for making a change determination, creating a cost estimate, identifying funds, negotiating with the contractor, completing reviews, and processing the change. According to agency officials, some of these steps play a role in protecting the government's best interests. For example, creating robust cost estimates helps provide the government with information to inform negotiations with the contractor. Unauthorized work, resulting from unauthorized direction or miscommunication, is another factor that can affect the change process timelines. When the contractor performs unauthorized work, the agency must then take additional steps, such as reviewing the work to determine if it should be reimbursed.
Data we reviewed from USACE indicate that a majority of contract changes made from January 2013 through August 2018 were finalized in fewer than 60 days, and a little more than 3 percent took more than 1 year. Contractors and the government sometimes have different perceptions about when the contract change process begins, and therefore how long it takes, based on when the change begins to impact the work.

Contract Change Steps Add Time to the Process

The construction contract change process includes a number of steps that can factor into the time frames for finalizing a contract change, depending on the facts and circumstances surrounding an individual change. For example, USACE officials stated that obtaining a complete proposal from the contractor, with sufficient information on cost and schedule changes to begin negotiations, is a significant factor affecting contract change time frames. Figure 2 illustrates where these factors fall in a notional change process and describes how they may affect time frames. Agency contracting officials at both PBS and USACE note that some of these procedural steps are necessary to protect the government's interests, including negotiating a fair and reasonable price for the work related to the change. According to USACE and PBS contracting officials, any unauthorized work undertaken by the contractor is another factor that can extend contract change process timelines. When unauthorized work is done, the government must take steps such as determining (1) if the work was required; (2) if the work constituted a change to the existing contract; and (3) if so, a fair and reasonable price for the work. Unauthorized work may occur, for example, when the contractor receives direction from a person who is not authorized to direct work, like a project manager. An authorized individual, such as the contracting officer, must provide such direction.
Agency officials explained that unauthorized work can be the result of miscommunication between a government project official and the contractor. The contractor may interpret instructions from the unauthorized official as a formal direction to proceed with a change. In other cases, the contractor may begin work in anticipation of a contract change, before receiving any direction at all. One contractor representative told us that, at times, contractors feel pressured to start work without authorized direction to avoid disruption to the overall project that may result in negative performance reviews from the agency.

USACE Data Show That More than Half of Construction Contract Changes Are Finalized Within 60 Days, but Some Take Much Longer

According to USACE contracting officials, the agency compiles and reviews data on construction contract changes on an ad hoc basis to gain insight into time frames for the contract change process within that agency. The data and analysis show that the majority of changes from 2013 through 2018 at that agency were finalized within 60 days; however, a smaller percentage took substantially longer. Approximately 45 percent of the completed contract changes took more than 60 days to finalize, and a little more than 3 percent took more than 1 year. See figure 3 for information on USACE contract changes by the number of days taken to finalize the change.

Agency Officials and Industry Representatives Report Differing Perceptions of When the Process Begins

Contracting officials at USACE, as well as industry representatives, told us that government officials and contractors often have different perspectives on when the contract change process begins and, therefore, the time needed to complete it. For example, one industry representative said that the process begins for some contractors when the need for a contract change is identified.
The representative explained that this is the point at which the project work can change and the contractor begins to experience an impact on cost and schedule. Another industry representative said that some businesses think that the process begins when they submit their request for equitable adjustment, but that the government may not start measuring the process until a government official actively begins to address the request. Meanwhile, USACE contracting officials stated that process time should be measured from when they receive a complete proposal from the contractor, with no missing information. USACE officials told us that the data collected in its contract information system do not always reflect this metric, however. USACE contracting officials told us that, when recording the proposal receipt date that it uses as the start date for the contract change process, some contracting officers enter the date that the initial proposal was received, and others enter the date that a complete proposal was received. USACE contracting officials stated that they plan to address this issue in the future as part of a larger system upgrade. An industry representative explained that these varying viewpoints between government contracting officials and contractors are exacerbated by contractors' lack of understanding about the contract change process. The representative also stated that contractors find that the process is not transparent and that implementation of the process varies by agency, and even by district within the same agency, increasing confusion.

Selected Agencies Do Not Regularly Monitor Contract Change Time Frames

While the amount of information on contract changes varies between USACE and PBS, neither agency regularly monitors contract change time frames.
In addition to agency guidance that establishes time frames for certain contract change order actions, federal standards for internal control state that an organization should obtain quality information to achieve management objectives and establish monitoring activities. Neither GSA nor USACE has fully established such controls over the contract change process at the headquarters level, limiting management’s ability to identify and respond to problems. USACE information systems have data on contract changes for its more than 40 districts that are sufficient to calculate time frames for finalizing contract changes, but the agency does not regularly aggregate or monitor the information. Officials explained that this was in part due to the manual process required to compile the data centrally and perform calculations. A user must pull data for each USACE district from its contract information system and then manually manipulate the data to determine the time frames. As a result, the data are not reviewed by officials at headquarters on a routine basis. The contracting officials we spoke with said that contract change time frames are reviewed at the local level, specifically by project teams, typically on a weekly basis. Contracting officials also stated that contract change time frames are a factor in performance reviews for contracting personnel. There is currently no agency guidance or documentation for how often contract changes should be reviewed at either the project or district levels, the officials said. USACE contracting officials noted that they are in the early stages of planning for a system upgrade that they hope will automate the process of compiling and analyzing construction contract change data. However, these plans are preliminary. USACE has not yet determined which systems will be involved, nor has it documented these planning efforts to date. PBS contracting officials cannot track time frames for contract changes. 
While GSA’s contract information system does track and centrally compile data on all contract modifications, PBS contracting officials said there was no efficient way to separate the types of contract changes that we included in our review from other modifications, such as administrative changes or the exercise of options, preventing the calculation of time frames for contract changes. Our review of the GSA data confirmed that the data cannot be used to distinguish between the various types of contract changes. According to PBS contracting officials, to identify a contract change type, a reviewer would have to seek information at the local level by going into the individual contract file and reviewing the modification. Given these limitations, USACE and PBS cannot centrally identify emerging problems with contract change time frames or monitor compliance with existing Department of Defense (DOD) and GSA requirements. As noted above, DOD and GSA have established time frames for certain contract changes. USACE contracting officials said that they would likely establish additional, broad goals for finalizing contract changes in future policy revisions because more targeted goals were often not practical due to the unique circumstances that may affect process times. PBS contracting officials said that compliance with those time frames should be monitored by local staff, such as the contracting officer assigned to the project; however, there is no regular monitoring of that data or systematic way for contracting officers to track this information at the local level. There is currently no effort under way to develop a strategy to address data limitations at the local and headquarters level via information technology system upgrades, according to GSA officials. 
Further, USACE and GSA anticipate, and our analysis of available data confirms, that system limitations at both agencies are likely to make implementing section 855 of the Fiscal Year 2019 National Defense Authorization Act more difficult. This provision generally requires agencies to include information on recent time frames for definitizing REAs with any construction solicitations anticipated to be awarded to small businesses no later than August 2019. For example, GSA officials stated that to implement this provision would require substantial changes to their contract information system, which they must plan for 2 years in advance. USACE officials said that staff level discussions were ongoing on potential ways to comply with the requirement. They added, however, that in the absence of a system change making the data readily available, they would likely compile data manually, similar to what was provided to us, as an ad hoc substitute. In addition, both agencies said that they had questions about what information they would include in solicitations. Specifically, while section 855 refers to REAs, a USACE contracting official stated that REA could be interpreted differently by the government and industry. Similarly, GSA contracting officials said that the statutory language potentially covers a broad category of information, making it difficult to decide what data to capture and report. USACE officials stated that they will wait for DOD and the Department of the Army to provide direction before changing their system. GSA officials stated that they were not going to take action until further information is provided. One potential source of additional direction is Federal Acquisition Regulation (FAR) case 2018-020, which is developing a proposed FAR rule to implement section 855. The proposed rule is anticipated to be released in the first quarter of fiscal year 2020. 
Conclusions

Routine, central data collection on the construction contract change process can help agencies understand the scope of any problems encountered. While USACE can compile and review construction contract change information on an ad hoc basis, the agency does not conduct regular monitoring at the headquarters level and must manually manipulate data to review this information. GSA lacks information on the contract change process and its time frames at the headquarters, regional, and local levels. Without regular collection and review of information on the contract change process, contracting officials may be unable to spot potential problems, such as long process times that may affect project schedules, as they occur and respond accordingly. In addition to needing data for management purposes, agencies must also implement new legislative requirements when issuing certain construction solicitations starting in August 2019. While the proposed FAR rule, when issued, should provide agencies with more information on how to implement the new requirements, GSA and USACE could immediately begin to develop strategies to support routine collection and monitoring of time frames. Pursuing preliminary strategies on basic issues, such as what systems may need to be updated and what groups or individuals should be involved, would help these agencies better position themselves to comply with the requirement in a timely manner, and more quickly expand the data available for management purposes.

Recommendations for Executive Action

We are making the following two recommendations: The Administrator of General Services should ensure that the Commissioner of the Public Buildings Service develops a strategy that outlines the steps needed to routinely collect information on and monitor the time frames for finalizing construction contract changes at the headquarters level.
The strategy could address issues such as the types of construction contract changes that should be included, when the measurement of the contract change process should begin, and the information systems that will be affected. (Recommendation 1) The Secretary of the Army should direct the Chief of Engineers and Commanding General of the U.S. Army Corps of Engineers to develop a strategy to expand on existing data and systems to routinely collect information on and monitor the time frames for finalizing construction contract changes at the headquarters level. (Recommendation 2)

Agency Comments

We provided a draft of this product to DOD, GSA, and OMB for comment. DOD and GSA provided written comments, reproduced in appendixes II and III, respectively. DOD concurred with our recommendation and provided a technical comment, which we incorporated as appropriate. GSA also concurred with our recommendation, and noted that the agency is developing a plan to address it. OMB provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Director of the Office of Management and Budget, the Acting Secretary of Defense, and the Administrator of General Services. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or woodsw@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report were Tatiana Winger (Assistant Director); Betsy Gregory-Hosler (Analyst-in-Charge); Michael Dworman; Gail-Lynn Michel; Peter Anderson; George Depaoli; Lorraine Ettaro; Lori Fields; Gina Hoover; Sam Portnow; Bill Shear; and Anne Louise Taylor.
Appendix I: Objectives, Scope, and Methodology

This report (1) identifies factors that affect the time it takes to finalize contract changes at selected agencies, and (2) assesses the extent to which selected agencies monitor time frames for finalizing contract changes. In this report we examined the process for managing unilateral and bilateral contract changes, but excluded certain types of contract modifications to focus on the issues of payments and cash flow challenges. Specifically, we excluded (1) administrative modifications because they do not entail changes to contract costs or time frames; (2) contract changes that go beyond the scope of the existing contract, referred to as cardinal changes; (3) contract options because exercising an existing priced option does not entail the same type of negotiations that unilateral and bilateral changes require; (4) contract disputes and claims because they follow a separate and distinct process; (5) the payment process after a contract change has been finalized because that process is directed by the Prompt Payment Act; and (6) any processes taking place between a prime contractor and its subcontractors because that is outside the focus of this review. To identify agencies for our review, we analyzed Federal Procurement Data System – Next Generation (FPDS-NG) data on construction contract obligations for fiscal year 2017, the most recent data available at the time. This allowed us to identify defense and civilian agencies that had large amounts of construction contract obligations and a relatively significant portion of those obligations going to small business. The data that we used assigned the contract obligations to the agency that managed the construction project rather than the funding agency. We found that the Department of the Army's U.S. Army Corps of Engineers (USACE) obligated approximately $10.5 billion for construction contracts, with approximately $3.9 billion going to small business concerns.
This obligated amount is more than that of any other federal agency or service within the Department of Defense. We found that the General Services Administration's (GSA) Public Buildings Service (PBS) obligated approximately $1.9 billion for construction contracts, with approximately $870 million going to small business concerns. To assess the reliability of the FPDS-NG data we used, we (1) performed electronic testing of selected data elements, and (2) reviewed existing information about the FPDS-NG system and the data it produces. Specifically, we reviewed the data dictionary, data validation rules, and other documentation. Based on these steps, we determined the data were sufficiently reliable for the purposes of this report. To identify federal construction industry representatives for this engagement, we collected information on potential associations from various sources, including previous congressional testimony and our prior work. From this list of options, we sought organizations that were focused on federal construction contracting, included a small business focus, represented a large number of contractors, and had performed previous advocacy work on the issues under review in this engagement. Based on these criteria, we selected two organizations to interview: the Associated General Contractors of America and the National Association of Small Business Contractors. The Associated General Contractors of America, which sent a representative to a congressional hearing on the contract change process, represents 26,000 member firms and includes a division dedicated to federal construction as well as a small business committee. The National Association of Small Business Contractors specializes in small business contractors working with the federal government, and is affiliated with the American Small Business Chamber of Commerce.
We interviewed representatives from these associations to confirm background information about how the change process impacts industry and further discuss the factors that affect process time frames. To identify the factors that affect the time it takes to finalize contract changes at selected agencies, we reviewed relevant legislation, such as the John S. McCain National Defense Authorization Act for Fiscal Year 2019; regulations, including the Federal Acquisition Regulation (FAR), the Defense Federal Acquisition Regulation Supplement, the GSA Acquisition Regulation, and the GSA Acquisition Manual; and relevant agency policies and guidance. We interviewed staff from the Office of Management and Budget’s Office of Federal Procurement Policy—the Administrator of which serves as the Chair of the FAR Council—and contracting officials from PBS and USACE. In addition, we interviewed officials from GSA’s Office of Small Business Utilization and USACE’s Office of Small Business Programs to discuss their role in the change process and their perspective on possible impacts to small business concerns. To assess the extent to which selected agencies monitor time frames for finalizing contract changes, we collected and reviewed available GSA data on contract modifications. We also collected available data and analysis from USACE on construction contract changes from January 1, 2013, to August 17, 2018—representing more than 62,000 changes from the more than 40 USACE districts and one office that execute construction contracts—obtained from USACE’s Resident Management System. We reviewed USACE analysis of those data that calculated time frames for the contract changes by measuring the time elapsed from the date a proposal is received to when the contract change is finalized by the signature of Standard Form 30, which officially modifies the contract. We also reviewed system documentation on the requirements for users to enter data into the systems.
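The elapsed-time calculation that USACE's analysis performed can be sketched in a few lines. This is a minimal illustration only: the function names, date fields, and the 60-day threshold are our assumptions, not USACE's actual implementation.

```python
from datetime import date

def change_duration_days(proposal_received: date, sf30_signed: date) -> int:
    # Elapsed calendar days from receipt of the contractor's proposal
    # to signature of Standard Form 30, which officially modifies the contract.
    return (sf30_signed - proposal_received).days

def exceeds_threshold(duration_days: int, threshold_days: int = 60) -> bool:
    # Flag changes that took longer than a reporting threshold; 60 days is
    # used here only as an illustrative cutoff for review purposes.
    return duration_days > threshold_days

# Hypothetical example: a change proposed March 1, 2018, finalized May 15, 2018
duration = change_duration_days(date(2018, 3, 1), date(2018, 5, 15))  # 75 days
```

A monitoring routine built on such a calculation could then summarize durations across districts, which is the kind of headquarters-level analysis the report discusses.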
We interviewed PBS and USACE officials at the headquarters level to discuss the time frames for contract changes, including how long officials believe the process takes, what data are available, and who reviews any data collected on the contract change process. We discussed the provided USACE data with knowledgeable USACE officials who performed the calculations to understand their process, assumptions, and methodology. We determined the data were sufficiently reliable for the purposes of describing what is known about the time frames for finalizing construction contract changes. We also interviewed an official in GSA’s Office of Government-wide Policy to discuss any GSA-wide plans for system changes. We conducted this performance audit from August 2018 to July 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Comments from the Department of Defense

Appendix III: Comments from the General Services Administration
Why GAO Did This Study

In fiscal year 2018, federal agencies spent more than $36 billion on construction contracts, with more than 45 percent going to small business. Typically, construction projects involve some degree of change as the project progresses. Some federal construction contractors have raised concerns that delays in processing contract changes and making payments create challenges, particularly for small businesses. Section 855 of the National Defense Authorization Act for Fiscal Year 2019 requires agencies to report information related to how quickly they finalize contract changes. GAO was asked to review federal construction contract change processes and time frames. GAO (1) identified factors that affect the time it takes to finalize contract changes, and (2) assessed the extent to which selected agencies monitor time frames for finalizing contract changes. GAO reviewed relevant regulations and agency policies, analyzed available data, and interviewed officials from GSA's Public Buildings Service and USACE—two agencies with large amounts of obligations on construction—and two industry associations.

What GAO Found

Multiple factors affect the time it takes to finalize a construction contract change. For example, preparing cost estimates can be time consuming, particularly for complex changes. Yet the time may be used to help ensure the government has adequate cost data to inform negotiations. In addition, according to agency officials, miscommunication during the contract change process—which can lead to problems such as unauthorized work undertaken by the contractor—can result in additional reviews and longer time frames. According to U.S. Army Corps of Engineers (USACE) data, most of its construction contract changes are finalized within 60 days. Some take much longer, however (see figure). Agency officials and industry representatives agreed that perceptions differ about the length of the contract change process.
For example, because a change can impact the contractor's cost and schedule immediately, the contractor typically perceives that the process starts earlier—and lasts longer—than the government does. Neither GSA nor USACE regularly monitors how long it takes to finalize construction contract changes, limiting management's ability to identify and respond to problems. Internal controls require agencies to collect and use quality data for management purposes such as monitoring agency activities. GSA systems do not collect data that permit analysis of contract change time frames at the headquarters level. USACE systems produce contract change data for its districts, but data consolidation and calculations must be done manually and are not done regularly. Neither agency has a strategy in place to address these issues. Without regular review of these time frames, USACE and GSA contracting officials may be unaware of any existing or potential problems, such as long process times that may affect project schedules. In addition, these data system limitations are likely to create difficulties for agencies when providing the information required by new legislation.

What GAO Recommends

GAO is making two recommendations: that GSA's Public Buildings Service and USACE each develop a strategy to routinely collect information on and monitor time frames for construction contract changes at the headquarters level. Both agencies concurred with our recommendations.
Background

This section provides an overview of (1) the impact of nuclear or radiological events, (2) U.S. efforts to combat nuclear or radiological smuggling, (3) STC program goals and phases, (4) how the STC program operates, and (5) STC program activities.

Impact of Nuclear or Radiological Events

We previously reported that a terrorist’s use of either an improvised nuclear device or a radiological dispersal device could have devastating consequences, including not only loss of life but also enormous psychological and economic impacts. An improvised nuclear device is a crude nuclear bomb made with highly enriched uranium or plutonium. A radiological dispersal device—frequently referred to as a dirty bomb—would disperse radioactive materials into the environment through a conventional explosive or through other means. Depending on the type of radiological dispersal device, the area contaminated could be as small as part of a building or a city block or as large as several square miles. If either type of device were used in a populated area, hundreds of individuals might be killed or injured from the explosion or face the risk of later developing health effects because of exposure to radiation and radioactive contamination.

U.S. Efforts to Combat Nuclear or Radiological Smuggling

U.S. efforts to counter nuclear or radiological threats are considered a top national priority. Federal agencies that have a role in combating nuclear or radiological smuggling are responsible for implementing their own programs under the GNDA. The GNDA comprises programs run by U.S. agencies, including DHS, the FBI, and NNSA, as well as partnerships with local, state, tribal, and territorial governments; the private sector; and international partners. These programs are designed to encounter, detect, characterize, and report on nuclear or radiological materials that are “out of regulatory control,” such as those materials that have been smuggled or stolen.
Under DHS’s reorganization, there is no longer a specific directorate in charge of GNDA responsibilities, according to CWMD officials. However, CWMD officials said that GNDA responsibilities, such as identifying gaps in current nuclear detection capabilities, will be distributed throughout CWMD components.

STC Program Goals and Phases

CWMD initiated the STC program with three primary goals: (1) enhance regional capabilities to detect and interdict unregulated nuclear and other radiological materials, (2) guide the coordination of STC cities in their roles defined by the GNDA, and (3) encourage participants to sustain their nuclear or radiological detection programs over time. According to the Program Management Plan, for each city, the STC program consists of three phases that provide for the development, integration, and sustainment of nuclear or radiological detection capability by cities to support state, local, and tribal operations.

Phase 1: Development of initial operating capability. CWMD provides a mechanism for cities to develop initial operating capability to detect and report the presence of nuclear or radiological materials that are out of regulatory control. During phase 1, efforts focus on satisfying the immediate needs of state and local agencies in developing detection and reporting capabilities. This phase of the implementation is expected to take 3 years.

Phase 2: Integration. CWMD provides additional resources to cities to allow them to develop enhanced detection, analysis, communication, and coordination functionality. These resources build on the integration of state and local capabilities with U.S. government activities and the GNDA that existed prior to cities’ participation in the STC program or were established during phase 1. This phase is expected to take about 2 years.

Phase 3: Sustainment. CWMD provides indirect support to cities to sustain their capabilities.
CWMD maintains a relationship with local program operators through assistance with alarm response and subject matter expertise. For example, it provides advice to cities on training, practice exercises, and questions as they arise. As of March 2019, Chicago and Houston are in phase 1 of the program, the National Capital Region is in phase 2, and New York—New Jersey and Los Angeles—Long Beach are in phase 3.

How the STC Program Operates

The STC program operates as a cooperative agreement between CWMD and eligible cities. Accordingly, a substantial amount of interaction is expected between CWMD and program participants. A full cooperative agreement package for the STC program includes a notice of funding opportunity, notice of financial assistance award (assistance award), and general guidance documents for the program. It also includes requirements for cities to develop performance metrics for achieving key program tasks, such as purchasing equipment and conducting training, and to submit quarterly financial and performance reports. CWMD seeks applications for the program through a notice of funding opportunity, which lays out eligibility criteria and other requirements. According to CWMD officials, after New York—New Jersey was accepted into the STC program, CWMD opened up eligibility for the program to cities in DHS’s Urban Area Security Initiative (UASI) identified as having the highest risk for a terrorist attack. In the application process, one local government entity applies as the principal partner for the city (e.g., the New York Police Department is the principal partner for New York—New Jersey). Once CWMD accepts a city into the program, the city receives an assistance award, which details the approved budget for the year and may include an approved purchase plan.
DHS prefers that a lead agency within the city distributes funds or any equipment purchased with program funds to the other state and local partners, such as police departments of neighboring jurisdictions, fire departments, or public health officials, among others. According to CWMD officials, every year cities in the program must apply for the next increment of funding from the program; if a city’s application is approved, it receives an amendment to its assistance award. There is a 5-year period of performance—corresponding to phases 1 and 2—under which the cities are eligible to receive and obligate funding. CWMD officials told us that they can grant an extension to cities to obligate the funds if they have not been able to do so within the original 5-year period. In phase 3 of the program, CWMD may provide technical assistance or subject matter expertise to cities but no further funding.

STC Program Activities

Cities in the STC program may spend their funds on nuclear and radiological detection equipment, training, and administrative program costs, among other things. Several types of detection equipment may be approved for purchase.

Personal radiation detectors (PRD) are wearable radiation detectors, approximately the size of a cell phone. When exposed to elevated radiation levels, the devices alarm with flashing lights, tones, vibrations, or combinations of these. Most PRDs numerically display the detected radiation intensity (on a scale of 0 to 9) and thus can be used to alert the officer of a nearby radiation source. However, they typically are not as sensitive as more advanced detectors and cannot identify the type of radioactive source.

Radiation detection backpacks are used for primary screening and for conducting wide area searches, according to CWMD officials. These officials said the size of the detector contained within the backpack allows the operator greater detection sensitivity as compared to a PRD.
CWMD officials also said these devices are especially useful for screening a large venue for radiological materials prior to occupancy by the public.

Radiation isotope identification devices are radiation detectors that can analyze the energy spectrum of radiation, which enables them to identify the specific radioactive material emitting the radiation. Such devices are used to determine if detected radiation is coming from a potential threat or from naturally occurring radioactive material, such as granite.

Mobile detection systems contain larger detectors. Typically, mobile detection systems interface with a laptop computer to display alarms and analysis, and are capable of both detection and identification. This type of system may be mounted on vehicle platforms, such as cars, trucks, vans, boats, or helicopters. Figure 2 shows examples of such equipment.

Such equipment and associated training are the basis for the capability provided through the STC program. Officials we interviewed in one STC city told us that in order to operate the equipment, law enforcement, fire, health, and other state and local personnel must take training on the process for screening and for resolving alarms related to suspected nuclear or radiological material. As shown in figure 3, primary screening is the first step of the process: if an officer is able to determine the source of the alarm and deems it a nonthreat, then the case is resolved. According to CWMD officials, PRDs often detect nuclear or radiological materials that do not actually pose threats, such as radiation from medical treatments and from naturally occurring substances such as granite. An officer who is not able to determine the source of the alarm should initiate a secondary screening process; according to CWMD officials, secondary screening varies by locality.
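The primary-screening decision just described can be sketched as simple logic. The function name and return strings below are hypothetical illustrations of the decision flow, not CWMD's actual protocol.

```python
def primary_screening(source_identified: bool, is_nonthreat: bool) -> str:
    # First step of the process: if the officer can determine the source
    # of the alarm and deems it a nonthreat (e.g., radiation from medical
    # treatments or naturally occurring substances such as granite), the
    # case is resolved; otherwise the officer initiates secondary
    # screening, which varies by locality.
    if source_identified and is_nonthreat:
        return "resolved"
    return "initiate secondary screening"
```

For instance, a PRD alarm traced to a recent medical treatment would be resolved at this step, while an unexplained alarm would escalate to secondary screening.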
Officers with advanced training conduct secondary screening by using equipment such as radiological isotope identification devices to identify the type of source material detected. If, after secondary screening, officers still suspect a threat, they can contact technical “reachback,” which is a system that puts officers on the ground in communication with off-site specialists and resources. This technical reachback can provide greater expertise, including the ability to analyze the energy spectrum detected during screening and improve identification of the source and nature of the potential threat. CWMD officials said that the technical reachback may occur at the state and local or national level. State and local technical reachback procedures may vary, but national level technical reachback is standardized with 24-hour call centers run by the Department of Energy or U.S. Customs and Border Protection. According to CWMD officials, at any point in the screening process, if a secondary screening device is utilized, it is standard protocol for the officer to alert the FBI of the incident. If a threat is suspected, the FBI can deploy a team that is trained to respond to such a threat.

DHS Does Not Collect Information to Fully Track Cities’ Use of STC Funds for Approved Purposes and Assess Cities’ Performance

DHS’s CWMD does not collect information to fully track cities’ use of STC funds for approved purposes and to assess the cities’ performance in the program. Specifically, CWMD tracks cities’ spending using program funds and some performance data through quarterly reports that it collects from cities, but does not collect other key data to track itemized expenditures and to assess how effectively cities achieved key performance metrics and program milestones or how they performed in exercises or drills that simulate a nuclear or radiological threat.
CWMD Tracks Some Spending Data but Does Not Collect Data to Ensure That Funds Are Spent as Approved

CWMD tracks cities’ spending using program funds through quarterly financial reports it collects from cities, according to CWMD officials, but does not collect other key data to ensure that funds are spent for approved purposes and not spent on unrelated program activities. Specifically, CWMD provides each city eligible for additional funding an assistance award every year that includes an approved budget for spending categories such as program staff and equipment, but CWMD officials told us that CWMD does not track itemized expenditures to ensure that program funds were spent according to this budget. According to CWMD’s program agreements with cities, cities must have written approval from DHS in advance of spending obligated program funds for all equipment purchases in the amount of $5,000 or more per unit cost. However, CWMD officials told us that because of time and resource constraints, they do not collect data that cities maintain in their internal systems on the expenditures they actually made with program funds, even though CWMD’s program agreements with cities typically specify that CWMD or DHS’s Grants and Financial Assistance Division (GFAD) may access these data at any time. Furthermore, although GFAD officials told us that CWMD, in conjunction with the Grants Officer at GFAD, has the authority to conduct programmatic and financial audits and site visits to cities, these audits are infrequent and limited in their ability to ensure that cities’ expenditures were in accordance with CWMD’s approved purchase plans, which take into account program goals and objectives. According to these officials, in the program’s history, GFAD has conducted a total of two desk audits in two STC cities—New York—New Jersey and Los Angeles—Long Beach. GFAD initiated these two audits in 2015 and, according to GFAD officials, examined a small random sample of purchases.
GFAD officials said they do not currently plan to conduct any additional audits in STC cities because of resource constraints. The extent of CWMD’s tracking of cities’ use of STC program funds is not consistent with federal internal control standards, which state that program management should design control activities to achieve objectives, such as comparing actual performance to planned or expected results and analyzing significant differences. However, according to CWMD officials, CWMD does not compare information on expenditures to cities’ approved purchase plans. As a result, DHS does not know the dollar amounts cities actually spent on program purchases. By regularly collecting detailed information from cities on expenditures made using program funds and comparing that information to approved purchase plans, CWMD would have greater assurance that cities spent funds as approved and that the expenditures are in keeping with program goals and objectives. Because CWMD does not regularly collect or maintain data on how cities spent program funds, we requested that it ask cities for these data and provide them for our review. Table 1 summarizes STC program funds obligated to and spent by each city and shows that New York—New Jersey spent about three-quarters of all STC funds—about $110 million of the $145 million cities spent as of June 30, 2018. As discussed above, New York—New Jersey was the pilot city for the program and was not subject to the $30 million limit on program funding. In addition to program funds, CWMD provided cities with nonmonetary assistance in the form of training, among other things. These data also show that cities spent most STC funds on equipment purchases. Specifically, about two-thirds of STC funds spent were for equipment to detect nuclear or radiological threats—about $95 million of the $145 million spent. Among the four cities that have purchased equipment, the largest equipment purchase category was PRDs, at over $40 million.
Cities also reported purchasing equipment such as backpacks that contain radiation detectors; radiation isotope identification devices, which identify the type of radiation that is emitted from a source; and mobile systems that detect radiation from a vehicle on the ground or in the air. In addition, cities spent STC funds on training, staff, and contracts for training and other services, according to the data. Collectively, cities spent about 6 percent of program funds on training, 3 percent on staff, and 14 percent on contracts for training and other services. (See table 2.)

CWMD Tracks Some Performance Data but Does Not Collect Data to Ensure That Performance Metrics and Program Milestones Are Achieved

CWMD tracks some performance data in quarterly reports it collects from cities, but it does not collect data to ensure that key performance metrics and program milestones identified in the Program Management Plan are achieved. For example, the quarterly reports CWMD collects from cities show the quantities of equipment, by type, that cities purchased with STC funds over the course of the program (see table 3), but these reports do not show whether the quantities of equipment met cities’ targets for equipment purchases. In addition, these reports do not show how much cities spent to purchase equipment for the program. CWMD’s notices of funding opportunity require cities to identify and submit key performance metrics for measuring progress against their objectives and a schedule of program milestones as part of their application to the STC program. According to CWMD officials, each STC city submitted a Gantt chart—which plots planned activities over time—as part of its initial application. However, over the course of the program, CWMD found this tool had limited value and later gave each city the latitude to manage its program timeline as it deemed appropriate.
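As a rough arithmetic cross-check of the spending shares reported earlier in this section (a simple illustration using the report's rounded dollar figures, in millions):

```python
# Figures as reported, in millions of dollars, as of June 30, 2018.
total_spent = 145      # all STC program funds spent by cities
nynj_spent = 110       # New York—New Jersey's spending
equipment_spent = 95   # spending on detection equipment across cities

nynj_share = nynj_spent / total_spent            # ~0.76, "about three-quarters"
equipment_share = equipment_spent / total_spent  # ~0.66, "about two-thirds"
```

These ratios are consistent with the report's characterizations that New York—New Jersey accounted for about three-quarters of all STC spending and that about two-thirds of spending went to detection equipment.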
In addition to the Gantt charts, CWMD officials said they provided cities with templates to develop checklists to document their progress against their objectives and compare their progress to planned actions. However, CWMD officials told us that they view this checklist as a guide to help cities plan rather than a firm program requirement, and cities have not submitted these checklists. Until CWMD requires cities to submit checklists or equivalent information on their progress in the STC program, it will not have complete information on how cities are performing compared to the key performance metrics and program milestones they identified for themselves.

CWMD Does Not Consistently Collect Information on How Cities Performed during Drills and Exercises

CWMD does not consistently collect information on how cities performed during STC program-funded exercises and drills that test cities’ ability to detect a simulated nuclear or radiological threat. CWMD’s notices of funding opportunity issued after 2007 generally state under program performance reporting requirements that cities must submit operational reports, such as exercise after-action summaries. CWMD officials told us that they have provided STC cities with a template for preparing after-action reports—which assess a city’s performance during an exercise—and improvement plans following exercises that the program funded. These reports and plans could provide greater insight than quarterly performance reports on the effectiveness of cities’ capabilities. Nonetheless, available performance data show that CWMD did not enforce this requirement and that cities have submitted very few after-action reports. In their quarterly performance reports, the four cities other than New York—New Jersey reported completing 231 drills and exercises but only five after-action reports and one improvement plan.
Officials from New York—New Jersey, whose performance reporting requirements differ from those of other cities, according to CWMD officials, said that they complete over 100 drills and exercises per year but do not complete after-action reports because of the amount of paperwork that would be required. CWMD officials said that they did not enforce the requirement to submit after-action reports and improvement plans because they felt they could not force cities to report this information. Officials also told us that even though cities are aware of requirements in CWMD’s notices of funding opportunity to provide these reports and plans, cities may be reluctant to complete them because they could highlight weaknesses in their capabilities. We have previously found that a leading practice to promote successful data-driven performance reviews includes participants engaging in rigorous and sustained follow-up on issues identified during reviews. Until CWMD more fully assesses cities’ performance by consistently enforcing reporting requirements on how cities performed during exercises, it cannot assess the extent to which cities could effectively detect or deter a nuclear or radiological threat.

DHS Does Not Have Assurance That Cities Can Sustain Capabilities Gained through the Program, and Cities Face Funding Challenges

DHS’s CWMD does not have assurance that cities can sustain threat detection and deterrence capabilities gained through the STC program, and cities anticipate funding challenges once STC program funding ends. Specifically, CWMD has not enforced sustainment planning requirements and has taken limited action to help cities sustain their capabilities, even though encouraging sustainment is one of its primary program goals. Cities anticipate funding challenges that will adversely affect their ability to sustain capabilities after the program.
CWMD Has Not Enforced Sustainment Planning Requirements and Has Taken Limited Action to Help Cities Sustain Capabilities

CWMD identified a key goal related to sustainment of cities’ nuclear or radiological detection programs over time in its Program Management Plan and requires cities to plan for sustainment. However, CWMD has not enforced sustainment planning requirements and has taken limited action to help cities sustain capabilities. CWMD’s program agreements generally require cities to submit plans describing how they will sustain capabilities gained through the program. For example, some of CWMD’s program agreements state that these sustainment plans must (1) explain how the city will support and sustain STC capabilities after completing the program, (2) describe potential sources of future financial support, and (3) commit to obtaining future financial assistance beyond CWMD support. However, CWMD accepted sustainment plans from four cities that did not identify how they will sustain capabilities once program funding ends. Each of the cities’ plans clearly states that the city will have difficulty sustaining the program without additional federal funds. (See fig. 4.) We also found that three of the four sustainment plans submitted to CWMD provide little detail about the specific equipment or training cities expect they will need after program funding ends. CWMD, however, did not take steps to address these concerns because CWMD officials said that they viewed finding alternative sources of funding to sustain capabilities as the cities’ responsibility. CWMD officials told us that they provide some ongoing technical assistance to cities in the sustainment phase of the program, but this assistance does not include additional funding. Thus far, New York—New Jersey is the only one of the two cities in the sustainment phase that has received technical assistance.
Furthermore, CWMD did not consistently take steps to ensure that cities planned for sustainment when making purchasing decisions. As previously noted, program agreements generally require sustainment plans. Under CWMD’s Program Management Plan, CWMD expects cities to submit those sustainment plans to CWMD within 24 months of their initial award date. However, New York—New Jersey and Los Angeles—Long Beach did not submit their sustainment plans until many years after they began to receive STC funding. New York—New Jersey, for example, did not submit a draft sustainment plan until 2015, nearly 8 years after the city initially received funding, because CWMD did not include a sustainment plan requirement for the city until its award for fiscal year 2011 and allowed 36 months to complete a sustainment plan. Similarly, Los Angeles—Long Beach did not submit a draft sustainment plan until 2017—5 years after the city initially received funding. In its program agreement with Los Angeles—Long Beach, CWMD required that a sustainment plan be submitted within 18 months of the award date, but CWMD did not enforce this requirement and accepted a sustainment plan from Los Angeles—Long Beach that was significantly delayed. It is unclear whether New York—New Jersey and Los Angeles—Long Beach ever finalized their draft sustainment plans. CWMD identified sustainment as a program goal but has not enforced its own requirements related to this goal or taken steps to analyze the risks sustainment challenges pose to its program’s success. Federal internal control standards state that program management should identify, analyze, and respond to risks related to achieving the defined objectives.
Unless CWMD analyzes risks related to sustainment, works with cities to address these risks, and enforces sustainment planning requirements for cities that join the program in the future, program participants could see their radiological detection programs and related capabilities deteriorate over time.

Cities Anticipate Funding Challenges to Sustaining Capabilities

Officials from all five cities raised concerns to us about their ability to maintain capabilities over time without a dedicated source of funding once STC program funding ends. For example, New York—New Jersey officials told us that they informed CWMD they would not be able to maintain capabilities past 2021 without additional funds. Houston conducted an analysis of the funds needed to sustain the program and estimated that it would generally need over $1 million per year, primarily to replace equipment. City officials also said that they are already experiencing challenges that will have implications for funding and sustainment of the program. For example, Chicago officials said they are facing challenges regarding funding for training. These officials said CWMD told them that the company that conducted training in the other STC cities—at no cost to those cities—will no longer be the designated training entity. But a new training company has not been put in place. CWMD has not communicated a new plan for training Chicago’s officers on equipment that has already been purchased, and Chicago officials told us that they do not have additional funds to purchase training. Chicago officials said that if they do not receive future years of funding to conduct training on the already-purchased equipment, their planned capabilities could go to waste. According to several city officials, cities cannot rely on other DHS grant programs, other federal grant programs, or local sources of funding to sustain the STC program.
Specifically, the officials said that cities’ ability to obtain funds from DHS’s UASI for sustainment may be limited, in part because of ineligibility by some partner agencies within an STC city. For example, law enforcement agencies in Santa Ana, California, received support from the STC program as part of the Los Angeles—Long Beach city region, but they would not be eligible for UASI funds because Santa Ana is not in the Los Angeles—Long Beach UASI region. Moreover, UASI funds may not be sufficient to meet demand from cities. Houston city officials said that in fiscal year 2017, the city had requested $40 million in UASI funds from the UASI Committee, which distributes UASI funds in each city, but the committee had only $23 million to disburse to Houston. According to CWMD officials, other DHS grant programs within the Federal Emergency Management Agency—such as the Homeland Security Grant Program—may not provide a guaranteed source of consistent funding. Further, CWMD, NNSA, FBI, and city officials that we interviewed said they were not aware of any other federal grant program that cities could utilize to sustain nuclear or radiological detection capabilities. At a local level, several city officials said that there are competing funding priorities, such as preventing school shootings and addressing the opioid crisis, that require more money and attention because they affect the local community more directly every day.

DHS Has Not Fully Developed or Documented Potential Program Changes, Including the Basis for Making Changes, or Communicated Their Impact on Current STC Cities

DHS has not (1) fully developed potential changes or documented a plan for making changes to the STC program; (2) identified the basis for such changes; and (3) clearly communicated with the cities, raising concerns about how the changes will impact them.
CWMD Has Not Fully Developed or Documented Potential Changes to the STC Program and Does Not Have a Strategy or Plan for Implementing Them

CWMD officials told us that the agency is considering several potential changes to the STC program that would broaden its geographic reach and scope, but it has not fully developed or documented these changes and does not have a strategy or plan for implementing them. According to these officials, CWMD has not made any final decisions about potential changes and therefore has not developed any formal strategic documents. Based on our interviews with CWMD and city officials and some limited information in DHS’s fiscal year 2019 budget justification, we found that CWMD is considering making the following changes to the STC program:

New program goals. CWMD officials told us that the STC program’s new goals would be to (1) enhance regional capabilities to detect, analyze, report, and interdict nuclear and other radioactive threats; (2) provide defense in large geographic regions; and (3) maximize deployment of detection equipment to nonfederal agencies to support federal nuclear detection priorities. The first program goal is one of the original program goals. However, CWMD officials said that under this proposal, CWMD would no longer include encouraging cities to sustain capabilities over time as a program goal because CWMD has discussed centralizing acquisition of detection equipment.

Expansion of the program’s geographic coverage. Although legacy cities would still receive support under the new version of the STC program, CWMD officials said that the new program would provide national coverage and would include detection and deterrence activities in regions well outside of cities that UASI identified as having the highest level of threat and risk for a terrorist attack.
Prior to proposing this change, CWMD had included in DHS’s fiscal year 2018 budget justification its intent to select a sixth and seventh city to participate in the program by the end of fiscal year 2018, which CWMD officials told us did not occur. In DHS’s fiscal year 2019 budget justification, CWMD stated its intent to support the development of nuclear or radiological detection capability for broader regions.

Centralized acquisition of detection equipment. Instead of providing funding to STC cities to purchase detection equipment directly, CWMD officials told us that they would plan to centralize the acquisition process and purchase equipment on behalf of cities and regions. CWMD officials told us that they expect most of this equipment to be PRDs.

A greater role for other agencies. CWMD officials said that although the STC program would remain a CWMD-only program, CWMD expects to work closely with the FBI, NNSA, and other DHS components, such as the U.S. Coast Guard and U.S. Customs and Border Protection, to detect and deter nuclear or radiological threats. Currently, according to CWMD officials, CWMD is working with the FBI and NNSA on a Domestic Detection Concept of Operations to coordinate their capabilities and functions. In addition, CWMD officials said that they plan to align the STC program with the existing FBI stabilization program, which responds to nuclear or radiological threats that have been detected. According to CWMD officials, CWMD would rely on FBI-led stabilization teams for guidance on selecting and distributing detection equipment for the STC program. Each stabilization team would have a partner STC program office to test, calibrate, and distribute detection equipment and to train operators, and the STC program would provide funding to cities to maintain these offices.

Inclusion of chemical and biological weapon detection and deterrence within the program’s scope.
The Countering Weapons of Mass Destruction Act of 2018 includes chemical and biological weapon detection and deterrence under the scope of CWMD but limits the STC program to detecting and deterring nuclear or radiological threats. CWMD officials told us that they had planned to add chemical and biological detection and deterrence efforts to the STC program, but such a change would now require a statutory change. The changes that CWMD is considering making to the STC program would be significant in scope. However, CWMD officials confirmed that CWMD has not documented these potential changes for key stakeholders, such as cities or partner agencies, or provided strategic documents to describe how it plans to implement any changes. FBI officials we interviewed said that although the FBI supports greater coordination between CWMD and FBI-led stabilization teams, these programs will remain distinct and independent, with separate and dedicated lines of funding and personnel. These officials also said that CWMD and the FBI will not share equipment or technicians. According to NNSA officials, there is no new role defined for NNSA in the STC program, although NNSA leadership has asked its Radiological Assistance Program to contribute to the STC program where possible. NNSA officials also said that NNSA and CWMD will continue to coordinate on how information flows at a federal level if a nuclear or radiological threat has been detected. CWMD officials told us that they first introduced potential program changes to five STC cities at a meeting in February 2018 and met with leadership from these cities in August 2018 to discuss these changes further. In November 2018, we contacted officials from the STC cities to determine whether they understood how the STC program would continue. Officials from the STC cities made statements that indicated confusion and uncertainty about the future of the program.
For example:

Officials from one city told us they believed that changes to the STC program would apply only to new cities joining the program, even though CWMD officials told us that the changes would affect all cities going forward.

Officials in another city told us that they left the August meeting with the impression that the changes presented were only preliminary proposals up for discussion and that the program could evolve in any number of directions. However, documents CWMD provided to us during interviews show CWMD’s intention to make several of the specific changes described above, even though the agency’s proposals for the STC program have not yet been finalized.

Officials in most cities told us they believed that CWMD may provide them separate funding under the new program for sustaining capabilities developed to date, but CWMD officials told us that no final decisions had been made regarding future support for legacy cities.

Most city officials we interviewed said that the August meeting provided a high-level overview of potential changes and little detail on how such changes would be implemented or affect city operations. Our past work has discussed the importance of strategic planning. We have reported that, among other things, strategic plans should clearly define objectives to be accomplished and identify the roles and responsibilities for meeting each objective. By developing a written strategic plan (or implementation plan) for any potential changes to the STC program, CWMD would provide clarity on what specific changes are planned and how CWMD plans to implement them. For example, given the uncertainty around the future direction of the program, a written strategy would help shed light on the exact role that CWMD envisions for partner federal agencies and how it plans to utilize these partnerships to acquire and distribute equipment.
In October 2018, we briefed staff on the Senate Committee on Homeland Security and Governmental Affairs and House Committee on Homeland Security on our ongoing work, including our preliminary findings on the benefits of (1) developing an implementation plan for potential changes to the STC program and (2) assessing the effect of changes on the program. The recent Countering Weapons of Mass Destruction Act of 2018, signed into law on December 21, 2018, requires that CWMD develop an implementation plan that, among other things, identifies the goals of the program and provides a strategy for achieving those goals. The act requires CWMD to submit this implementation plan to Congress by December 21, 2019. In addition, the law requires a subsequent report assessing effectiveness and proposing changes for the program, which could provide clarity on how proposed changes would align with STC program strategy and how CWMD plans to implement them. CWMD is also required to consult with and provide information to appropriate congressional committees before making any changes to the STC program, including an assessment of the effect of the changes on the capabilities of the STC program.

CWMD Has Not Identified a Clear Basis for Program Changes

CWMD has not identified a clear basis for making program changes, and the extent to which these changes can be attributed to new priorities under DHS’s reorganization is unclear. CWMD officials told us that they have not conducted any studies or analyses that would justify making changes to the program. In DHS’s fiscal year 2019 budget justification, CWMD discussed the importance of using the STC program to build capabilities far outside the immediate target areas (i.e., cities) and the need to detect threats along the air, land, or sea pathways into and within the country that terrorists could potentially use to reach their targets.
However, according to CWMD officials, CWMD has not identified a change in the nature or level of nuclear or radiological threats to explain its intent to move from its original city-focused model for the STC program to a more national approach. In addition, as stated above, CWMD does not collect information to fully assess the performance of cities currently in the program and therefore does not have a performance-based rationale for changing its program goals. CWMD officials said that the uncertainty surrounding making changes reflects a program under transition within an agency under transition—that is, the reorganization from DNDO to CWMD. The Countering Weapons of Mass Destruction Act of 2018 requires that before making changes to the STC program, the Assistant Secretary of CWMD brief appropriate congressional committees about the justification for proposed changes. This briefing is to include, among other things, an assessment of the effect of changes, taking into consideration previous resource allocations and stakeholder input. This new requirement would provide DHS an opportunity to identify the basis for potential changes. Assessing such changes could provide more reasonable assurance that they would strengthen the program and not result in unintended consequences, such as reducing capabilities in current cities.

CWMD Has Not Clearly Communicated with the Cities, Raising Concerns about How Potential Program Changes Will Impact Them

CWMD has not clearly communicated with the cities currently in the STC program about the status of potential program changes, raising concerns among these cities about how the changes will impact them. Although CWMD officials told us that the STC program would still support cities currently in the program, CWMD has not communicated to cities the levels of funding or other resources they can expect to receive going forward under the new version of the program.
Notably, CWMD has not explained how expanding the program’s geographical coverage would affect cities currently in the program, including any effect on the availability of resources for these cities. City officials told us that they had several concerns, including the following, about CWMD’s potential changes for the STC program:

Ability to choose equipment that meets a city’s needs. Some city officials we interviewed expressed concerns that the potential changes could detract from their ability to decide which types of equipment and support would best meet their needs. For example, officials in one city expressed concern that their planned calibration laboratory, which is used to maintain equipment, could become obsolete if CWMD chose to distribute PRDs that differ from the type the city currently uses. Furthermore, some city officials questioned whether CWMD and local FBI-led stabilization teams could adequately assess the specific equipment needs of state and local partner agencies within current STC cities. FBI officials told us that they do not assess the equipment needs of state and local partner agencies, but instead share information with those partners should they wish to acquire similar resources in order to maintain state, local, and federal capabilities.

Scope of the program. Several city officials said concerns arose when CWMD requested that STC cities test toxic compound meters in 2018, raising questions about the scope of the program. These devices are designed to detect the presence of certain chemical weapons, but the STC program does not include detecting or deterring chemical weapons. Therefore, several officials felt that the request to test the devices was outside the scope of their mission. CWMD officials said that although the meters were not connected with the STC program, it made sense to reach out to the STC cities as CWMD already had a relationship with the cities and they were deemed appropriate locations.

Role of the FBI.
Some city officials told us that they had heard from CWMD that the FBI could play an expanded role in secondary screening in the future, which they felt could be problematic because of the FBI’s limited staff presence in field locations. FBI officials we interviewed said that they did not plan to conduct additional secondary screening in the future; instead they plan to formalize the secondary screening process that is already in place in STC cities. According to FBI officials, the bureau would always respond to situations requiring a threat assessment.

Effect on future funding, including for sustainment activities. CWMD recently informed National Capital Region officials that they would not receive an expected fifth year of funding because of planned program changes. City officials said that this change came as a surprise to them and that they will now be able to buy only approximately 90 percent of the equipment they had originally planned to purchase. In addition, these officials said that they planned to use much of the fifth-year funding for sustainment activities, such as training classes, and that this loss would adversely affect their current sustainment plans. CWMD officials said that under the new program, CWMD will take responsibility for sustaining the nuclear or radiological detection equipment distributed to cities, but, as described above, these officials said that no final decisions have been made regarding future support for legacy cities.

Several city officials said that CWMD had not adequately responded to their concerns and that there has been less communication from CWMD about the STC program since 2017 as a result of the DHS reorganization. Further, several city officials said that they expected CWMD to set up quarterly meetings with STC city leadership following the August meeting, but they had not received any notifications about additional meetings.
CWMD officials told us that they intend to have more frequent meetings with STC city leadership in the future but were unable to schedule a meeting during the first quarter of fiscal year 2019. Federal internal control standards state that management should externally communicate the necessary quality information to achieve the entity’s objectives. If CWMD does not clearly communicate to the cities how the existing program will operate until a new program is developed and implemented, these cities could face difficulties planning for the future and achieving the program’s detection and deterrence objectives.

Conclusions

DHS’s STC program has taken steps to address a top-priority threat to national security by providing high-risk cities with resources to develop nuclear or radiological detection capabilities. However, in implementing the program, CWMD does not collect key data to track itemized expenditures and to assess how effectively cities achieved key performance metrics and program milestones or how well they performed in exercises or drills that simulate a nuclear or radiological threat. By regularly collecting detailed information from cities on expenditures made using program funds and comparing that information to approved purchase plans, CWMD would have greater assurance that cities spent funds as approved, consistent with program goals, and that the expenditures are in keeping with program objectives. In addition, until CWMD requires cities to submit checklists or equivalent information on their progress in the STC program, it will not have complete information on how cities are performing compared to the key performance metrics and program milestones they identified for themselves.
Further, until CWMD more fully assesses cities’ performance by consistently enforcing requirements, as applicable, that cities report on how they performed during exercises, it cannot assess the extent to which cities could effectively detect or deter a nuclear or radiological threat. CWMD identified sustainment as a program goal but has not enforced its own requirements related to this goal or taken steps to analyze the risks sustainment challenges pose to its program’s success. Unless CWMD analyzes these risks, works with cities to address them, and enforces sustainment planning requirements for future cities, program participants could see their radiological detection capabilities deteriorate over time. CWMD officials told us that the agency is considering several potential changes to the STC program that would broaden its geographic reach and scope, but it has not fully developed or documented these changes and does not have a strategy or plan for implementing them. The Countering Weapons of Mass Destruction Act of 2018 requires that the Secretary of Homeland Security develop a strategy and implementation plan for the STC program and a subsequent report assessing effectiveness and proposing changes for the program, which could provide clarity on how proposed changes would align with STC program strategy and how CWMD plans to implement them. CWMD also has not provided a clear basis for proposed program changes. The act further requires that, before making changes, the Assistant Secretary of CWMD brief appropriate congressional committees about the justification for proposed changes, which should include an assessment of the effect of changes. This new requirement could help ensure that changes will strengthen the program and not result in unintended consequences, such as reducing capabilities in current cities. 
In the meantime, CWMD has not clearly communicated how its proposed changes will impact cities currently in the STC program, raising concerns among these cities. If CWMD does not clearly communicate to the cities how the existing program will operate until a new program is developed and implemented, these cities could face difficulties planning for the future and achieving the program’s detection and deterrence objectives.

Recommendations for Executive Action

We are making the following four recommendations to CWMD:

The Assistant Secretary of CWMD should ensure that the office regularly collects detailed information from cities on expenditures made using program funds and compares that information to approved purchase plans to ensure that these funds were spent as approved, consistent with program goals, and that the expenditures are in keeping with the objectives of the program. (Recommendation 1)

The Assistant Secretary of CWMD should more fully assess cities’ performance by collecting information from cities on achieving key performance metrics and program milestones and enforcing reporting requirements on performance during exercises. (Recommendation 2)

The Assistant Secretary of CWMD should analyze risks related to sustaining detection capabilities, work with cities to address these risks, and enforce sustainment planning requirements for future cities. (Recommendation 3)

The Assistant Secretary of CWMD should clearly communicate to cities how the existing program will operate until a new program is developed and implemented. (Recommendation 4)

Agency Comments

We provided a draft of this product to DHS, the FBI, and NNSA for review and comment. In its comments, reproduced in appendix I, DHS concurred with our recommendations in the draft report.
DHS identified actions it would take to address these recommendations, including revising quarterly reporting requirements to include detailed information on expended funds, performance metrics, program milestones, and exercise activities. In addition, DHS said it would engage with cities to procure and distribute equipment and to refurbish or replace it when appropriate, and would conduct on-site senior-level meetings with all current STC cities to continue discussions about new procedures, partnerships, and sustainment of capability. We believe these actions, if implemented as described, would address the intent of our recommendations. DHS also provided technical comments, which we incorporated as appropriate. The FBI and NNSA told us that they had no comments on the draft report.

We are sending copies of this report to the appropriate congressional committees, the Secretary of Homeland Security, the Secretary of Energy, the Assistant Attorney General for Administration of the Department of Justice, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or trimbled@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II.

Appendix I: Comments from the Department of Homeland Security

Appendix II: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Ned H. Woodward (Assistant Director), Keya Cain (Analyst in Charge), and Alexandra Jeszeck made key contributions to this report. Chris P. Currie, Pamela Davidson, R. Scott Fletcher, Juan Garay, Tom James, Benjamin Licht, Greg Marchand, Cynthia Norris, and Kiki Theodoropoulos also contributed to this report.
Why GAO Did This Study

Countering the threat that a terrorist could smuggle nuclear or radiological materials into the United States is a top national security priority. In fiscal year 2007, DHS initiated the STC program to reduce the risk of the deployment of a nuclear or radiological weapon by establishing capability in state and local agencies to detect and deter such threats. Since the program began, five participating cities have spent almost $145 million in program funds. GAO was asked to review the STC program. This report examines (1) the extent to which DHS tracks cities' use of program funds and assesses their performance; (2) what assurance DHS has that cities can sustain capabilities gained through the STC program and the challenges, if any, that cities face in sustaining such capabilities; and (3) potential changes to the STC program and how DHS plans to implement them, the basis for these changes, and the extent to which DHS has communicated with cities about the impact of making changes. GAO reviewed DHS documents, conducted site visits to all cities in the program, and interviewed DHS and city officials.

What GAO Found

The Department of Homeland Security (DHS) does not collect information to fully track cities' use of Securing the Cities (STC) program funds for approved purposes and to assess their performance in the program. To reduce the risk of successful deployment of nuclear or radiological weapons in U.S. cities, the program establishes local threat detection and deterrence capabilities. DHS tracks cities' spending of program funds and some performance data through cities' quarterly reports, but it does not collect other data to track itemized expenditures, to assess how effectively cities achieved performance metrics and program milestones, or to assess how they performed in drills that simulate a threat. For example, DHS does not compare information on expenditures to the purchase plans it approved for cities.
As a result, DHS does not know the dollar amounts cities actually spent on program purchases. Expenditure data GAO requested show that cities spent most funds on detection equipment—that is, $94.5 million of the $144.8 million cities spent through June 30, 2018. By regularly collecting expenditure information from cities and comparing it to approved purchase plans, DHS could better ensure these funds were spent consistent with program goals. DHS does not have assurance that cities can sustain threat detection and deterrence capabilities gained through the STC program. DHS has not enforced planning requirements for sustaining those capabilities and has taken limited action to help cities do so, although encouraging sustainment is one of its primary program goals. Officials from the five cities in the program told GAO that they anticipate funding challenges that will adversely impact their ability to sustain capabilities over time. For example, several city officials said they cannot rely on other DHS or federal grant programs or local sources of funding once STC funding ends. Unless DHS analyzes risks related to sustainment, works with cities to address these risks, and enforces sustainment-planning requirements for cities in the program in the future, program participants could see their radiological detection programs and related capabilities deteriorate. DHS has not (1) fully developed potential changes or documented a plan for making changes to the STC program; (2) identified the basis for such changes; and (3) consistently communicated with cities, raising concerns about how the changes will impact them. DHS officials told GAO that the agency is considering several potential changes to the STC program that would broaden its geographic reach and scope and centralize acquisition of detection equipment, among other things, but it has not fully developed or documented these changes and does not have a strategy or plan for implementing them. 
A law enacted in December 2018 requires DHS to develop an implementation plan for the STC program. The law's requirements would provide DHS an opportunity to identify the basis for potential changes, and assessing such changes would provide more reasonable assurance that they would strengthen the program. Further, most city officials GAO interviewed said that in an August 2018 meeting, DHS provided a high-level overview of potential changes and little detail on how such changes would be implemented or affect city operations. If DHS does not clearly communicate to cities how the program will operate under potential changes, these cities could face difficulties planning for the future and achieving the program's detection and deterrence objectives.

What GAO Recommends

GAO is making four recommendations, including that DHS regularly collect detailed information from cities on program expenditures; analyze risks related to sustainment, work with cities to address these risks, and enforce sustainment-planning requirements for cities in the program; and clearly communicate to cities how the existing program will operate until a new program is in effect. DHS concurred with GAO's recommendations.
Background

TSA is responsible for implementing and overseeing security operations at roughly 440 commercial airports as part of its mission to protect the nation’s civil aviation system.

Screening Technologies

TSA is responsible for ensuring that all passengers, their carry-on bags, and their checked baggage are screened to detect and deter the smuggling of prohibited items, such as explosives, into the sterile areas of airports and onto aircraft. Agency procedures generally provide that passengers pass through security checkpoints where their person, identification documents, and carry-on bags are screened by transportation security officers (TSO). TSA uses a variety of screening technologies—screening systems, as well as software and hardware for those systems—to carry out its mission. Figure 1 depicts the various screening technologies a passenger may encounter in primary and secondary security screening.

Process for Acquiring and Deploying Screening Technologies

TSA develops detection standards that identify and describe the prohibited items—such as guns, knives, military explosives, and homemade explosives—that each technology is to detect during the screening process. The standards, which are classified, also identify how often the technology should detect prohibited items (referred to as the required probability of detection) and the maximum rate at which the technology incorrectly identifies prohibited items (the probability of false alarm). For explosive materials, the standards also identify what the screening technology is to be able to detect in terms of (1) the minimum amount or weight of the material (the minimum detection mass) and (2) the chemical and physical makeup of the material (density range of the explosive material). S&T supports TSA in the development of standards by, among other things, analyzing the characteristics (threat mass, or the amount of material that constitutes a threat, and density) of explosive materials.
The agency uses the resulting data to develop detection standards that are specific to each screening technology. After a detection standard is approved, TSA decides whether to operationalize—put into effect—detection standards by acquiring and deploying technologies to update detection capabilities to meet the standard. That is, it decides whether to take steps to develop new technology capable of meeting the standard and put the new technology in place at commercial airports. Technology can mean new software to upgrade existing screening systems as well as entirely new screening systems. TSA does not always or immediately operationalize detection standards, for reasons which are explained later in this report. To operationalize a detection standard, TSA must acquire technology capable of meeting the standard. TSA officials told us they follow DHS acquisition policies and procedures when acquiring new screening technologies. Officials said they adapt detection standards as detection requirements to guide the acquisition process, meaning the specifications described in the standards are incorporated into the requirements manufacturers must meet when developing new technology. Once manufacturers have developed new technologies that meet detection requirements, the technologies undergo a test and evaluation process, known as the qualification process. The following are key steps in that process: 1. Certification – Certification is a preliminary step in TSA’s qualification process. For TSA to certify that a screening technology meets its detection requirements, S&T’s Transportation Security Laboratory conducts certification testing on a manufacturer’s initial submission of its proposed screening technology to determine whether it meets TSA’s detection requirements (i.e., the rate at which it must accurately detect each category of explosive it is designed to detect, among other things). 2. 
Integration/Implementation Testing – TSA's Systems Integration Facility administers qualification testing to test system performance against additional requirements, such as reliability, availability, and maintainability. TSA also conducts field testing to ensure readiness for operational test and evaluation. 3. Operational Test and Evaluation – TSA deploys units to selected airports to conduct operational testing. Operational testing allows TSA to evaluate the operational effectiveness, suitability, and cyber resiliency of the technology in a realistic environment. After new technologies have been tested and approved, TSA can purchase and deploy them to commercial airports. When a deployed screening system can no longer be updated to meet new detection standards, TSA considers it obsolete and generally designates it for replacement with a newer version of the technology. Figure 2 shows TSA's process for acquiring and deploying new screening technologies to meet detection standards. DHS Risk Management DHS guidance provides that its components, including TSA, use risk information and analysis about security threats to inform decision-making. Risk management helps decision makers identify and evaluate potential risks so that actions can be taken to mitigate them. DHS defines a risk assessment as a function of threat, vulnerability, and consequence. DHS guidance also says that risk assessments and transparency are key elements of effective homeland security risk management. TSA Has a Process for Developing Detection Standards, but Has Not Updated Its Guidance or Documented Key Decisions TSA Has Consistently Followed Testing Protocols in Developing Detection Standards TSA has a process to develop new explosives detection standards in response to emerging, credible threats involving a homemade explosive (see sidebar for more information on homemade explosives).
According to TSA officials, the first step in the process is to determine whether a new detection standard is needed, which they do by working with S&T and other federal partners to "characterize" the threat material—that is, identify the chemical and physical properties of the material, such as the threat mass and density. Below are the steps TSA and S&T officials told us they use to characterize a threat material and determine whether a new detection standard is needed.

Homemade Explosives Homemade explosives are designed to cause destruction when used in improvised explosive devices. The picture below shows damage to an aircraft panel from a homemade explosive. Beginning in the early 2000s, homemade explosives replaced military and conventional explosives as the preferred tool of terrorists and challenged the capabilities of existing screening technologies. Unlike conventional threats, homemade explosives are often made of common commercial items, and it can be challenging to distinguish them from innocuous gels and liquids stored in personal baggage or cargo. They also have different detonation patterns from conventional explosives in that they often release energy much more slowly, which may lead to incomplete or delayed detonation. This pattern is not well understood, which makes it much more difficult to predict the resulting damage. TSA and S&T have also ranked 300 conventional and homemade explosives that pose the most likely threat to aviation security based on factors such as availability, stability, performance, and method of initiation.

Threat mass determination. S&T determines the threat mass of the explosive—the minimum amount of the material that constitutes a threat to civil aviation.

Material down selection (selection of possible mixtures for testing). Because the exact formulation of the explosive can vary, S&T must test and model various formulations in different proportions to gain an understanding of the homemade explosive. In this step, TSA determines the representative formulations and preparations that are to be prepared and tested, based on data provided by S&T.

Synthesis, formulation, and preparation of materials. S&T establishes how the threat material could be made, including its chemical synthesis (as applicable), possible formulations or mixtures of the material with other components, and the preparation of those mixtures. S&T uses this information to develop samples of the threat material for testing.

Data acquisition and analysis. S&T examines the samples using micro-computed tomography and an explosives detection system, and the resulting data are sent to S&T's Transportation Security Laboratory for verification. The verified data are then sent to the U.S. Department of Energy's Lawrence Livermore National Laboratory for analysis.

Region of responsibility. Lawrence Livermore National Laboratory generates preliminary results in the form of the "region of responsibility," which is a map or explosive detection "window" outlining the characteristics of the threat material in terms of density and effective atomic number. These preliminary results are discussed among TSA and S&T stakeholders, with TSA determining the final region of responsibility. The region of responsibility data are used to develop software algorithms that will allow screening technologies to recognize explosive materials whose characteristics fall within the region of responsibility.

Detection standard. TSA and S&T also use the region of responsibility data to determine whether the explosive material can already be detected by deployed screening technologies. If screening technologies can already detect the material, TSA will not contract with technology manufacturers to develop a new software algorithm or screening technology.
But regardless of whether a new software algorithm or new technology is needed, TSA will draft a new detection standard for the material that, generally, will specify the minimum threat mass and density range to be detected, the acceptable probability of detection, and the probability of false alarm. The draft standard is reviewed by TSA senior management before being approved. We found that the work S&T and other stakeholders performed to characterize explosive threat materials was consistent across the threat materials. Specifically, we found that S&T consistently followed the process described to us (as outlined above) for characterizing a threat material in the seven material threat assessments we reviewed. We also reviewed documentation regarding additional testing and analysis S&T performed on select threat materials, and found the additional testing and analyses were performed consistently. TSA Has Not Updated Its Guidance for Developing Detection Standards to Reflect Required Procedures, Key Stakeholder Roles, and New Organizational Structure TSA has not updated its 2015 guidance for developing new detection standards to reflect key changes in its procedures. In December 2015, TSA issued the Detection Requirements Update Standard Operating Procedure, which a senior official told us served as the agency's approved guidance for developing detection standards. Our review of the document found that, as of August 2019, it did not accurately reflect (1) designated procedures for developing detection standards, (2) the roles and responsibilities of key stakeholders such as S&T, and (3) TSA's organizational structure. For example, one way in which the 2015 guidance has not been updated is in the designated procedures it describes for reviewing available intelligence information.
Specifically, the guidance calls for an annual assessment of emerging threats, which a senior TSA official told us TSA no longer conducts because relevant emerging threats are now occurring more frequently and intelligence information is processed on an ongoing basis. In another example, the guidance specifies that TSA will form working groups composed of agency officials and stakeholders to assess potential threat materials and develop an analysis plan, and that each working group will define the roles and responsibilities of its members. According to a senior TSA official, the agency does not convene working groups to assess intelligence or develop an analysis plan, although officials regularly meet with stakeholders to discuss the steps needed to characterize new threat materials and document the minutes from these meetings. Finally, while the guidance discusses in detail which TSA offices and management positions are responsible for implementing and overseeing the process, the agency has since reorganized and these offices and positions no longer exist. Therefore, the 2015 guidance is no longer relevant in terms of which offices and positions are responsible for implementing and overseeing the approval of detection standards. Officials told us that, as of August 2019, they had begun revising the guidance to reflect existing standard operating procedures for developing detection standards, but had yet to finalize a draft of the new guidance or document plans or timeframes for completing and approving it. Further, it is not clear to what extent the revised guidance will address designated procedures for developing detection standards, the key roles and responsibilities of stakeholders, and TSA’s new organizational structure. Officials said they had not updated the guidance earlier because both TSA and S&T had been undergoing agency reorganizations. 
Standards for Internal Control in the Federal Government provides that agencies should identify, on a timely basis, significant changes to internal conditions that have already occurred, such as changes in programs or activities, oversight structure, and organizational structure. Additionally, agencies are to develop and maintain documentation of internal controls, such as policies and procedures necessary to achieve objectives and address related risks. By documenting the processes and procedures TSA uses to develop detection standards, clarifying the roles and responsibilities of stakeholders, and documenting organizational changes, TSA could have better assurance that detection standards are developed in accordance with established policies and practices. TSA and S&T Did Not Document All Key Decisions Regarding the Development of Detection Standards Our review of TSA's steps to develop detection standards from fiscal years 2014 through 2018 found that TSA and S&T did not document all key decisions—those that could potentially affect outcomes—regarding the testing and analyses (characterization) of explosive threat materials and the development of explosives detection standards. We found that TSA and S&T produced a series of detailed material threat assessments to document the characterization of threat materials and consistently developed action memos to justify proposed detection standards. However, we also found that in five of the seven material threat assessments we reviewed, TSA and S&T did not consistently document key steps in the testing and analyses of materials, such as how selected samples were prepared for testing. For example, one S&T material threat assessment we reviewed did not document the method used to synthesize (chemically produce) material samples used for testing. Not documenting the method could prevent officials from fully understanding the results of the analysis.
Specifically, the assessment noted that there are multiple methods of synthesis, and that the chosen method could affect the makeup of the resulting material and therefore the ability of the screening technologies to detect it. Additionally, while two of the seven material threat assessments cited standard operating procedures for sample preparation for all participating laboratories, three did not cite standard operating procedures for at least one laboratory and two stated that sample preparation information had not been provided by one or more of the participating laboratories. Without documentation, TSA might not have all the necessary information to address future issues involving detection of these materials. We also found four instances in which TSA did not clearly document why select materials were sent for additional testing or did not document key decisions regarding the development and consideration of detection standards. For example, S&T performed additional testing and analysis on select threat materials after the material threat assessment was finalized. However, the documentation of this additional testing left out key elements regarding how and why the additional testing was needed and conducted. The action memo documenting new standards based on the results of the additional testing did not include a justification for why specific threat materials were selected for additional data collection. While a test plan for equivalency testing of one material stated that the additional testing was conducted because data reported in the literature were not considered representative of current threat configurations, similar justification was not included in the action memo justifying changes to the new standard based on the additional testing. 
Finally, a senior TSA official told us he requested the additional equivalency testing because the values in the previous detection standards appeared to be more conservative than expected and there was no documentation explaining how TSA had arrived at those numbers. According to the official, the previous detection standard was approved before his tenure and the determining officials were no longer with TSA. He also stated that he did not know whether TSA required documentation of testing and analysis when the previous detection standard was being developed. We found that TSA did not document key decisions regarding the development and consideration of detection standards. For example, officials could not provide documentation of conclusions reached on specific key decisions, such as the consideration and decision not to approve a proposed explosives trace detection standard. A senior TSA official said he did not know why the decision had not been documented because the officials involved were no longer with the agency. According to Standards for Internal Control in the Federal Government, documentation is required for the effective design, implementation, and operating effectiveness of an agency. Documentation also provides a means to retain organizational knowledge and mitigate the risk of having that knowledge limited to a few personnel, as well as a means to communicate that knowledge as needed to external parties. By documenting key decisions regarding the development of detection standards, including instances in which draft standards are not approved, TSA could better ensure that effective decisions are made and that organizational knowledge is retained regardless of changes in personnel. 
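The "region of responsibility" described earlier in this section is, in effect, a detection window in density and effective-atomic-number space: screening algorithms alarm on materials whose measured characteristics fall inside the window. The following is a minimal illustrative sketch of that concept only; the window bounds and measurements shown are hypothetical values of our own, since actual regions of responsibility are classified.

```python
from dataclasses import dataclass

@dataclass
class RegionOfResponsibility:
    """A detection window in density / effective atomic number space."""
    density_min: float   # g/cm^3 (hypothetical bounds)
    density_max: float
    zeff_min: float      # effective atomic number (hypothetical bounds)
    zeff_max: float

    def flags(self, density: float, zeff: float) -> bool:
        """Alarm if the measured material falls inside the window."""
        return (self.density_min <= density <= self.density_max
                and self.zeff_min <= zeff <= self.zeff_max)

# Hypothetical window and measurements, for illustration only.
roi = RegionOfResponsibility(density_min=1.1, density_max=1.9,
                             zeff_min=6.5, zeff_max=8.5)
print(roi.flags(density=1.4, zeff=7.2))   # inside the window -> True
print(roi.flags(density=0.9, zeff=7.2))   # density below the window -> False
```

In this simplified picture, developing a software algorithm for a new threat amounts to adding (or widening) such a window so that deployed systems alarm on the newly characterized material.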
TSA Operationalizes Detection Standards by Updating Its Screening Technologies, Which Can Take Years to Complete TSA officials said one way to operationalize detection standards—acquire and deploy technologies to update detection capabilities and meet the detection standard—is to update existing screening systems with new technology, such as software or firmware. When possible, the agency installs software as part of routine maintenance. TSA can also deploy new hardware or replace screening systems to update detection capabilities. According to officials, the agency applies an incremental approach to updating existing screening technologies—it updates technologies when manufacturers are able to develop the requisite capabilities and as resources allow—which can take years to complete. According to officials, all fully deployed TSA screening technologies had detection capabilities that met detection standards approved from 2006 through 2012. That is, as of August 2019, TSA's fleet of screening technologies met detection standards that were approved in 2012 or earlier. For example: Bottled liquid scanner units met a detection standard that was approved in 2006; Advanced technology x-ray units met two detection standards, depending on their manufacturer, that were both approved in 2010; and Explosives trace detection units met a detection standard that was approved in 2012. Further, for each screening technology, the agency has approved two to three new detection standards that have not been operationalized, as of August 2019. For example, in addition to the 2006 detection standard for bottled liquid scanner, TSA approved standards for bottled liquid scanner in 2012 and in 2017 that have not been operationalized. TSA officials said they were working to operationalize some of the detection standards approved since 2012. Officials said they were working with manufacturers to develop new technologies to operationalize some of these standards.
In other cases they were in the process of deploying new technologies that meet these standards. For example, as of August 2019, TSA was in the process of updating and replacing explosives detection systems to meet a detection standard that was approved in 2014. Officials said they expected to have the entire fleet updated by September 2023. TSA officials said they were also in the process of updating deployed advanced technology x-ray units for one of its two manufacturers to meet a standard that was approved in 2014. For more information about the detection standards TSA had approved for each technology as of August 2019, and the status of TSA’s progress in operationalizing them, see appendix I. TSA shares information about the capabilities it needs with manufacturers through requests for proposal, requests for information, and broad agency announcements. The agency places approved technologies on a qualified products list—a list of technologies that have been tested and certified as meeting requirements by TSA and DHS—and the agency can then award a contract to one of the manufacturers to purchase and deploy the technology. Before deploying technologies to airports, TSA conducts testing to ensure consistency in the manufacturing process, system configuration, and functionality following production, and then again after the technology is installed at airports. Our analysis of the acquisition information TSA provided found it took from 2 to 7 years to fully develop, certify, test, and deploy screening technologies to airports. For example, when operationalizing explosives trace detection standard 5.0, it took one manufacturer 4 years and a second manufacturer 7 years to develop, and for TSA to deploy, the software needed to update the capability of existing explosives trace detection units to meet the new standard. Figure 3 provides our analysis of TSA’s timeline for operationalizing advanced imaging technology detection standards approved from 2010 through 2016. 
TSA officials said they approved detection standard 3.3 for advanced imaging technology in October 2010 and began deploying technology that met that standard to airports in August 2011. Officials said they approved a subsequent standard, 4.1, in January 2012, began deploying technology to meet it in October 2014, and completed the deployment in September 2017. Officials said it took 3 years to complete deployment because the demand for advanced imaging technology increased over time as airports experienced an increase in passenger volumes, among other reasons. Since 2012, TSA approved two additional detection standards for advanced imaging technology—4.3 in February 2016 and 4.3.1 in August 2016. TSA officials said they have not operationalized these two standards because the manufacturer has not been able to develop the requisite technology. As such, deployed advanced imaging technology units meet standards approved in 2010 and 2012. TSA officials stated that they do not always, or immediately, operationalize detection standards after they are approved. They said they make these decisions on a case-by-case basis, depending on many factors. These include whether: (1) manufacturers have the technological ability, (2) a new technology is in development, and (3) screening technologies already have the capability. Manufacturers do not have the technological ability. TSA officials said manufacturers do not always have the technical ability to meet detection standards. According to officials, it can be challenging for manufacturers to develop the technology necessary to detect new threats as presented in a detection standard, and in some cases impossible without further research and development. For example, TSA officials said that manufacturers have been unable to develop the requisite technology to meet the most recent detection standards (4.3 and 4.3.1) for advanced imaging technology. 
However, TSA officials said they have expanded their research and development efforts to try to develop the technology. TSA officials told us they plan to continue developing detection standards irrespective of the capabilities of currently deployed technologies so that they can focus on identifying emerging threats. The new detection standards then serve to set expectations for manufacturers about the capability to which they should aspire and justify research and development necessary to realize that capability. To better manage the difference between the capabilities of deployed technologies and the capabilities described in detection standards, TSA officials said they are in the process of developing a new position of Capability Manager, who would be responsible for managing the development of mission-essential capabilities—such as carry-on baggage screening—from start to finish. Officials said they expect this position will help bridge the gap between approved detection standards and the detection capabilities of deployed screening technologies over time, because the managers will have cross-cutting oversight of the process. A new technology is in development. Officials said that they may not operationalize a detection standard if they expect a new type of screening technology will replace an existing one. For example, officials said that TSA is exploring new alarm resolution technologies—that is, screening technologies that are used to determine whether alarms are false positives. Officials said new alarm resolution technologies may replace the bottled liquid scanner in the future, and therefore they have not pursued operationalizing detection standard 2.3. Screening technologies already have the capability. According to TSA officials, new detection standards do not always add significant detection capabilities.
For example, officials decided not to operationalize bottled liquid scanner detection standard 3.0 when it was approved in 2017 because the deployed units already had most of the capabilities called for in the detection standard; TSA developed the new standard to better align with standards for other technologies. TSA Deployment Decisions Are Generally Based on Logistical Factors, and the Extent to Which TSA Considers Risk Is Unclear Because Decision-Making Lacks Documentation TSA Assesses Risks and Capability Gaps When Determining Acquisition Needs Our review of TSA acquisition documents found that TSA considers risk at the beginning of the screening technologies acquisition process. Specifically, the agency considers risk in two phases—(1) a risk assessment developed from intelligence information and modeling tools, and (2) an annual capability analysis that analyzes and prioritizes capability gaps and determines mitigation options. Figure 4 provides an overview of TSA's acquisition process for new screening technologies. Risk assessment. TSA uses intelligence information and modeling tools, such as the Risk and Trade Space Portfolio Analysis, to assess risk to the aviation system. The Risk and Trade Space Portfolio Analysis was developed in 2014 to analyze the security effectiveness of alternate combinations of some aviation security countermeasures. Officials said a recent example of a risk-informed deployment decision influenced by the Risk and Trade Space Portfolio Analysis was TSA's 2017 deployment of 141 advanced imaging technology units to category III and IV airports. Officials said that around 2014, TSA received intelligence about a potential terrorist threat to airports, as well as the results of covert testing at airports that identified screening vulnerabilities. Officials said a 2014 Risk and Trade Space Portfolio Analysis also identified disparities in screening capabilities at smaller airports.
In part because of the vulnerability identified by these three factors, as well as ongoing conversations between TSA senior leadership, the DHS Inspector General, and members of Congress, officials said TSA procured and deployed additional advanced imaging technology units to some category III and IV airports that did not have them. Capability analysis. TSA uses the Transportation Security Capability Analysis Process, a structured decision-making tool, to identify and prioritize capability gaps and help direct agency resources towards closing critical gaps to an acceptable level. When existing screening capabilities do not fully meet TSA’s mission needs, the associated capability gap presents a security risk. As part of the Transportation Security Capability Analysis Process, TSA produces Capability Analysis Reports that identify and recommend solutions to closing capability gaps. Recommendations have included procedural changes, such as new training for TSOs, and investments in new technology. TSA’s investment in computed tomography technology for checkpoint screening of carry-on baggage is an example of TSA’s implementation of the Transportation Security Capability Analysis Process to validate capability gaps and identify recommended courses of action. Officials said that in some cases the agency may identify a capability gap that cannot be resolved to an acceptable level with commercially available screening technology, in which case it will pursue additional research and development. TSA’s Approach to How Risk Informs Deployment Decisions Lacks Documentation TSA officials told us that they operate under the assumption that every airport is a possible entry point into the aviation system for a terrorist, and they do not consider there to be a significant difference in vulnerability among airports when deploying screening technologies. However, officials did not provide analysis or documentation that supported this conclusion. 
Officials noted the exception to this assumption is a handful of airports that are consistently considered to be the highest risk because of known threats and a high volume of international travelers. Further, officials said that if they had information about a threat to a specific airport that would be mitigated by deploying a screening technology, they would modify their plans for deployment accordingly. However, TSA’s process for how it would change its deployment plans to specific airports based on risk lacks transparency. For example, officials said that as part of the acquisition process they have ongoing discussions with stakeholders about their deployment strategies, including security and intelligence officials who would inform them of any relevant risk information. Officials said these discussions are generally informal and not documented—it was unclear how these discussions have incorporated information about risk in the past, and officials could not provide an example of when risk information at specific airports had directly influenced deployment of technologies to airports in the recent past. In 2018, the agency released its Transportation Security Administration Systems Acquisition Manual, which called for deployment plans to be written documents, and officials said they began documenting their plans for deploying screening technologies in the last two years. TSA officials provided us with one deployment plan—for their 2018 deployment of explosives trace detection units—but we found that it was not transparent about how risk was a factor in officials’ methodology for determining the order of airports to receive the technology. The explosives trace detection plan documented TSA’s schedule of deployment and the roles and responsibilities of relevant stakeholders, among other things. 
However, while the plan indicated that officials would coordinate with relevant offices within the agency for information about risks that might impact their deployment strategy, we found that the plan did not document how risk had informed their decisions about where and how to deploy the technology, including the assumptions, methodology, and uncertainties considered. Additionally, TSA officials did not document, and could not fully explain, how risk analyses contributed to and factored into the following specific deployment decisions. Deployment of advanced imaging technology to smaller airports. Officials said many factors influenced their decision to deploy advanced imaging technology units to category III and IV airports, including information about threats and a related 2014 risk analysis. However, officials did not document their decisions and could not fully explain their risk analysis, including their process for analyzing and weighing relevant factors. According to officials, the decision was made during discussions with senior leadership, which were risk-informed and supported by whiteboard analyses and classified documents. Additionally, officials told us that, for practical reasons, they deployed units to those category III and IV airports that had the space to accommodate them, but did not further assess the priority of deployment among the smaller airports because they had determined that the risk was uniform and because they planned to deploy the units within a short timeframe. Officials did not document the risk assessment that led to this determination, and could not explain how the three elements of risk—threat, vulnerability, and consequence—were used or assessed. Deployment of targeted threat algorithm. In 2016, TSA deployed a targeted threat algorithm—software to improve detection capabilities—to a limited number of advanced imaging technology units in response to a specific threat. 
After testing the operational impacts of the software algorithm, the agency decided to stop further deployment. The documentation TSA provided did not explain how officials had analyzed the risk-mitigation benefits of the algorithm, including the underlying assumptions and uncertainty, or how they had weighed those benefits against the operational impacts and costs when they made their decision not to fully deploy the algorithm. TSA officials said they follow the DHS acquisition process to acquire and deploy technologies and their deployment decisions are based on, and informed by, their initial assessments of capability gaps, as well as their understanding that every airport offers equal entry into the aviation system. However, officials had not documented the rationale for these decisions and could not fully explain how risk had informed their decisions about where and in what order to deploy screening technologies. DHS’s Risk Management Fundamentals states that components should consistently and comprehensively incorporate risk management into all aspects of the planning and execution of their organizational missions. Additionally, it says transparency is vitally important in homeland security risk management, and documentation should include transparent disclosure of the rationale behind decisions, including the assumptions, methodology, and the uncertainty considered. By fully disclosing what risk factors are weighed and how decisions are made, TSA officials can better ensure that their deployment of screening technologies matches potential risks (threats, vulnerabilities, and consequences). This is of particular importance given the agency’s limited resources and the fact that screening technologies are not easily relocated. 
TSA Generally Deploys Screening Technologies Based on Logistical Factors TSA officials said that absent a specific risk to an airport or category of airports that would be mitigated by deploying a screening technology, they consider a number of logistical factors that are aimed at maximizing the operational efficiency of the screening process. These factors influence the number of units of a technology the agency deploys to airports, the order in which they deploy them, and where they are deployed. Officials said they use modeling software to determine the most efficient number of units to allocate to an airport for each type of screening system. This analysis takes into account variables such as the number of flights at an airport, airport passenger volumes, items per passenger, and secondary search rates. Additionally, agency officials said the layout of an airport is a significant determining factor for the number of units it receives. For example, an airport that has centralized checked baggage screening areas will need fewer explosives detection systems than an airport that has checked baggage screening areas dispersed in different locations. Additionally, TSA officials said that logistical and funding factors can influence the order of deployment, including the manufacturer’s ability and resources to develop and deliver technologies. For example, as of June 2019, officials said the agency was in the process of updating the detection capabilities of 62 percent of its advanced technology x-ray fleet because one of its two manufacturers had completed testing and certification of the new technology, but the second manufacturer’s technology had yet to be certified. Officials said they also try to plan their deployment schedule around minimizing disruptions to airport operations, so if an airport could not absorb a full deployment of a technology because it would affect too many passengers, TSA would schedule the deployment in phases to minimize disruptions. 
Further, TSA officials said that, as a result of these many logistical considerations, they generally fully deploy new screening technologies to category X airports first—generally, airports with the highest passenger volumes—and then proceed in order down to the airport with the lowest passenger volume. Officials said larger airports generally have the infrastructure in place to incorporate new technology without extensive disruption to operations, and they will screen the most passengers by deploying screening technologies to the largest airports first.

TSA Does Not Ensure That Screening Technologies Continue to Meet Detection Requirements after Deployment to Airports

TSA practices do not ensure that screening technologies continue to meet detection requirements after they have been deployed to airports. According to agency officials, the agency uses certification to confirm that technologies meet detection requirements before they are deployed to airports, and calibration to confirm that technologies are at least minimally operational while in use at airports. Officials stated these processes are sufficient to assure TSA that screening technologies are operating as intended. However, while certification and calibration serve important purposes in the acquisition and operation of screening technologies, they have not ensured that TSA screening technologies continue to meet detection requirements after they have been deployed.

Certification occurs prior to deployment. TSA's certification process is designed to ensure screening technologies meet detection requirements during the acquisition process, prior to the procurement and deployment of the technologies, but it does not ensure screening technologies continue to meet detection requirements after deployment. As previously described, manufacturers provide an initial submission of the screening technology to TSA for certification testing as part of the acquisition process.
During the certification process, S&T's Transportation Security Laboratory tests the technology under controlled conditions to determine whether it meets TSA's detection requirements. After TSA certifies that a screening technology meets detection requirements and it undergoes additional testing to determine whether it meets other TSA requirements in controlled testing facilities, TSA may deploy it to select airports for operational testing and evaluation to determine how it performs in an airport setting. Certification testing demonstrates that a manufacturer's screening technology meets detection requirements during the acquisition process, which allows TSA to determine whether it should continue to consider the technology for acquisition. Certification does not ensure that deployed technologies continue to meet detection requirements because it does not account for the possibility that a technology's performance can degrade over its lifecycle after deployment. For example, in 2015 and 2016, DHS removed a sample of deployed explosives trace detection and bottled liquid scanner units from airports for testing in the Transportation Security Laboratory. The laboratory concluded that some deployed units for each technology it tested no longer met detection requirements—either the required probability of detection for certain explosives or the required false alarm rate, or both. One explosives trace detection unit that was tested was found to have a probability of detection much lower than required. According to TSA officials, the units did not meet detection requirements because they were not adequately maintained, which affected their performance. In light of this, officials stated that they introduced better controls to ensure that routine preventative maintenance is performed as required.
However, because TSA does not test the units after they are deployed to airports, it cannot determine the extent to which these controls ensure technologies continue to meet detection requirements. Officials noted that TSA uses a layered security approach at airports, so if one layer should fail—such as a deployed technology—the agency can still rely on other measures among the various layers of security to detect threats. We have previously reported on the importance of TSA ensuring that each measure is effective, so that the agency makes the best use of its limited resources in serving its aviation security mission.

Calibration does not test whether technologies meet detection requirements. TSA officials stated that daily calibration also helps ensure that screening technologies continue to meet detection requirements after deployment. However, while calibration demonstrates that the screening technology is at least minimally operational, it is not designed to test whether the screening technology meets detection requirements. For example, each explosives detection system is calibrated with an operational test kit that contains items of various densities. To calibrate explosives detection systems, a TSO must run the operational test kit through the unit and verify that the item is correctly displayed on the monitor (see figure 5 below). This process demonstrates whether the system can identify the known items' densities, but it does not ensure that the system meets detection requirements. As a result, calibration could indicate that a unit is functioning even when its detection capabilities have degraded; it establishes that the technology is functional, not that it meets detection requirements. TSA officials stated that they plan to develop a process to review screening technologies on an annual basis to analyze their performance, including detection over time.
TSA officials stated that, as of August 2019, they were actively working on developing a review process for the explosives detection system but did not have a date for when they planned to complete it. TSA officials for the passenger and carry-on screening technologies stated that they had not yet started developing a review process for those technologies and the timeline for developing a review process will depend on funding. TSA officials also noted that there are challenges in designing a process to ensure that screening technologies continue to meet detection requirements after deployment. For example, TSA and S&T officials stated that it is not feasible to conduct live explosives testing in airports. Further, according to TSA officials, while it is relatively easy to temporarily transfer smaller screening technologies, such as explosives trace detection and bottled liquid scanner units, to a controlled setting for live explosives testing, it would not be feasible to transfer larger installed units, such as advanced imaging technology. Although testing with live explosives in an airport poses undue risks and transferring larger machines for testing may be costly, TSA could develop other measures. TSA officials stated that there is no requirement to ensure that its screening technologies continue to meet detection requirements after deployment to airports. However, Standards for Internal Control in the Federal Government calls for agencies to establish and operate a system to continuously monitor the quality of performance over time. Without taking additional steps to ensure screening technologies are meeting detection requirements, TSA may run the risk that its deployed screening technologies are not detecting explosives and other prohibited items. 
Developing and implementing a process to monitor screening technologies' detection performance over time would help provide TSA assurance that screening technologies continue to meet detection requirements, as appropriate, after deployment. In doing so, TSA would also be better positioned to take any necessary corrective actions if or when screening technologies no longer operate as required.

TSA Spent an Estimated $3.1 Billion to Purchase, Deploy, Install, and Maintain its Fiscal Year 2018 Inventory of Screening Technologies

We estimate that TSA spent $3.1 billion to purchase, deploy, install, and maintain its inventory of screening technologies as of the end of fiscal year 2018, based on agency estimates of costs. Of this $3.1 billion, we estimate that TSA spent 71 percent to purchase screening technologies, 9 percent to deploy, about 12 percent to install, and, for fiscal year 2018, about 9 percent to maintain them for 1 year. The highest estimated total expenditures on a per-technology basis were for explosives detection systems ($2.1 billion, or 68 percent), advanced technology x-ray ($443 million, or 14 percent), explosives trace detection ($227 million, or 7 percent), and advanced imaging technology ($197 million, or 6 percent). Table 1 provides information on estimated expenditures for TSA's September 2018 inventory of screening technologies, by screening technology and life-cycle phase (i.e., purchase, deploy, install, and maintain). Appendix III provides additional information on estimated TSA expenditures, such as prices per unit of technology and estimated expenditures by airport category. TSA has also incurred, or plans to incur, costs for additional actions related to screening technologies. Specifically, it has incurred costs for modifications to commercial airport facilities to accommodate screening technologies.
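As a quick arithmetic check (not part of GAO's methodology), the per-technology percentages reported above follow from the rounded dollar totals in the text; the figures below are those rounded estimates, not an independent data source:

```python
# Rounded estimates from the report, in millions of 2018 dollars.
TOTAL = 3_100  # estimated $3.1 billion total expenditures

by_technology = {
    "explosives detection systems": 2_100,
    "advanced technology x-ray": 443,
    "explosives trace detection": 227,
    "advanced imaging technology": 197,
}

def share(amount_millions, total=TOTAL):
    """Whole-percent share of total estimated expenditures."""
    return round(100 * amount_millions / total)

for name, amount in by_technology.items():
    print(f"{name}: {share(amount)} percent")
```

Running this reproduces the reported shares (68, 14, 7, and 6 percent); the small discrepancies from exact division reflect rounding in the source figures.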
Further, TSA estimates additional life-cycle costs of $804 million to acquire, deploy, and maintain computed tomography systems through fiscal year 2026. The following provides more information on these estimated expenditures.

Airport modifications. TSA incurs costs related to modifying commercial airports to accommodate certain screening technologies, such as checked baggage screening systems (e.g., explosives detection systems). In December 2017, we reported that TSA had obligated at least $783 million from fiscal years 2012 through 2016 to reimburse airports for the allowable design and construction costs associated with installing, updating, or replacing screening technology. For example, TSA may enter into agreements to reimburse airport operators for a percentage of the allowable design and construction costs associated with facility modifications needed for installing, updating, or replacing in-line explosives detection systems. In-line screening systems use conveyor belts to route checked luggage through explosives detection systems, which capture images of the checked baggage to determine if a bag contains threat items not permitted for transport, including explosives. From fiscal years 2012 through 2016, agreements for TSA reimbursements to airports for checked baggage screening systems generally ranged in value from $50,000 to $150 million. As we reported in December 2017, in general, depending on the airport's size, TSA will reimburse 90 or 95 percent of the allowable, allocable, and reasonable cost of certain projects. For other projects, TSA may provide 100 percent reimbursement—for example, where existing systems require the correction of security or safety deficiencies.

Computed tomography. In addition to its fiscal year 2018 inventory, TSA is currently in the process of deploying computed tomography to commercial airports to replace advanced technology x-ray systems.
Computed tomography technology applies sophisticated algorithms to detect explosives and other prohibited items and creates a 3D image of carry-on baggage that a TSO can view and rotate 360 degrees. In fiscal year 2018, TSA determined that computed tomography is the best technology available to address rapidly evolving threats in the transportation sector, and plans to eventually deploy it to all checkpoints and replace advanced technology x-ray technology. As recorded in TSA's Deployed Locations Report, TSA had deployed 11 computed tomography systems to category X and I airports as of September 24, 2018. According to TSA's September 2018 life-cycle cost estimates, the agency plans to field 883 units by fiscal year 2026. As shown in table 2, TSA also planned to spend $805 million to purchase, deploy, and maintain this new technology through fiscal year 2026. However, in August 2019, TSA officials told us that they expect this estimated total procurement cost of $805 million to likely decrease as the per unit cost had decreased from $400,000 to $233,000 in the initial fiscal year 2019 contract for computed tomography.

Conclusions

TSA has invested billions of dollars in screening technologies as it responds to terrorists' attempts to use homemade explosives to disrupt and damage civil aviation. Forecasted increases in passenger volumes and ongoing terrorist threats make it imperative that TSA employ recommended management and internal control practices. TSA could help ensure that critical detection standards are developed in accordance with approved practices, and that agency goals are effectively met by updating its guidance for developing standards. Additionally, by documenting key decisions in the development of detection standards, TSA could better assure the effectiveness of decision-making and the retention of organizational knowledge in the face of inevitable changes in personnel.
Similarly, when making technology deployment decisions, incorporating DHS-recommended practices for risk management would improve TSA's ability to effectively fulfill its mission to secure the nation's civil aviation system. While TSA assesses risk when deciding whether to invest in a new technology to address an identified capability gap, it is unclear to what extent it considers risk when determining where and in what order to deploy approved screening technologies to airports. DHS guidance for homeland security risk management calls for risk to be considered consistently and comprehensively in all aspects of an agency's work. Additionally, risk management includes transparent disclosure of the rationale behind decision-making so that stakeholders can understand how key factors were weighed. Incorporating these risk management principles into its decision-making for deploying screening technologies to airports would allow TSA to align its deployment strategies with potential threats, vulnerabilities, and consequences. Lastly, TSA cannot ensure that its screening technologies continue to meet detection requirements after they have been deployed to airports. Developing and implementing a policy to ensure that TSA's screening technologies continue to meet their respective detection requirements after deployment may assure the agency that its deployed screening technologies are effectively detecting explosives and other prohibited items that they are designed to identify, which is a critical part of TSA's mission.

Recommendations for Executive Action

We are making the following five recommendations to TSA:

The TSA Administrator should update TSA guidance for developing and approving screening technology explosives detection standards to reflect designated procedures, the roles and responsibilities of stakeholders, and changes in the agency's organizational structure.
(Recommendation 1)

The TSA Administrator should require and ensure that TSA officials document key decisions, including testing and analysis decisions, used to support the development and consideration of new screening technology explosives detection standards. (Recommendation 2)

The TSA Administrator should require and ensure that TSA officials document their assessments of risk and the rationale—including the assumptions, methodology, and uncertainty considered—behind decisions to deploy screening technologies. (Recommendation 3)

The TSA Administrator should develop a process to ensure that screening technologies continue to meet detection requirements after deployment to commercial airports. (Recommendation 4)

The TSA Administrator should implement the process it develops to ensure that screening technologies continue to meet detection requirements after deployment to commercial airports. (Recommendation 5)

Agency Comments and Our Evaluation

We provided a draft of this report to DHS for review and comment. DHS provided written comments, which are reproduced in full in appendix IV. DHS concurred with our five recommendations and described actions undertaken or planned to address them. TSA also provided technical comments, which we incorporated as appropriate. With regard to our first recommendation that TSA update guidance for developing and approving screening technology explosives detection standards, DHS concurred and stated that TSA has included updated guidance in its Requirements Engineering Integrated Process Manual, which was completed in September 2019. According to DHS, the update provides TSA's process for developing and approving explosives detection standards, including designated procedures and roles and responsibilities of stakeholders, and reflects organizational changes to TSA.
TSA provided us with the Requirements Engineering Integrated Process Manual in November 2019, concurrent with DHS comments. We will review the update and the extent to which it addresses the recommendation. This action, if fully implemented, should address the intent of the recommendation. DHS concurred with our second recommendation that TSA ensure that officials document key decisions supporting the development of screening technology explosives detection standards. DHS stated that the updated Requirements Engineering Integrated Process Manual describes the process for documenting key decisions, including testing and analysis decisions, in the development of new detection standards. We will review the update and the extent to which it addresses the recommendation. This action, if fully implemented, should address the intent of the recommendation. DHS also concurred with our third recommendation that TSA document its assessments of risk and the rationale behind its decisions to deploy screening technologies. According to DHS, TSA has instituted an improved process for documenting elements that contribute to deployment decisions—TSA’s August 2019 deployment plan for computed tomography is an example of the process. DHS stated that TSA will continue to include a comparable level of documentation in future deployment plans for screening technologies. We agree the computed tomography deployment site selection strategy is an example of how TSA can document the rationale governing the deployment of a screening technology. Future plans can further benefit by explaining the risk analysis itself along with the role that risk considerations played in the selection of airports for deployment. Formalizing guidance that directs TSA officials to document risk assessments and the rationale behind deployment decisions would help TSA ensure that its deployment of screening technologies matches potential risks. 
DHS concurred with our fourth and fifth recommendations that TSA, respectively, develop and implement a process to ensure that screening technologies continue to meet all detection requirements after deployment to commercial airports. DHS stated that TSA will develop recurring individual post implementation reviews (PIR) for all screening technologies in accordance with DHS Directive 102-01, to assess multiple aspects of system performance, including detection over time. DHS also stated that TSA intends to examine the component performance of the detection chain rather than a direct measure of detection requirements, due to the limitations of using live explosives and simulants. DHS stated that because the detection chain for each technology is unique and will require individual reviews, TSA is developing a policy on the PIR development process, which it estimates will be completed by March 31, 2020. We appreciate the limitations live explosives and simulants present in testing and the need for reviews that are tailored to meet the unique characteristics of each screening technology. TSA plans to implement the review process on the first screening technology by December 31, 2020. These actions, if implemented across all applicable screening technologies, should address the intent of the recommendations. We are sending copies of this report to the appropriate congressional committees and to the Acting Secretary of Homeland Security. In addition, this report is available at no charge on the GAO website at http://gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-8777 or russellw@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made significant contributions to this report are listed in Appendix V. 
Appendix I: Transportation Security Administration (TSA) Screening Technologies

This appendix presents additional details on the TSA screening technologies we reviewed, including their function and the number of units deployed.

Appendix II: Objectives, Scope, and Methodology

This report addresses Transportation Security Administration's (TSA) processes for developing and deploying screening technologies to airports regulated by TSA (i.e., "commercial" airports). Specifically, we examined
1. the extent to which TSA has a process for developing explosives detection standards for screening technologies in response to identified emerging threats;
2. how TSA operationalizes detection standards to update detection capabilities;
3. the extent to which TSA has considered risk when deploying screening technologies to commercial airports;
4. the extent to which TSA ensures screening technologies meet the requirements for detection standards after deployment; and
5. TSA's estimated expenditures to purchase, deploy, install, and maintain its inventory of screening technologies as of the end of fiscal year 2018.

To address all of our objectives, we identified 11 screening technologies TSA used to screen passengers' identification documents, persons, carry-on bags, and checked baggage at commercial airports as of September 24, 2018, as recorded in TSA's Government Property Management database. The seven screening technologies in use at commercial airport passenger checkpoints were advanced imaging technology, advanced technology x-ray machine, bottled liquid scanner, boarding pass scanner, chemical analysis device, threat image projection x-ray, and walk-through metal detector. The credential authentication technology and computed tomography, also used at checkpoint screening, were deployed and in use at select airports as TSA pilot projects.
The two TSA screening technologies in use at commercial airports for checked baggage were explosives detection systems and explosives trace detection (TSA also uses explosives trace detection for checkpoint screening). We assessed the reliability of TSA’s inventory data by interviewing agency officials and reviewing related documentation, such as the database user manual, among other things. We determined the data were sufficiently reliable to determine the type and number of TSA screening technologies deployed as of September 2018. To better understand how TSA screening technologies have been used, we reviewed reports from the U.S. Department of Homeland Security (DHS) Office of the Inspector General, the Congressional Research Service, past GAO reports, and relevant DHS and TSA documentation, such as DHS and TSA strategic documents and acquisition plans. To observe TSA screening procedures and the operation of screening technologies in the airport setting, we conducted site visits to seven commercial airports. During these visits we discussed screening technology issues with TSA federal security directors or their representatives. We selected these airports to reflect a range of airport categories, technologies, and geographic diversity. The results of these site visits and interviews cannot be generalized to all commercial airports, but they provided us with important context about the installation, use, and maintenance of TSA screening technologies across the different types of airports that TSA secures. We also conducted a site visit to the TSA Systems Integration Facility to better understand how screening technologies are tested and evaluated prior to deployment. Further, we interviewed officials from two industry associations and one screening technology manufacturers association based on input from TSA and DHS Science and Technology Directorate (S&T) officials. 
To determine the extent to which TSA has a process for developing explosives detection standards, we examined TSA documents such as approved detection standards, action memos summarizing support for proposed detection standards, the Detection Requirements Update Standard Operating Procedure, and briefing slides describing TSA’s process, as of August 2019, for assessing threat materials and developing detection standards. We also evaluated Material Threat Assessment reports that summarized the testing and analyses performed by S&T’s Homemade Explosives Characterization Program, in coordination with S&T laboratories, to characterize (identify the physical density and mass of) explosive materials for detection standards developed from fiscal years 2014 through 2018. We evaluated S&T’s testing and analyses in accordance with TSA and S&T guidance to determine the extent to which these steps were consistent across materials; we did not analyze the sufficiency of the testing and analyses. We also assessed TSA and S&T processes and the extent to which they were documented in accordance with Standards for Internal Control in the Federal Government, and discussed the details of steps taken to develop standards with relevant TSA and S&T officials. In addition, we conducted a site visit to S&T’s Commercial Aircraft Vulnerability and Mitigation Program testing site at the U.S. Army Aberdeen Test Center, Maryland, to better understand how S&T tests the vulnerability of commercial aircraft to explosive materials. To understand TSA’s process and timelines for operationalizing—putting into effect—detection standards, we requested information from TSA about screening technologies subject to explosives detection standards, deployed as of September 24, 2018: advanced imaging technology, advanced technology x-ray, bottled liquid scanner, explosives detection systems, and explosives trace detection. 
We requested information about the detection standards that deployed screening technologies met, as of August 2019, as well as subsequently approved detection standards, including the date the standards were approved, the dates when TSA achieved certain acquisition milestones when developing and deploying the associated technologies, and the status of ongoing and upcoming efforts to update detection capabilities to meet new standards. We identified the acquisition milestones by reviewing a past GAO report on TSA’s acquisition process and in consultation with GAO acquisition experts. We also reviewed a classified TSA report that evaluated the performance of a particular algorithm in order to understand TSA’s process for developing new screening technologies to meet detection standards. In addition, we reviewed relevant acquisition documents, such as DHS’s Acquisition Management Instruction 102, the 2018 Transportation Security Administration Systems Acquisition Manual, acquisition decision memos, acquisition plans, and Operations Requirements Documents. To understand TSA’s process for deciding whether to operationalize detection standards, we requested and reviewed available documentation for the standards that TSA had not operationalized, such as an operational status transition memo for bottled liquid scanner, and interviewed TSA officials about those decisions. To understand how TSA had considered risk in its approach to deploying screening technologies at airports, we reviewed available documentation related to TSA’s deployment decisions. These included decision memos from acquisition review board meetings and action memos to TSA leadership; risk registers for checked baggage and checkpoint acquisition programs; available deployment plans, such as the agency’s Action Plan for deploying explosives trace detection units to airports in 2018; and acquisition guidance. 
To understand how TSA assesses capability needs and gaps, we interviewed agency officials about TSA’s Transportation Security Capability Analysis Process and reviewed capability analysis reports from 2018 and 2019, as well as TSA’s prioritized list of capability gaps and needs. We also interviewed acquisition officials, including TSA’s Component Acquisition Executive, about the role of risk in deployment decisions and requested written responses to specific questions. We assessed TSA’s decision-making process for deploying and updating screening technologies, generally, against DHS risk management criteria, such as DHS’s Risk Management Fundamentals. We also reviewed related areas of risk management and decision-making to understand the context in which TSA makes deployment decisions. Specifically, we reviewed the 2017 Transportation Sector Security Risk Assessment and the Cities and Airports Threat Assessment reports to understand the risks facing the nation’s aviation system. We also reviewed TSA’s enterprise risk management framework, such as the Enterprise Risk Management Policy Manual, to understand the role it played in TSA’s deployment decisions. We also interviewed an official from TSA’s Enterprise Performance and Risk office and the Executive Risk Steering Committee. To understand how TSA categorizes airports, we reviewed a 2017 Nationwide Airport Categorization Review memo from TSA’s Security Operations office and interviewed Security Operations officials. To understand how TSA deploys screening technologies across airports and categories of airports, we analyzed TSA’s Deployed Locations Report, which reported on technologies that were in use or available for use at commercial airports from September 24 through September 30, 2018. We also reviewed TSA’s standardized methodology for determining the most efficient number of screening technologies at an airport. 
Additionally, we reviewed TSA’s Strategic Five-Year Technology Investment Plan from 2015 and the 2017 Biennial Refresh to understand TSA’s plans for ongoing investment in screening technologies. We reviewed various throughput data, such as annual passenger throughput for all commercial airports for fiscal year 2018 and enplanements data for calendar year 2017, to understand and compare TSA’s allocation of screening technologies with throughput data across airports and airport categories. We used this analysis to identify airports that had an unusually large or small number of screening technologies within a category, and interviewed TSA officials to understand the decisions that led to the allocation of screening technologies across airports and airport categories. In addition, we reviewed the status of TSA’s limited deployment of computed tomography units to checkpoints. Specifically, we reviewed TSA’s 2018 Deployment Site Selection Strategy, which described the airports to which TSA would deploy computed tomography units and the methodology it used to select them, slides from recent conferences TSA held with industry representatives where it shared its plans for transitioning to computed tomography, and relevant Operational Requirements Documents. We also interviewed agency officials about their plans for the limited deployment and TSA’s transition from advanced technology x-ray to computed tomography for checkpoint screening. To determine the extent to which TSA ensures its screening technologies continue meeting detection requirements after deploying them to airports, we reviewed TSA acquisition detection requirements for each screening technology as well as TSA guidance related to the testing and evaluation of screening technologies identified by TSA officials in interviews. 
We also interviewed TSA and S&T Transportation Security Laboratory officials about TSA requirements to test screening technologies, both prior to and after deployment, to determine the extent to which they meet detection requirements. We also observed transportation security officers and a transportation security specialist for explosives conduct verification and calibration procedures on screening technologies at the airports we visited. We reviewed TSA guidance to determine the extent to which its procedures ensure that screening technologies continue to meet detection requirements in airports. We then evaluated the procedures against Standards for Internal Control in the Federal Government for monitoring. To identify TSA's estimated expenditures to purchase, deploy, install, and maintain its inventory of screening technologies as of the end of fiscal year 2018, we reviewed TSA programs' life-cycle cost estimates, which, for the purposes of acquisition planning, provide per-unit estimates of the cost to purchase, deploy, install, and maintain passenger and checked baggage screening technologies. We chose this methodology in consultation with TSA officials and after determining that historical records of obligations and expenditures do not provide consistent and sufficient detail for the purposes of our analysis. The life-cycle cost estimates include relevant phases for each screening technology (i.e., purchase, deploy, install, and maintain), although not all technologies have cost estimates for each phase of the life cycle. For example, some screening technologies may not specify deployment costs because such costs are included in the initial purchase price of the unit. In other cases, the technology does not have a deployment cost because the unit is small and portable, and placement of the unit is therefore handled by TSA airport staff at no charge.
Estimated expenditures for installation also include costs associated with site acceptance testing, which is performed when a system is installed at its operational location. Unlike the purchase, deploy, and install unit prices, the maintenance unit price is the yearly cost of maintenance for one unit, and therefore recurs every year. We assessed the reliability of the life-cycle cost estimates by reviewing documentation on the development of the estimates and interviewing TSA officials, among other things, and determined the estimates were sufficiently reliable for the purpose of estimating the amount of funds spent on acquiring, deploying, installing, and maintaining TSA's inventory of screening technologies as of the end of fiscal year 2018. Because the life-cycle cost estimates were developed in different years, we used TSA guidelines to adjust costs for inflation and convert our estimates to 2018 dollars. We multiplied these per-unit estimates by the number of screening technologies deployed to commercial airports as of September 24, 2018, using data from TSA's Government Property Management database. For computed tomography, we also obtained information on price and quantity from the technology's life-cycle cost estimate and TSA officials. We also reviewed prior GAO work on TSA cost sharing programs for airport facility modification related to installation of some of the technologies in our review. We conducted this performance audit from April 2018 to December 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
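The estimation approach described in this appendix (per-unit life-cycle costs adjusted to 2018 dollars and multiplied by deployed counts, with maintenance recurring annually) can be sketched as follows. All unit prices, the inflation factor, counts, and service years here are invented for illustration:

```python
# Illustrative sketch of the expenditure estimate (all figures invented).
# Per-unit costs are adjusted to 2018 dollars, multiplied by the number of
# deployed units, and maintenance recurs for each year a unit is in service.

def to_2018_dollars(cost, inflation_factor):
    """Adjust a cost from its estimate's base year to 2018 dollars using a
    cumulative inflation factor (hypothetical factor used below)."""
    return cost * inflation_factor

def estimated_expenditure(unit_costs, inflation_factor, units, years_in_service):
    """unit_costs: dict with one-time 'purchase'/'deploy'/'install' prices and
    a recurring annual 'maintain' price; some phases may be absent, as when
    deployment cost is bundled into the purchase price."""
    one_time = sum(unit_costs.get(p, 0) for p in ("purchase", "deploy", "install"))
    recurring = unit_costs.get("maintain", 0) * years_in_service
    return to_2018_dollars(one_time + recurring, inflation_factor) * units

# Hypothetical technology with no separate deploy cost
costs = {"purchase": 100_000, "install": 10_000, "maintain": 5_000}
total = estimated_expenditure(costs, inflation_factor=1.05, units=200,
                              years_in_service=4)
print(f"${total:,.0f}")  # prints $27,300,000
```

The sketch mirrors the structure of the analysis, not TSA's actual unit prices or inflation guidelines.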
Appendix III: Transportation Security Administration (TSA) Estimated Expenditures for Screening Technologies

We estimate that TSA spent $3.1 billion to purchase, deploy, install, and maintain its inventory of screening technologies, as of the end of fiscal year 2018, based on agency estimates of costs. Tables 3 through 5 provide information on estimated TSA expenditures by screening technology, life-cycle phase, and airport category. To analyze TSA's estimated spending to purchase, deploy, install, and maintain its inventory of screening technologies as of the end of fiscal year 2018, we reviewed TSA life-cycle cost estimates, which, for the purposes of acquisition planning, provide per-unit estimates of the cost to purchase, deploy, install, and maintain passenger and checked baggage screening technologies at TSA-regulated airports (i.e., "commercial" airports). Because the life-cycle cost estimates were developed in different years, we used the same guidelines TSA uses to adjust costs for inflation and convert our estimates to 2018 dollars. We multiplied these per-unit estimates by the number of screening technologies deployed to commercial airports as of September 24, 2018.

Appendix IV: Comments from the Department of Homeland Security

Appendix V: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Kevin Heinz (Assistant Director), Barbara Guffy (Analyst in Charge), Kelsey Burdick, Jonathan Felbinger, Tyler Kent, Thomas Lombardi, Erin O'Brien, Kya Palomaki, Rebecca Parkhurst, and Dina Shorafa made key contributions to this report. In addition, key support was provided by Chuck Bausell, Richard Cederholm, Dominick Dale, Aryn Ehlow, Michele Fejfar, Eric Hauswirth, Richard Hung, and Alexis Olson.
Why GAO Did This Study

TSA is responsible for overseeing security operations at roughly 440 TSA-regulated airports as part of its mission to protect the nation's civil aviation system. TSA uses technologies to screen passengers and their bags for prohibited items. The TSA Modernization Act includes a provision for GAO to review TSA's deployment of screening technologies, and GAO was asked to review the detection standards of these screening technologies. This report addresses, among other things, (1) how TSA operationalizes detection standards, (2) the extent to which TSA considered risk when making deployment decisions, and (3) the extent to which TSA ensures technologies continue to meet detection requirements after deployment. GAO reviewed DHS and TSA procedures and documents, including detection standards; visited DHS and TSA testing facilities; observed the use of screening technologies at seven airports, selected for varying geographic locations and other factors; and interviewed DHS and TSA headquarters and field officials.

What GAO Found

The Department of Homeland Security's (DHS) Transportation Security Administration (TSA) operationalizes, or puts into effect, detection standards for its screening technologies by acquiring and deploying new technologies, which can take years. Detection standards specify the prohibited items (e.g., guns, explosives) that technologies are to detect, the minimum rate of detection, and the maximum rate at which technologies incorrectly flag an item. TSA operationalizes standards by adapting them as detection requirements, working with manufacturers to develop and test new technologies (software or hardware), and acquiring and deploying technologies to airports. For the standards GAO reviewed, this process took 2 to 7 years, based on manufacturers' technical abilities and other factors.
TSA's deployment decisions are generally based on logistical factors, and it is unclear how risk is considered in determining where and in what order technologies are deployed because TSA did not document its decisions. TSA considers risks across the civil aviation system when making acquisition decisions. However, TSA did not document the extent to which risk played a role in deployment, and could not fully explain how risk analyses contributed to those decisions. Moving forward, increased transparency about TSA's decisions would better ensure that deployment of technologies matches potential risks. Technology performance can degrade over time; however, TSA does not ensure that technologies continue to meet detection requirements after deployment to airports. TSA certifies technologies to ensure they meet requirements before deployment, and screeners are to regularly calibrate deployed technologies to demonstrate they are minimally operational. However, neither process ensures that technologies continue to meet requirements after deployment. In 2015 and 2016, DHS tested a sample of deployed explosives trace detection and bottled liquid scanner units and found that some no longer met detection requirements. Developing and implementing a process to ensure technologies continue to meet detection requirements after deployment would help ensure that TSA screening procedures are effective and enable TSA to take corrective action if needed.

What GAO Recommends

GAO is making five recommendations, including that TSA document analysis of risk in deploying technologies, and implement a process to ensure technologies continue to meet detection requirements after deployment. DHS agreed with all five recommendations and said TSA either has taken or will take actions to address them.
Background

Table 1 describes the activities that USDA's mission areas and major staff offices perform as part of five types of administrative services that USDA business centers are to provide under the Secretary of Agriculture's November 2017 memorandum. At USDA, eight mission areas and three of the 13 major department-level staff offices, including five sub-offices, are responsible for delivering or overseeing these five types of administrative services (see fig. 1). USDA's eight mission areas carry out the department's program responsibilities through 18 agencies. Five mission areas consist of multiple agencies, while three consist of a single agency, as shown below. In general, USDA's eight mission areas deliver the administrative services, and the staff offices develop regulations, guidance, and policies describing how mission areas should deliver those services and oversee the mission areas' performance. In addition, the staff offices deliver some administrative services on a department-wide or shared-services basis. According to USDA officials, the mission areas are to follow the regulations, guidance, and policies developed by the staff offices but are allowed considerable discretion in how they deliver administrative services based on their missions and program needs. According to USDA officials and documentation, service delivery is typically handled by a mission area's field offices at the regional, state, or local level; however, with the establishment of the business centers, more service is being delivered at the mission area's headquarters level.

USDA Has Established Business Centers in All of Its Eight Mission Areas, and the Business Centers Vary in Establishment Date, Structure, and Services

USDA has consolidated administrative services and established business centers in all of its eight mission areas in accordance with the Secretary's November 2017 memorandum. The eight existing business centers vary in when they were established.
As shown in figure 2, three mission areas had business centers before the Secretary’s memorandum. However, even the mission areas that had business centers before the Secretary’s November 2017 memorandum subsequently changed the way they provide administrative services, specifically with regard to information technology services. Two mission areas—Marketing and Regulatory Programs and Research, Education, and Economics—added information technology to their business centers during fiscal year 2019. In 2019, the Natural Resources and Environment mission area, which already included information technology in its business center, changed the position descriptions of certain employees to more accurately reflect that their major duties are considered to be information technology work. Of the five new business centers established since the Secretary’s memorandum, establishment of the FPAC Business Center entailed the most significant transformation. Typically, each business center is located within one of the mission area’s component agencies and the center’s leader reports directly to that agency’s leadership (see table 2). The FPAC Business Center is the only business center established as a separate agency within a mission area. Changes that occurred at other mission areas in transitioning to new business centers included modifying reporting structures for services that had already been consolidated. For example, according to Rural Development officials, the mission area had a business services entity prior to the Secretary’s memorandum. To establish a business center as envisioned by the Secretary’s memorandum, the mission area changed the reporting structure for administrative operations in the field. Previously, field employees associated with an administrative service reported directly to leadership in Rural Development’s state offices. These employees now report directly to headquarters leadership specific to their administrative service. 
However, according to Rural Development officials, no employees were physically moved. As of November 2019, most of the business centers were providing all five of the main administrative services that the Secretary's November 2017 memorandum envisioned—specifically, financial management, human resources, information technology, procurement, and property management. Two business centers have chosen to provide financial management services differently from the other administrative services. Specifically:

Food Safety. According to officials in the Food Safety mission area, as part of its reorganization, that mission area grouped all of the administrative services except financial management under the Chief Operating Officer. However, it grouped the budget office, which performs financial management services, under the agency's Chief Financial Officer because it preferred to keep this office with mission-related program offices, which report directly to the Deputy Administrator.

Natural Resources and Environment. Officials in the Natural Resources and Environment mission area said that unlike other administrative services, which are grouped under the business center, financial management responsibilities are divided between the business center's Office of Strategic Planning, Budget, and Accountability and the Forest Service's Office of the Chief Financial Officer. According to these officials, this arrangement strengthens internal controls by separating responsibility for allocating and spending financial resources from responsibility for accounting for how the resources are spent.

One business center—in the Trade and Foreign Agricultural Affairs mission area—provides information technology and financial management services for Foreign Agricultural Service employees and has agreements in place with other USDA components to provide human resources, procurement, and property management services for the mission area.
According to the Deputy Assistant Secretary for Administration, USDA accepted these mission areas' decisions about financial management because they ensured accountability of field-level staff to the administrative service's headquarters leadership.

USDA Has Developed Metrics for Managing Administrative Services but Has Not Assessed the Effectiveness and Impact of Its Business Centers

According to USDA's Deputy Assistant Secretary for Administration, the department regularly reviews data on administrative services, including services provided by the business centers. However, the department does not use these or other data to assess the effectiveness and impact of its business centers and as of November 2019 did not plan to do so. Beginning in 2018, USDA created an online monitoring system to compile data from mission areas on the status of their administrative services. The system has "dashboards" displaying data specific to financial management, human resources, information technology, procurement, and property management, among other things. Each of the dashboards presents metrics gathered from various databases across mission areas. For example, the dashboards for human resources include the number of employees by organization, along with their geographic location, retirement eligibility, occupation, and any skills gaps. According to USDA officials, the dashboards allow department-level review of a large number of metrics on a range of administrative activities performed by the business centers—data that previously were available only to each mission area. USDA's Deputy Secretary discusses performance on various dashboards with mission area and staff office leadership at quarterly review meetings. However, the department has not used dashboards or associated metrics to assess the effectiveness and impact of the business centers.
Specifically, the department has not assessed the impact that the business centers have had on USDA's customer service; human resources, including hiring; and overall functionality. According to the Deputy Assistant Secretary for Administration, creating new business centers and changing existing ones has contributed to positive results, such as savings from reducing the size of USDA's vehicle fleet, but USDA's Departmental Administration has not systematically compared USDA's ability to deliver its administrative services before and after these reforms. For example, the department has not examined whether the reforms have enabled mission areas to reduce costs, reduce processing times, or identify previously unknown issues that need to be addressed. According to USDA officials, these business center reforms broadly addressed the first policy goal in USDA's May 2018 strategic plan for fiscal years 2018 through 2022—namely that USDA programs be delivered efficiently, effectively, and with integrity and a focus on customer service. However, USDA officials told us that they have not yet attempted to measure how the business center reforms have met the three overarching policy goals identified in the Secretary of Agriculture's November 2017 memorandum, which called for the business center reforms to (1) improve customer engagement, (2) maximize efficiency, and (3) improve agency collaboration. In addition, some stakeholders we interviewed expressed concern about progress toward these goals as USDA works to implement the business center reforms. For example:

Staffing vacancies. Some stakeholders raised concerns about the impact of vacancy rates at business centers on customer engagement. The two largest business centers created since November 2017—in FPAC and Rural Development—had position vacancy rates above 27 percent as of September 30, 2019.
Officials with one group representing farmers who are customers of the FPAC and Rural Development mission areas told us they were concerned that (1) vacancies in the business center may be leading to vacancies among program staff in the field, (2) complaints related to staffing have increased over the past few years, and (3) staffing vacancies in the field are negatively affecting customer service. An official from another group representing farmers told us that the group is hearing from its members that there have been a lot of changes within USDA lately and field offices seem to be understaffed and overwhelmed even after the creation of the business centers, which could be negatively affecting the quality of customer service. Vacancies at the FPAC and Rural Development business centers, particularly among staff responsible for hiring USDA program staff in the field, could therefore affect both access to and the quality of technical assistance.

Employee concerns. In the FPAC Business Center, officials from one union representing employees told us that confusion among employees about their roles and responsibilities could affect both internal employee satisfaction and the overall ability of the business center to serve the FPAC mission and its customers. Specifically, these union officials noted employees' confusion about how to reconcile differences among the work procedures that each of the three FPAC agencies used before the reorganization. Officials from this and one other union also stated that employees have reported that business center leadership has not taken action to address such employee concerns. As a result, according to officials from both unions, FPAC business center employees are experiencing low morale, confusion, frustration, and anxiety about the changes, affecting their ability to deliver services.
In response, FPAC officials told us in November 2019 that the FPAC Business Center is working on empowering employees, hiring, establishing a culture of accountability, building trust and engagement, and addressing other issues that have arisen in the business center’s first year of operation. For example, these officials said they were reviewing the business center’s organizational structure to determine whether there is a need for adjustments to further streamline operations and improve service. USDA officials cited several reasons the department has not assessed the effect of the business center reform effort undertaken in response to the Secretary’s November 2017 memorandum. According to the Deputy Assistant Secretary for Administration, the absence of evaluation is partly attributable to the department’s strategy of delegating responsibility to the mission areas to implement business centers; this strategy aims to give the mission area leadership ownership of the reform effort and help ensure their buy-in. The Deputy Assistant Secretary for Administration also said that the department has focused on implementing the reforms called for in the memorandum rather than on evaluating the results. USDA officials also pointed out that the reform effort is relatively recent, with five of the business centers having been created since June 2018. However, the Deputy Assistant Secretary for Administration acknowledged the importance of evaluating and communicating any benefits derived from the business center reform effort as it moves forward. Our prior work has shown that a key practice to consider during agency reform efforts is the establishment of clear outcome-oriented goals and performance measures to assess the reform’s effectiveness and impact. 
As we have previously reported, a performance goal is a target level of performance expressed as a measurable objective; a performance measure includes an assessment of results compared with intended purpose that can be expressed quantitatively or in another way that indicates a level or degree of performance. Monitoring performance against goals allows agencies to assess progress and address problems as necessary. While USDA has not developed goals and measures to assess the effectiveness and impact of the business center reforms, the department has set goals for a limited number of administrative services, including hiring, the number of fleet vehicles, and travel and conference spending. In addition, parts of the department have developed goals and measures for the administrative services their business centers provide. For example, officials in the Research, Education, and Economics mission area reported nine key performance indicators for their administrative services, such as specific goals and measures for the timeliness of posting job opportunity announcements. Developing appropriate performance goals and measures and systematically assessing the effectiveness and impact of the business center reforms could help the department determine whether the reforms are meeting the Secretary's overarching policy goals and improving the delivery of administrative services to support the department's mission and program goals.

Conclusions

USDA has established business centers in all of its eight mission areas, and, according to USDA's Deputy Assistant Secretary for Administration, the department regularly reviews data on administrative services, including services provided by the business centers.
However, the department has not systematically assessed whether USDA's ability to deliver its administrative services has improved since the establishment of its business center reforms or whether the reforms are meeting the policy goals that the Secretary intended them to achieve. Importantly, the department has not assessed the impact that the business centers have had on USDA's customer service; human resources, including hiring; and overall functionality. Our prior work has shown that a key practice to consider during agency reform efforts is the establishment of clear outcome-oriented goals and performance measures to assess the reform's effectiveness and impact. The department has set goals for a limited number of administrative services, including hiring, the number of fleet vehicles, and travel and conference spending, but it has not developed goals and measures to more broadly assess the effectiveness and impact of the business center reforms. Developing such goals and measures and using them to assess the effectiveness and impact of the business center reforms could help the department (1) determine whether the reforms are meeting the Secretary's overarching policy goals and (2) identify whether the reforms have enabled mission areas to improve the delivery of their administrative services by, for example, reducing costs, reducing processing times, or identifying previously unknown issues that need to be addressed.

Recommendation for Executive Action

The Secretary of Agriculture should direct Departmental Administration to work with the mission areas to develop department-level outcome-oriented performance goals and related measures for the business centers, and use them to assess the effectiveness and impact of the business center reforms. (Recommendation 1)

Agency Comments

We provided a draft of this report to USDA for comment.
In an email, a Senior Advisor in USDA's Office of Operations stated that USDA agreed with our recommendation about assessing the effectiveness and impact of the business centers. In addition, in comments, reproduced in appendix II, USDA generally agreed with the findings in our draft report. USDA stated that to address our recommendation, the department is evaluating options for the development of performance metrics and inclusion of these metrics and related information as part of the regular and recurring reviews by the department's Deputy Secretary, who is identified as the Chief Operating Officer. We are sending copies of this report to the appropriate congressional committees, the Secretary of Agriculture, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions regarding this report, please contact me at (202) 512-3841 or morriss@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

Appendix I: Funding for the Farm Production and Conservation Business Center, Including Information Technology Modernization

Since the U.S. Department of Agriculture (USDA) established the Farm Production and Conservation (FPAC) Business Center in October 2018, Congress has appropriated a total of about $294 million to USDA for necessary expenses of the FPAC Business Center. USDA has also approved $1.1 million for efforts to modernize information technology at the center through fiscal year 2020.

USDA Has Funded the FPAC Business Center with Discretionary and Mandatory Appropriations

USDA has supported the FPAC Business Center with discretionary and mandatory appropriations.
USDA budget documents and congressional report language indicate that these appropriations have been accompanied by corresponding reductions in funding to the other three agencies within the FPAC mission area—the Farm Services Agency (FSA), Natural Resources Conservation Service (NRCS), and Risk Management Agency (RMA). For fiscal year 2018, the Consolidated Appropriations Act, 2018, provided discretionary appropriations of about $1.0 million to the FPAC Business Center and further provided for the transfer into the FPAC Business Center account of another $145,000 in mandatory appropriations. Subsequent USDA budget justification documents state that the $145,000 included funds directed towards three NRCS programs—the Environmental Quality Incentives Program (EQIP), Conservation Stewardship Program (CSP), and Agricultural Conservation Easement Program (ACEP). As shown in table 3, for fiscal year 2019, the Consolidated Appropriations Act, 2019, provided for the FPAC Business Center to receive discretionary appropriations of about $216.4 million, an amount that an accompanying conference report states was offset by reductions to the appropriations for administrative functions in FSA, NRCS, and RMA; a transfer of about $16.1 million in discretionary appropriations from FSA’s Agricultural Credit Insurance Fund Program Account; and a transfer of about $60.2 million in mandatory appropriations that, according to USDA officials, came from the same three NRCS programs as in 2018 (EQIP, CSP, and ACEP). According to USDA officials, prior to the establishment of the FPAC Business Center, these funds were used to support the salaries of FSA, NRCS, and RMA personnel performing functions and tasks similar to those provided by the business center and for general operating costs such as rents, information technology, travel, and training expenses. 
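As a quick arithmetic check, the fiscal year 2019 components reported above (rounded figures, in millions of dollars) sum to the roughly $292.7 million the business center had available that year:

```python
# Reconciling the FPAC Business Center's fiscal year 2019 funding
# (figures in millions of dollars, as reported in this appendix).
discretionary = 216.4        # direct discretionary appropriation
acif_transfer = 16.1         # transfer from FSA's Agricultural Credit
                             # Insurance Fund Program Account
mandatory_transfer = 60.2    # mandatory funds from EQIP, CSP, and ACEP

total_available = round(discretionary + acif_transfer + mandatory_transfer, 1)
print(total_available)  # 292.7
```

Because the reported figures are themselves rounded to the nearest $0.1 million, the sum reconciles only approximately, which is why the result is rounded before comparison.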
The FPAC Business Center plans its spending and tracks its obligations using standard categories, including personnel compensation, benefits, travel, transportation, postage, contracts, supplies, and equipment. As shown in table 4, the FPAC Business Center planned to spend funds only for personnel compensation and benefits in fiscal year 2018. According to data provided by USDA, the business center obligated about $995,000 of the nearly $1.2 million in available funds, and those obligations were entirely for personnel compensation and benefits. In fiscal year 2019, the business center planned to obligate nearly 74 percent of the $292.7 million in available funds on personnel compensation and benefits, about 18 percent on contracts, about 8 percent on travel, and the rest on other activities. According to USDA officials, through the end of the fiscal year, the business center had obligated approximately $272 million, or about 93 percent, of its available funds.

USDA Has Approved $1.1 Million in FPAC Business Center Information Technology Modernization Efforts through Fiscal Year 2020

For fiscal years 2018 through 2020, USDA approved an investment of $10 million for information technology modernization across all FPAC agencies, including the following two efforts to modernize information technology in the FPAC Business Center at an estimated cost of $1.1 million:

The Modernized Directives System, approved at a cost of $600,000. According to USDA officials, the FPAC Business Center is funding this project from its salaries and expenses budget. According to USDA documents, the business center's Management Services Division wants to provide all FPAC employees an online tool to create, authorize, disseminate, and manage all of the agency's policy directives in an FPAC Consolidated Directives Repository while minimizing the costs of operations. According to the agency, the tool would streamline the tasks performed by the division's administrative staff.
FPAC plans to gauge the success of the effort by measuring adoption of the new tool by employees, stakeholders, and the public.

The National Office Information System, approved at a cost of $500,000. According to USDA officials, $41,000 of that amount is from the FPAC Business Center's budget for salaries and expenses, while the remaining $459,000 is funded by the other three FPAC agencies. According to USDA documents, this operations support system would improve the agency's ability to respond in a timely manner to congressional and departmental inquiries and meet reporting requirements from the Office of Management and Budget and other oversight organizations. According to FPAC Business Center officials, the business center obligated $600,000 and $41,000, respectively, toward these two projects in fiscal year 2019.

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Nico Sloss (Assistant Director), Stephen Cleary (Analyst in Charge), Ross Campbell, Caitlin Dardenne, Juan Garay, Scott Heacock, Serena Lo, Cynthia Norris, Lauren Ostrander, and Sara Sullivan made key contributions to this report.
Why GAO Did This Study

With budget authority of $146 billion in fiscal year 2018, USDA employs nearly 100,000 people organized into 13 major staff offices and eight mission areas comprising 18 agencies. In a November 2017 memorandum, the Secretary of Agriculture called for establishment of a business center in each mission area to provide consolidated administrative services. The memorandum identified three policy goals for these reforms: (1) improve customer engagement, (2) maximize efficiency, and (3) improve agency collaboration. The Agriculture Improvement Act of 2018 includes a provision for GAO to report on USDA's business centers. Among other things, this report examines the extent to which USDA has (1) established business centers and (2) assessed the effectiveness and impact of these business centers. GAO reviewed USDA documents and interviewed officials from USDA's Office of the Assistant Secretary for Administration, Office of Budget and Program Analysis, and eight mission areas about their efforts. GAO also interviewed representatives of USDA employee unions and USDA's external customers, such as farmers, for their perspectives on the establishment of the business centers.

What GAO Found

The U.S. Department of Agriculture (USDA) has established business centers to provide consolidated administrative services such as human resources and information technology in each of its eight mission areas, in keeping with reforms called for in a November 2017 memorandum from the Secretary of Agriculture. The business centers vary in when they were established; three preceded the Secretary's memorandum (see figure). Typically, each business center is located within one of the mission area's component agencies, and the center's leader reports directly to agency leadership. According to a USDA official, the department regularly reviews data on administrative services, including services provided by the business centers.
However, the department has not assessed the effectiveness and impact of its business centers and, as of November 2019, did not plan to do so. Beginning in 2018, USDA created an online monitoring system to compile data on the status of administrative services, with "dashboards" displaying data specific to different administrative services, among other things. However, the department has not used dashboards or associated metrics to assess the effectiveness and impact of the business centers, including their impact on USDA's customer service; human resources, including hiring; and overall functionality. GAO's prior work has shown that a key practice to consider during an agency's reform efforts is establishing clear outcome-oriented goals and performance measures to assess the reform's effectiveness and impact. Developing appropriate performance goals and systematically assessing the effectiveness and impact of the business center reforms could help the department determine whether the reforms are meeting the Secretary's overarching policy goals and improving the delivery of administrative services to support the department's mission and program goals.

What GAO Recommends

GAO recommends that USDA establish department-level outcome-oriented performance goals and related measures for the business centers, and use them to assess the effectiveness and impact of the business center reforms. USDA agreed with the recommendation.
Background

The Air Force has identified ABMS as its solution to support broad Department of Defense (DOD) efforts to develop Joint All-Domain Command and Control (JADC2) capabilities. These capabilities will eventually allow U.S. forces from all of the military services, as well as allies, to conduct military operations across all warfighting domains. Command and control is the collection and sharing of information to enable military commanders to make timely, strategic decisions; take tactical actions to meet mission goals; and counter threats to U.S. assets. Figure 1 shows the concept of DOD operations within a joint all-domain environment. When the Air Force began planning for ABMS in 2017, officials stated the intent was to replace and modernize the capabilities of the Airborne Warning and Control System (AWACS), which provides the warfighter with the capability to detect, identify, and track airborne threats, among other capabilities. According to officials, the Air Force currently plans to operate AWACS aircraft through 2035. In July 2018, the DOD Joint Requirements Oversight Council approved an ABMS Initial Capabilities Document that describes which capabilities would need to be developed and which associated gaps in current capabilities the Air Force would need to address. According to Air Force officials, after the Initial Capabilities Document was approved, the Air Force determined that its planned approach to ABMS was no longer compatible with the most recent National Defense Strategy, released in January 2018. The 2018 National Defense Strategy outlines DOD's strategy for maintaining the defense of the United States based on new and reemerging threats from competitors, such as Russia and China.
It also defines expectations for how DOD and its military departments should be prepared to engage those threats during future conflicts: forces would be expected to strike a diverse range of targets inside adversarial air and missile defense networks; forces would need capabilities to enhance close combat lethality; and DOD would prioritize investments that enabled ground, air, sea, and space forces to deploy, operate, and survive in all domains while under attack. Air Force officials stated that these expectations led the department to reassess requirements for ABMS and assess new options for developing more robust and survivable systems that could operate within contested environments. For example, the Air Force officially canceled a recapitalization program for the Joint Surveillance Target Attack Radar System (JSTARS)—an aircraft that provides surveillance and information on moving ground targets—in December 2018. The cancellation was linked to the 2018 National Defense Strategy, which calls for a more survivable and networked solution, among other things. A June 2018 Air Force report to Congress identified concerns regarding the survivability of the JSTARS aircraft in a contested environment and stated that the Air Force was instead planning for ABMS to eventually provide JSTARS’s capabilities. The Air Force determined that it could continue using some of its JSTARS aircraft into the 2030s. Officials stated the Air Force subsequently changed the scope and intent of ABMS to align with the 2018 National Defense Strategy and broader requirements for JADC2. According to senior Air Force officials, they concluded that, to align with the new defense strategy, ABMS needed to do far more than replace AWACS and JSTARS. They also concluded that no single platform, such as an aircraft, would be the right solution to providing command and control capabilities across multiple domains. 
In an April 2019 congressional testimony, the Air Force announced a new vision for ABMS as a multidomain command and control family of systems enabling operations in air, land, sea, space, and cyber domains. In that testimony, Air Force leadership explained the need to move away from a platform-centric approach (such as JSTARS) to a network-centric approach, one that connects every sensor to every shooter. The Air Force, however, did not formally document its decision to change the scope of ABMS. In November 2019, according to Air Force officials, ABMS was determined to be the Air Force solution for JADC2 in response to a July 2019 Joint Requirements Oversight Council memo outlining DOD requirements for command and control systems requirements across all domains. In May 2019, we reported that Air Force leadership determined that it would not designate ABMS as a major defense acquisition program because it would be a family of systems. The Air Force also determined that ABMS would be directed by a Chief Architect working across PEOs, rather than a traditional acquisition program manager. According to Air Force officials, the Chief Architect role will be instrumental in integrating the various programs and technologies into an overall system and is the first of its kind within the Air Force. Additionally, Air Force officials stated that they intend to use a flexible acquisition approach to develop ABMS, one that is outside of traditional pathways such as a major defense acquisition program or middle tier acquisition. According to the Chief Architect, this approach will allow ABMS to develop and rapidly field capabilities. Specifically, the Air Force intends to break up technology development into many short-term efforts, generally lasting 4 to 6 months each. 
The Chief Architect stated that the goal of breaking up development into smaller increments is to increase innovation by requiring multiple contractors—including those that may not usually engage with DOD—to compete for contracts more frequently. These short-term efforts will include prototyping and demonstrations to prove that the capabilities work. Those that are proven will be delivered to the warfighter. By using this approach, the Air Force intends to field capabilities sequentially and more quickly than if all were developed and delivered at one time as is typically done for traditional acquisitions. Additionally, Air Force officials indicated that this approach will not lock the Air Force into long-term development efforts with just one contractor and will allow the Air Force to more easily move on from unsuccessful development efforts.

The Air Force Has Not Established a Business Case for ABMS, Increasing Development Risks

The Air Force has not established a plan or business case for ABMS that identifies its requirements, a plan to attain mature technologies when needed, a cost estimate, and an affordability analysis. As a result of recent ABMS management and scope changes, the Air Force remains early in the planning process and has not yet determined how to meet the capabilities or identify systems that will comprise ABMS. In December 2019, Air Force officials stated an overall plan for ABMS did not exist and would be difficult for the Air Force to develop in the near term due to the unclear scope of ABMS requirements. To date, the Air Force has not identified a development schedule for ABMS, and it has not formally documented requirements. As previously stated, ABMS will be managed as a family of systems and not as a traditional acquisition program typically governed by DOD Instruction 5000.02, nor as a middle tier acquisition.
As a result, Air Force officials initially told us that they did not intend to develop most of the typical acquisition documentation, such as a cost estimate, that is generally required of major defense acquisition programs before entering the development phase. In March 2020, after we sent a copy of this report to DOD for comment, the Air Force provided us a draft tailored acquisition plan for ABMS in lieu of an acquisition strategy. Based on our initial review, this document includes some elements of a traditional acquisition strategy, such as contract and test strategies. However, this tailored acquisition plan does not include key information such as the overall planned capabilities and estimated cost and schedule for ABMS. We will continue to monitor the Air Force’s planning efforts as the program progresses. The Air Force also began preparing an analysis of alternatives in January 2019 to assess options for delivering capabilities such as surveilling moving targets and battle management command and control. The Air Force expects to complete the analysis in 2020, but Air Force officials expect it will inform only some aspects of ABMS planning. The Air Force has not defined what additional planning documentation it will develop to help it establish a business case for ABMS. For example, major defense acquisition programs are generally required to develop acquisition planning documents, such as a cost estimate. We have previously reported on the importance of establishing a solid, executable business case before committing resources to a new development effort. A business case demonstrates that (1) the warfighter’s needs are valid and that they can best be met with the chosen concept and (2) the chosen concept can be developed and produced within existing resources. 
In addition to an acquisition strategy, other basic elements of a sound acquisition business case include firm requirements, a plan for attaining mature technologies, and a reliable cost estimate and affordability analysis, further described below.

1. Firm requirements are the requisite technological, software, engineering, and production capabilities needed by the user. Acquisition leading practices state that requirements should be clearly defined, affordable, and informed. Deciding how best to address requirements involves a process of assessing trade-offs before making decisions. Unstable or ill-defined requirements can lead to cost, schedule, and performance shortfalls.

2. A plan to attain mature technologies when needed is critical in establishing that technologies can work as intended before integration into a weapon system. The principle is not to avoid technical risk but rather address risk early and resolve it ahead of the start of product development. Identifying technologies and defining a plan to ensure mature technologies can be attained when needed help guide development activities and enable organizations to track development and inform decisions on next steps.

3. A reliable cost estimate and affordability analysis are critical to the successful acquisition of weapon systems. GAO's Cost Estimating and Assessment Guide states that a reliable cost estimate is comprehensive, well-documented, accurate, and credible. Leading practices have shown that realistic cost estimates allow program management to obtain the knowledge needed to make investment decisions and match requirements with resources. A cost estimate is the basis of an affordability analysis, which validates whether a program's budget is adequate for the planned acquisition strategy.

The process of developing and documenting a business case builds knowledge needed to match customer needs with available resources, including technologies, timing, and funding.
The fact that the Air Force does not plan to establish such a business case for ABMS increases the risk of cost and schedule overruns and may impact Congress’s ability to exercise its oversight responsibilities. The status of key elements for the ABMS business case follows: Status of requirements. The Air Force has not established well-defined, firm requirements for ABMS, but Congress required that the Air Force start defining requirements for the networked data architecture necessary for ABMS to provide multidomain command and control and battle management capabilities by June 2020. The Air Force has not defined the changes in ABMS’s requirements, such as the need to provide multidomain command and control capabilities in support of joint operations. As a result, the only existing documentation of ABMS’s requirements resides in the ABMS Initial Capabilities Document from 2018, which generally focuses on the capabilities needed to replace AWACS. That document does not address the expanded JADC2 requirements and capabilities ABMS is expected to eventually fulfill. Air Force officials stated that ABMS requirements and the family of systems, or programs, that compose ABMS will be defined over time as they gain more knowledge. Given the lack of specificity regarding ABMS, Congress has kept a close eye on the effort and has implemented several reporting requirements. Since 2018, the Air Force has been required to provide quarterly updates to the defense committees on the status of ABMS development and associated technologies. In addition, the National Defense Authorization Act for Fiscal Year 2020 required the Air Force to provide ABMS-related documentation that describes certain requirements, a development schedule, and the current programs that will support ABMS, among other things, by June 2020. While the Air Force has not established firm requirements for ABMS to date, it has informally identified some broad requirements. 
For example, the Air Force anticipates that ABMS will provide interoperability between systems, present real-time information to military decision makers, and fully utilize the range of sensor data and capabilities across DOD to create a common battlespace operational picture. In addition, Air Force officials stated that ABMS would be developed as a government-owned open architecture family of systems, which would allow any system to be integrated into ABMS. The Air Force has identified seven different development categories that it plans to simultaneously address to meet its broad ABMS requirements. According to the Air Force, the categories are not intended to be comprehensive and may change as development progresses. These development categories include apps and secure processing, among others. Although the Air Force has not defined these seven development categories, it has identified 28 development areas that fit within them. For example, one of these development areas, which falls under the "secure processing" category, is called cloudONE. It is intended to store and process data using a cloud infrastructure for multiple levels of classified and unclassified data. These development areas will eventually compose the architecture and technologies that make up ABMS. In January 2020, the Air Force provided us with a draft version of high-level descriptions of the 28 development areas; however, the document did not fully define the requirements or capabilities for the development areas nor identify which organizations would lead each effort. For example, the cloudONE description does not indicate specific technical requirements that must be met, such as amount of storage, the number of users, or data transmission rate. Although ABMS requirements are not fully defined, the Air Force awarded several short-term development contracts for ABMS.
According to Air Force officials, these efforts are intended to show that its nontraditional development approach is feasible rather than to develop specific capabilities that will be integrated into ABMS. For example, the Air Force awarded several development contracts totaling approximately $8 million for gatewayONE, one of the 28 development areas that is intended to enable communication between platforms. As part of this effort, the Air Force conducted a demonstration in a joint military exercise in December 2019. While the exercise demonstrated some data transfer capability, it did not directly address the intent of gatewayONE to enable communication between multiple platforms using government-owned systems. According to Air Force officials, ongoing and future efforts will allow the Air Force to better define ABMS requirements and determine what existing and emerging technologies can fulfill those capabilities. The Air Force has not determined what development efforts will follow these early demonstration efforts, in part because it has not fully defined its requirements. Status of plan to attain mature technologies when needed. The Air Force has started development activities without first identifying what technologies are needed for the 28 development areas for ABMS. According to Air Force officials, they do not plan to identify all technologies needed while pursuing development activities. Therefore, the Air Force cannot assess whether technologies required for ABMS are mature or determine the necessary steps to ensure those technologies are mature when needed. Air Force officials stated that as ABMS development progresses, they plan to select commercially available or other mature technologies for integration. However, without first identifying the technologies it needs, the Air Force cannot develop a plan, or technology roadmap, with detailed actions to ensure those technologies will be mature when needed. 
For example, the Air Force plans for ABMS to assume the capabilities of AWACS and JSTARS aircraft, which are set to retire in the 2030s. However, the Air Force has not defined the technologies ABMS will need or established a roadmap to ensure those technologies are mature before the retirement of legacy aircraft. This increases the risk that the requisite technologies will not be mature before the Air Force must integrate them into ABMS, which increases the likelihood that those capabilities will not be developed when needed. The Chief Architect and other Air Force senior leaders stated that the ABMS development effort is an ambitious undertaking for the Air Force. Our prior work has found that some DOD programs related to ABMS development have posed challenges in the past, in part because technologies were not sufficiently mature when needed, as shown in table 1. Additionally, the Office of Cost Assessment and Program Evaluation assessed previous DOD programs that were similar to ABMS development and noted that the scope of ABMS will be larger than any of those individual programs. Officials from that office concluded that ABMS is a high-risk effort and the Air Force has not provided sufficient programmatic detail. As a result, they could not conclude that the Air Force would be able to overcome the cost, schedule, and performance challenges of these past programs. Air Force officials stated that the Air Force’s approach to ABMS development will avoid these past challenges because only mature technologies will be integrated into ABMS and the Air Force is expected to frequently evaluate development progress. However, since the Air Force has not identified what the technology needs for ABMS are, it cannot yet determine if those technologies are mature or will be mature when needed. 
We have previously found that starting development without first identifying and assessing the maturity of technologies increases the likelihood that those technologies are not mature when needed, which often results in cost overruns and schedule delays. Status of cost estimate and affordability. The Air Force has not developed a cost estimate for ABMS or an affordability analysis. According to the GAO Cost Estimating and Assessment Guide, even in cases where limited information is available, cost estimates should still be developed to inform budget requests. To date, the Air Force has requested nearly $500 million for ABMS efforts through fiscal year 2021. The Air Force, however, currently has no plans to develop a life-cycle cost estimate, which would provide a comprehensive account of ABMS costs, or an independent cost estimate, which would confirm the credibility of estimated costs. Officials stated that the Air Force has not developed a cost estimate because the capabilities, technologies, and systems that will compose ABMS are still to be determined and will change over time. Officials stated they intend to develop cost estimates for each of the 28 development areas in the future but did not identify a timeline. The GAO Cost Estimating and Assessment Guide acknowledges that cost estimating is more difficult when requirements—and the technologies and capabilities to meet them—are changing and the final product design is not known while the system is being built. In these cases, leading practices call for developing cost estimates that should be updated more frequently to reflect changes in requirements. Without a realistic and current cost estimate for ABMS efforts, the Air Force will be unable to effectively allocate resources and conduct informed long-range investment planning. The Air Force has also not determined if it can afford ABMS. 
Affordability is the degree to which the funding requirements for an acquisition effort fit within the service's overall portfolio plan. Whether an acquisition effort is affordable depends a great deal on the quality of its cost estimate and other planned outlays. To conduct an affordability analysis, the budget requirements for the entire portfolio are identified for future years. This can help determine whether the funding needs are relatively stable or if the portfolio will require a funding increase in the future. The GAO Cost Estimating and Assessment Guide states that, as part of the cost estimating process, management should review and approve an affordability analysis to identify any funding shortfalls. Air Force officials stated that the Air Force does not plan to conduct a comprehensive affordability analysis for ABMS because it is managing it as a family of systems. They stated that any costs to the Air Force will be determined in the future by the various organizations that manage the systems that will eventually support ABMS. However, without an affordability analysis, the Air Force will be unable to determine whether it can commit sufficient resources for ABMS in future years.

Air Force Has Established an ABMS Management Structure, but Decision-Making Authorities Are Unclear

While the Air Force has taken some steps to establish an ABMS management structure, the authorities of Air Force offices to plan and execute ABMS efforts are unclear. Internal controls, which provide standards on effective management of programs, state that management should establish the organizational structure and authority necessary to enable the entity to plan, execute, control, and assess the organization in achieving its objectives. The Air Force, however, has not fully defined or communicated ABMS decision-making authorities to Air Force offices, and documentation to date regarding ABMS management has been limited.
Several Air Force offices are involved in ABMS management, as shown in figure 2.

Air Force Acquisition. This office is headed by the Assistant Secretary of the Air Force for Acquisition, Technology and Logistics, who is generally responsible for all acquisition functions within the Air Force. In an October 2018 memorandum, Air Force Acquisition established the position of the Chief Architect and stated that any unresolved ABMS issues between the Chief Architect and PEOs are to be brought to Air Force Acquisition for resolution.

Chief Architect. The Air Force established this position in October 2018 to execute the overarching vision and strategy for ABMS. According to the Air Force, the Chief Architect will determine the overall design of ABMS, coordinate with the service-level commands and the acquisition programs involved to ensure their efforts are aligned with the overall design and development of ABMS, and identify the enabling technologies that will compose the ABMS family of systems. An October 2018 memorandum stated that individual PEOs and program managers that oversee programs supporting ABMS will retain all authority and responsibility for executing their respective programs. In November 2019, Air Force Acquisition issued additional ABMS management guidance that stated that the Chief Architect would select and fund ABMS development projects for PEOs to execute. However, the guidance did not address whether the Chief Architect has authority to direct the execution of efforts initiated and originally funded by the PEOs, which may support ABMS. Specifically, there is no documentation to clarify whether the Chief Architect would have the authority to realign PEO priorities or funding for ABMS projects. For example, the PEO for Space is currently executing a data integration project, which aligns with the cloudONE development area.
Although some ABMS funds have been obligated for this project, there is no documentation to support that the Chief Architect will be able to direct the PEO to change the project objectives or timeline to align with ABMS requirements once they are defined.

Air Force Warfighting Integration Capability (AFWIC). In October 2017, the Air Force established AFWIC. According to Air Force officials, AFWIC will ensure forces are operationally ready to perform JADC2 missions using ABMS technologies. According to an AFWIC senior official, in April 2019 AFWIC began leading multidomain command and control efforts for the Air Force. An October 2018 memorandum directed the Chief Architect to coordinate with AFWIC regarding the development of ABMS. Other documentation on ABMS execution indicates that AFWIC will also coordinate with major commands on Air Force doctrine and operations in support of ABMS. However, the documentation did not further define this coordination or indicate whether AFWIC would have any authority in directing ABMS activities.

Chief Architect Integration Office. In December 2019, the Air Force established the Chief Architect Integration Office at Wright-Patterson Air Force Base to coordinate and integrate ABMS development efforts across PEOs and other organizations. Air Force officials stated that this office is in the process of being staffed and the roles and responsibilities still need to be formalized. However, as currently envisioned, this office would lead technology development risk reduction efforts by working with the PEOs and other organizations, such as federally funded research and development centers, to conduct ABMS demonstrations and prototypes. Air Force officials told us the Chief Architect Integration Office is expected to resolve issues across Air Force organizations, such as sharing of resources and personnel.
An Air Force Life Cycle Management Center-led task force is currently developing an overall strategy for the office, to include resource and organizational requirements. Air Force officials stated that a proposed strategy will be completed in March 2020. Until the Chief Architect Integration Office has been fully established, it is unclear whether the office will have the required authorities to execute the mission of integrating ABMS development efforts across the Air Force. Air Force officials stated that the decision-making authorities across these offices will be developed over time. According to officials, details on these authorities have not been developed or communicated to the offices supporting ABMS, and the Air Force has not established a timeline for doing so. The Air Force expects that multiple organizations within the Air Force will be responsible for executing ABMS development efforts. Internal controls, which provide standards for effective management of programs, state that organizational structure and authority are necessary to plan, execute, and assess progress. The absence of fully defined and documented decision-making authorities, communicated to all those involved, increases the risk that the Air Force will be unable to successfully plan, execute, and assess ABMS development efforts.

Conclusions

The Air Force started ABMS development activities without a business case that defines ABMS requirements, a plan to ensure technologies are mature when needed, a cost estimate, and an affordability analysis. Developing these key elements of a business case helps to build a solid foundation for any successful technology and product development effort, even one using a nontraditional acquisition approach. Congress has already required the Air Force to define and report on certain ABMS requirements, among other aspects of ABMS planning, by June 2020.
However, the Air Force does not intend to develop the other elements of a business case, even though it is requesting over $300 million for ABMS development activities in fiscal year 2021. Given the criticality of the battle management command and control mission and the planned retirement of legacy programs, the lack of an ABMS business case introduces uncertainty regarding whether the needed capabilities will be developed within required time frames. For example, without a plan to mature technologies needed to field ABMS capabilities, the Air Force cannot be certain those technologies will be ready when needed. While it may be difficult for the Air Force to formulate a complete ABMS business case at this time, due to the recent changes in ABMS's scope, the Air Force is not precluded from beginning the process of defining and formalizing a business case. As ABMS continues to evolve, so too can the Air Force's business case. For example, the Air Force does not yet know the total life cycle costs of ABMS, but it could provide Congress with a cost estimate based on its knowledge today and update the cost estimate over time. This would allow the Air Force to assess whether ABMS is affordable. Furthermore, the Air Force is already required to provide quarterly briefs to congressional defense committees on the status of ABMS, which affords the Air Force the opportunity to present Congress with information on its ABMS business case and explain any changes over time. Specifically, including updates on the scope of the Air Force's plans to ensure ABMS will have mature technologies when needed, an overall cost estimate, and an affordability assessment would provide important information to Congress. Finally, the Air Force has started to execute ABMS development efforts without clearly defining decision-making authorities and communicating them to the offices supporting those efforts.
The absence of these defined authorities may hinder management’s ability to execute and assess ABMS development across multiple organizations within the Air Force. Recommendations for Executive Action We are making the following four recommendations to the Secretary of the Air Force to direct the Assistant Secretary of the Air Force for Acquisition, Technology and Logistics: The Assistant Secretary of the Air Force for Acquisition, Technology and Logistics should direct the Chief Architect to develop a plan to attain mature technologies when needed for each ABMS development area, which includes an initial list of technologies and an assessment of their maturity that is updated to reflect changes, and update Congress quarterly. (Recommendation 1) The Assistant Secretary of the Air Force for Acquisition, Technology and Logistics should direct the Chief Architect to prepare a cost estimate that is developed in accordance with cost estimating leading practices, to include regularly updating the estimate to reflect ABMS changes and actual costs, and update Congress quarterly. (Recommendation 2) The Assistant Secretary of the Air Force for Acquisition, Technology and Logistics should direct the Chief Architect to prepare an affordability analysis that should be regularly updated, and update Congress quarterly. (Recommendation 3) The Assistant Secretary of the Air Force for Acquisition, Technology and Logistics should formalize and document acquisition authority and decision-making responsibilities of the Air Force offices involved in the planning and execution of ABMS, to include the Chief Architect. This document should be included as part of the submission to Congress in June 2020 and communicated to the Air Force offices that support ABMS. (Recommendation 4) Agency Comments We provided a draft of this product to the Department of Defense for comment. In its comments, reproduced in appendix I, the Department of Defense concurred with our recommendations. 
We will continue to monitor the Air Force's actions to respond to these recommendations. We are sending copies of this report to the appropriate congressional committees. We are also sending a copy to the Secretary of Defense, the Secretary of the Air Force, and other interested parties. In addition, this report is available at no charge on GAO's website at http://www.gao.gov. Should you or your staff have questions, please contact me at (202) 512-4841 or MakM@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II.
Appendix I: Department of Defense Comments
Appendix II: GAO Contact and Staff Acknowledgments
GAO Contact:
Staff Acknowledgments: In addition to the contact above, the following staff members made key contributions to this report: Justin Jaynes, Assistant Director; Jessica Karnis, Analyst-in-Charge; and Lauren Wright. Other contributions were made by Brian Bothwell, Rose Brister, Brian Fersch, Miranda Riemer, Megan Setser, Hai Tran, and Robin Wilson.
Why GAO Did This Study The Air Force's ABMS is a family of systems intended to replace the command and control capabilities of aging legacy programs and develop a network of intelligence, surveillance, and reconnaissance sensors. Air Force officials stated that ABMS-related efforts have received $172 million in funding through fiscal year 2020. The Air Force is not designating ABMS as a major defense acquisition program or a middle tier acquisition program. Congress included a provision in statute for GAO to review the status of ABMS. This report examines the extent to which the Air Force has (1) established a plan for ABMS development and (2) defined management and decision-making authorities for ABMS efforts. To conduct this assessment, GAO reviewed ABMS program documentation and interviewed Air Force officials. What GAO Found The Air Force's Advanced Battle Management System (ABMS) is intended to establish a network to connect sensors on aircraft, drones, ships, and other weapon systems to provide a real-time operational picture on threats across all domains, as depicted below. According to Air Force officials, the department will take a nontraditional approach to develop ABMS through short-term efforts that will enable it to rapidly field capabilities. As a result of this approach, ABMS requirements will change over time as development progresses. The Air Force started ABMS development without key elements of a business case, including: firm requirements to inform the technological, software, engineering, and production capabilities needed; a plan to attain mature technologies when needed to track development and ensure that technologies work as intended; a cost estimate to inform budget requests and determine whether development efforts are cost effective; and an affordability analysis to ensure sufficient funding is available.
GAO's previous work has shown that weapon systems without a sound business case are at greater risk for schedule delays, cost growth, and integration issues. Congress has kept a close eye on the effort and required quarterly briefings on its status, as well as a list of certain ABMS requirements by June 2020. However, given the lack of specificity that remains regarding the Air Force's ABMS plans, Congress would benefit from future briefings that address the missing business case elements. While the Air Force has taken some steps to establish an ABMS management structure, the authorities of Air Force offices to plan and execute ABMS efforts are not fully defined. Unless addressed, the unclear decision-making authorities will hinder the Air Force's ability to effectively execute and assess ABMS development across multiple organizations. What GAO Recommends GAO is making four recommendations, including that the Air Force should develop and brief the Congress quarterly on a plan to mature technologies, a cost estimate, and an affordability analysis. In addition, the Air Force should formalize the ABMS management structure and decision-making authorities. The Air Force concurred with the four recommendations. GAO will continue to monitor the Air Force's actions to address these recommendations.
Background The Rulemaking Process under the APA Under the APA, agencies engage in three basic phases of the rulemaking process: they initiate rulemaking actions, develop proposed rulemaking actions, and develop final rulemaking actions. Built into agencies’ rulemaking processes are opportunities for internal and external deliberations, reviews, and public comments. Figure 1 provides an overview of the rulemaking process. The public comment portion of the rulemaking process generally comprises three phases: 1. Comment Intake: During this phase, agencies administratively process comments. This may include identifying duplicate comments (those with identical or near-identical comment text, but unique identity information), posting comments to the agency’s public website, and distributing comments to agency subject-matter experts within responsible program offices for analysis. 2. Comment Analysis: During this phase, subject-matter experts analyze and consider submitted comments. This may include the use of categorization tools within FDMS or outside software systems. 3. Comment Response: During this phase, agencies prepare publicly available responses to the comments in accordance with any applicable requirements. Agencies are required to provide some response to the comments in the final rule, but in some cases, an agency may also prepare a separate report to respond to the comments. Legal Requirements for Public Comments As illustrated in figure 1 above, the public has the opportunity to provide input during the development of agencies’ rules. Among other things, the APA generally requires agencies to publish an NPRM in the Federal Register; allow any interested party an opportunity to comment on the rulemaking process by providing “written data, views, or arguments”; issue a final rule accompanied by a statement of its basis and purpose; and publish the final rule at least 30 days before it becomes effective. 
The APA requires agencies to allow any interested party to comment on NPRMs. The APA does not require the disclosure of identifying information from an interested party that submits a comment. Agencies therefore have no obligation under the APA to verify the identity of such parties during the rulemaking process. Instead, the APA and courts require agencies to consider relevant and substantive comments, and agencies must explain their general response to them in a concise overall statement of basis and purpose, which in practice forms part of the preamble of the final rule. Courts have explained that significant comments are comments that raise relevant points and, if true or if adopted, would require a change in the proposed rule. However, courts have held that agencies are not required to respond to every comment individually. Agencies routinely offer a single response to multiple identical or similar comments. As explained by Regulations.gov's "Tips for Submitting Effective Comments," "the comment process is not a vote," and "agencies make determinations for a proposed action based on sound reasoning and scientific evidence rather than a majority of votes. A single, well-supported comment may carry more weight than a thousand form letters." The APA includes provisions on the scope of judicial review that establish the bases under which a court shall find an agency's action unlawful. Among these APA bases are when the court finds that agency action, findings, and conclusions were "arbitrary, capricious, an abuse of discretion, or otherwise not in accordance with law" and "without observance of procedure required by law." How an agency managed and considered public comments may be relevant during judicial review. For example, one basis for a court's reversal of an agency action has been that, upon review of the statement of basis and purposes, the court concludes the agency failed to consider or respond to relevant and significant comments.
Conversely, courts have upheld agency rules when the courts have found the statement of basis and purposes demonstrates the agency considered the commenter's arguments. The E-Government Act of 2002 The E-Government Act of 2002 requires agencies, to the extent practical, to accept comments "by electronic means" and to make available online the public comments and other materials included in the official rulemaking docket. Executive Order 13563 further states that regulations should be based, to the extent feasible, on the open exchange of information and perspectives. To promote this open exchange, to the extent feasible and permitted by law, most agencies are required to provide the public with a meaningful opportunity to participate in the regulatory process through the internet, to include timely online access to the rulemaking docket in an open format that can be easily searched and downloaded. Most agencies meet these responsibilities through Regulations.gov, a rulemaking website where users can find rulemaking materials and submit their comments, but agencies are not required to use that platform. In October 2002, the eRulemaking Program was established as a cross-agency E-Government initiative and is currently based within EPA. The eRulemaking PMO leads the eRulemaking Program and is responsible for developing and implementing Regulations.gov, the public-facing comment website, and FDMS, which is the agency-facing side of the comment system used by participating agencies. As of March 2018, Regulations.gov identified 180 participating and 128 nonparticipating agencies. These agencies may be components of larger departmental agencies. Some nonparticipating agencies, including FCC and SEC, have their own agency-specific websites for receiving public comments.
The comment systems within the scope of this report are as follows: FDMS and Regulations.gov: FDMS is a federal government-wide document management system, structured by dockets (or file folders), that offers an adaptable solution to service a wide range of regulatory activities routinely performed by federal agencies. The public-facing website of FDMS is Regulations.gov, which is an interactive website that allows the public to make comments on regulatory documents, review comments submitted by others, and access federal regulatory information. Regulations.gov allows commenters to submit comments to rulemakings by entering information directly in an electronic form on the Regulations.gov website. This form also allows commenters to attach files as part of their comment submission, and can be customized by each participating agency. Appendix II provides an example of one comment form from Regulations.gov. Additionally, all participating agencies allow comments to be submitted by mail or hand delivery. At their discretion, some participating agencies also allow comments to be submitted via email. See table 1. FCC's Electronic Comment Filing System (ECFS): ECFS is a web-based application that allows anyone with access to the internet to submit comments to FCC rulemaking proceedings. ECFS allows commenters to submit comments to rulemakings through two main avenues: brief text comments submitted as Express filings, and long-form comments submitted as Standard filings. Both types of filings can be submitted through an ECFS comment form, which requires commenters to enter information directly into an electronic form on the ECFS website. See appendix III for examples of the comment forms used by ECFS. Additionally, interested parties with the appropriate technical capabilities can submit either type of filing directly to ECFS via a direct application programming interface (API) or through a public API that is registered with the website Data.gov.
Filing comments through an API allows interested parties the ability to file a large number of comments without having to submit multiple individual comment forms. Finally, to accommodate a large volume of comment submissions for the 2015 Open Internet rulemaking, FCC allowed interested parties to submit Express comment filings in bulk through formatted CSV files that were submitted via a dedicated email address and then uploaded into ECFS. Similarly, for the 2017 Restoring Internet Freedom rulemaking, FCC allowed commenters to submit Express comment filings in bulk through a dedicated file- sharing website, and the comments were then uploaded into ECFS. With the exception of these two rulemakings, FCC does not allow comments to be submitted electronically outside of ECFS. Figure 2 shows how ECFS facilitates public commenting by using the processes discussed above. SEC’s Comment Letter Log: When SEC requests public comments on SEC rule proposals, the public can submit comments to rulemakings through an online form, which requires commenters to enter information in an electronic form on SEC’s website. This form also allows commenters to attach files as part of their submission. When commenters submit a comment, it is sent to SEC staff as an email. SEC also allows comments to be submitted via email and mail. After review, staff upload the comment and any associated data into the Comment Letter Log, which is the internal database that SEC staff use to manage the public comment process, and post the comment to the public website. See appendix IV for an example of a comment form on SEC’s website. 
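The bulk CSV intake FCC used for the 2015 and 2017 rulemakings described above can be illustrated with a short sketch. The column names and the required-field check below are assumptions for illustration only; the report does not specify the actual file format FCC prescribed.

```python
import csv
import io

# Hypothetical column layout for a bulk Express-style filing CSV.
# The fields mirror what FCC's comment form requires (name, postal
# address, proceeding number), but the real column names are assumed.
REQUIRED_FIELDS = {"name", "address", "proceeding", "comment_text"}

def load_bulk_filings(csv_text):
    """Parse a bulk CSV of Express-style filings, keeping only rows
    that supply a non-empty value for every required field."""
    reader = csv.DictReader(io.StringIO(csv_text))
    filings = []
    for row in reader:
        if all((row.get(field) or "").strip() for field in REQUIRED_FIELDS):
            filings.append(row)
    return filings

sample = """name,address,proceeding,comment_text
Jane Doe,123 Main St,17-108,I support the proposal.
,456 Oak Ave,17-108,This row lacks a name and would be rejected.
"""
accepted = load_bulk_filings(sample)
```

A real intake pipeline would also handle encoding problems, malformed rows, and attachment references, but the core step is the same: each CSV row becomes one comment record in the filing system.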
Selected Agencies Collect Some Information from Commenters and Accept Anonymous Comments through Regulations.gov and Agency-Specific Websites Selected Agencies Collect Some Identity Information through Comment Forms Consistent with the discretion afforded by the APA, Regulations.gov and agency-specific comment websites use required and optional fields on comment forms to collect some identity information from commenters. In addition to the text of the comment, each participating agency may choose to collect identity information from the Regulations.gov comment form by requiring commenters to fill in other fields, such as name, address, and email address before they are able to submit a comment. Participating agencies may also choose to collect additional identity information through optional fields. For example, while EPA does not make any fields associated with identity information available to commenters, CFPB makes all fields available and requires that commenters enter something into the first name, last name, and organization name fields before a comment can be submitted. Table 2 shows the fields on Regulations.gov in which each of the participating agencies we analyzed require commenters to enter information and the optional fields available for commenters to voluntarily enter information. FCC requires that all commenters complete the following fields on both the Standard and Express comment forms in ECFS: (1) name, (2) postal address, and (3) the docket proceeding number to which they are submitting a comment. The ECFS comment form also allows commenters to voluntarily provide additional information in optional fields, such as email address. Similarly, SEC’s comment forms require commenters to provide (1) first and last name, (2) email address, and (3) the comment content, before a comment can be successfully submitted. The comment form also allows commenters to voluntarily provide other information in optional fields, such as their city and state. 
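The per-agency required fields summarized above can be thought of as a simple validation rule: a comment is accepted whenever every required field contains some text, with no check of whether that text is accurate. The sketch below is illustrative only; field names are assumptions, and the agency entries reflect the requirements described in this report, not any system's actual configuration.

```python
# Illustrative required-field configuration, drawn from the report's
# descriptions: EPA requires no identity fields; CFPB requires first
# name, last name, and organization; FCC requires name, postal
# address, and proceeding number; SEC requires name and email.
REQUIRED_BY_AGENCY = {
    "EPA": set(),
    "CFPB": {"first_name", "last_name", "organization"},
    "FCC": {"name", "postal_address", "proceeding"},
    "SEC": {"first_name", "last_name", "email"},
}

def is_submittable(agency, submission):
    """A form accepts a comment when every required field is non-empty;
    the content of those fields is not verified in any way."""
    required = REQUIRED_BY_AGENCY.get(agency, set())
    return all(submission.get(field, "").strip() for field in required)

# Even a clearly fabricated identity passes the check, which is why
# anonymous comments are accepted in practice.
anonymous = {"first_name": "Anonymous", "last_name": "Anonymous", "email": "a"}
```

The design point is that required fields enforce only presence, not validity, consistent with the APA imposing no identity-verification obligation.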
Agencies Accept Anonymous Comments Regardless of the fields required by the comment form, the selected agencies all accept anonymous comments in practice. Specifically, in the comment forms on Regulations.gov, ECFS, and SEC’s website, a commenter can submit a comment under the name “Anonymous Anonymous,” enter a single letter in each required field, or provide a fabricated address. In each of these scenarios, as long as a character or characters are entered into the required fields, the comment will be accepted. Further, because the APA does not require agencies to authenticate submitted identity information, neither Regulations.gov nor the agency-specific comment websites contain mechanisms to check the validity of identity information that commenters submit through comment forms. As part of the Regulations.gov modernization effort, the Office of Information and Regulatory Affairs (within the Office of Management and Budget) and the Department of Justice proposed language for a disclosure statement on every comment form that would require the commenter to acknowledge that they are not using, without lawful authority, a means of identification of another person with any comment they are submitting. Commenters would be required to acknowledge their agreement with the statement before their comment could be submitted. According to PMO officials, even with this disclosure statement, anonymous comments would still be permitted and accepted by Regulations.gov. This disclosure statement was proposed in response to allegations of comments being submitted to rulemakings on behalf of individuals without their permission. As of April 2019, this proposed language has not yet been approved by the Executive Steering Committee for Regulations.gov. However, the proposed disclosure statement would be provided on the Regulations.gov comment form, and it is unclear whether similar information would be made available to commenters submitting comments via email or mail. 
In contrast to the other selected agencies, according to FCC officials, FCC rules require the submission of the commenter’s name and mailing address, or the name and mailing address of an attorney of record. However, in March 2002, FCC initiated a rulemaking related to the submission of truthful statements to the commission. Among other issues, FCC sought comment on whether rulemaking proceedings should be subject to an already existing rule that prohibited the submission of written misrepresentations or material omissions from entities that are subject to FCC regulation. In its final rule, issued in March 2003, FCC decided to continue to exempt comments to rulemakings from this rule because of the potential that such a requirement would hinder full and robust public participation in such policy-making proceedings by encouraging disputes over the truthfulness of the parties’ statements. According to FCC officials, to comply with APA requirements, the commission tries to minimize barriers that could prevent or discourage commenters from participating in the commenting process, and in practice accepts anonymous comments. See figure 3 for an example of an anonymous comment in ECFS. Additionally, in our survey of program offices with rulemaking responsibilities at selected agencies, 39 of 52 offices reported that they received anonymous comments on some rulemakings for which their office has been responsible since 2013. The remaining 13 offices responded that they did not receive or were unaware of receiving anonymous comments, though most of these offices do not have high levels of rulemaking activity or receive a high volume of comments. Regulations.gov and Agency-Specific Comment Websites Collect Some Information about Public Users’ Interaction Regulations.gov and agency-specific comment websites also collect some information about public users’ interaction with their websites through application event logs and proxy server logs. 
This information, which can include a public user's Internet Protocol (IP) address, browser type and operating system, and the time and date of webpage visits, is collected separately from the comment submission process as part of routine information technology management of system security and performance. The APA does not require agencies to collect or verify this type of information as part of the rulemaking process. Regulations.gov collects some information from commenters accessing the website, but it is never linked to any specific comment. In Regulations.gov, proxy server logs capture information such as the country from which a user accesses the site, the user's browser type and operating system, and the time and date of each page visit on the website. According to PMO officials, this information is provided to the eRulemaking PMO in summary statistics that are used to assess what information is of least interest to Regulations.gov visitors, determine technical design specifications of the website, and identify system performance problems. This information is collected about all public users visiting Regulations.gov, regardless of whether they submit a comment. Further, because the PMO receives this information in the form of summary statistics, it cannot be connected to any specific comment. The eRulemaking PMO also monitors IP addresses that interact with Regulations.gov via security firewalls, but, according to PMO officials, the web application firewall (WAF) logs (a type of application event log) have never been connected to specific comments, though in some cases the URL the blocked user was attempting to access may be captured in the log. FCC officials stated that the current ECFS application architecture does not enable FCC to identify the source IP address of the submitter of a specific comment filed in ECFS.
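The proxy server logs described above yield per-request records such as IP address, timestamp, and requested URL. As a rough illustration, the sketch below parses one entry, assuming the widely used "common log format"; the actual log formats of Regulations.gov, ECFS, and SEC.gov are not described in the report.

```python
import re
from datetime import datetime

# Assumed: entries resemble the Apache common log format, e.g.
#   203.0.113.7 - - [08/May/2017:14:03:21 +0000] "POST /path HTTP/1.1" 200 512
LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] "(?P<method>\S+) (?P<url>\S+)'
)

def parse_entry(line):
    """Extract the IP address, timestamp, and requested URL from one
    common-log-format line; return None if the line does not match."""
    m = LOG_PATTERN.match(line)
    if not m:
        return None
    return {
        "ip": m.group("ip"),
        "time": datetime.strptime(m.group("time"), "%d/%b/%Y:%H:%M:%S %z"),
        "url": m.group("url"),
    }

entry = parse_entry(
    '203.0.113.7 - - [08/May/2017:14:03:21 +0000] "POST /ecfs/filings HTTP/1.1" 200 512'
)
```

Records like this are what an agency would have to correlate, by date and time stamps, against separately stored comment data to connect an IP address to a specific comment, which is the matching step the agencies describe as difficult or not supported by their architectures.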
FCC collects information about public users’ interactions with ECFS through its web-based application proxy server logs, including the IP address from which a user accesses the site and the date and time of the user’s interaction. However, ECFS does not obtain or store IP addresses as part of the comment data it collects when a public user ultimately submits a comment. Within the current architecture, ECFS would require officials to match date and time stamps from the proxy server log to the ECFS comment data to connect a given IP address to a specific comment. SEC officials stated it would be difficult to match the large number of daily hits to their general website to the much smaller number of comments submitted to their rulemaking proceedings. SEC collects information about public users’ interactions with the SEC.gov website through proxy server logs, including the IP address from which a user accesses the website and the user’s date, time, and URL requests. However, according to officials, a public user never directly interacts with the Comment Letter Log, and none of the information from the proxy log is included as part of the data it collects in association with comment submissions. Despite this difficulty, SEC officials stated that linking the proxy log data from the general SEC.gov website to a specific comment in the Comment Letter Log could be done on a case-by-case basis. Most Selected Agencies Have Some Internal Guidance Related to Commenter Identity Seven of 10 selected agencies have documented some internal guidance associated with the identity of commenters during the three phases of the public comment process, but the substance of this guidance varies, reflecting the differences among the agencies and their respective program offices. For example, as shown in table 3, BLM has no internal guidance related to identity information, while CFPB has internal guidance related to the comment intake and response to comments phases. 
For selected agencies that have guidance associated with the identity of commenters, it most frequently relates to the comment intake or response to comment phases of the public comment process. The guidance for these phases addresses activities such as managing duplicate comments (those with identical or near-identical comment text but varied identity information) or referring to commenters in a final rule. In addition, some agencies have guidance related to the use of identity information during comment analysis. Agencies are not required by the APA to develop internal guidance associated with the public comment process generally, or identity information specifically. For the three selected agencies that did not have identity-related guidance for the public comment process, cognizant officials told us such guidance has not been developed because identity information is not used as part of their rulemaking process. For example, BLM officials stated that the only instance in which identity information would be considered is when threatening comments are referred to law-enforcement agencies. Identity-Related Guidance for Comment Intake According to our analysis of the internal guidance the selected agencies provided, five of the 10 agencies have documented identity-related guidance associated with the comment intake phase. (See table 4.) Identity-related guidance for the comment intake phase addresses posting comments and their associated identity information to public comment websites. The guidance generally falls into two categories: (1) the treatment of duplicate comments (those comments with identical or near-identical content, but unique identity information) and (2) the management of comments reported to have been submitted using false identity information. Four of the 10 selected agencies have documented guidance on defining and posting duplicate comments, which may also be referred to as mass mail campaigns. 
However, in accordance with the discretion afforded them under the APA, agency definitions of duplicate comments and recommendations on how to manage them during comment intake vary. Specifically, for EBSA and WHD—the selected agencies within the Department of Labor (DOL)—one comment letter with multiple signers is considered one comment, while the same comment submitted by multiple signers as separate letters is counted separately. In both cases, however, each individual signer may provide unique identity information. In contrast, EPA guidance states that mass mail submissions often include attachments containing either bundled duplicate messages or a single comment with multiple signatures. For EPA, each signature is counted as a duplicate comment submission. As of February 2019, CFPB’s draft guidance does not explicitly define duplicate comments, but it does note that “duplicate identical submissions” are not subject to the agency’s policy of posting all comments. Instead, the official responsible for managing the docket during comment intake may remove duplicate comments from posting or decide not to post them. According to CFPB officials, this policy is only applicable to comments that contain entirely identical comment content and identity information, and does not apply to mass mailing campaigns. Similarly, when DOL agencies receive duplicate comments as part of mass mail campaigns, the agency can choose to post a representative sample of the duplicate comment to Regulations.gov along with the tally of the duplicate or near-duplicate submissions, or post all comments as submitted. EPA guidance states that duplicate comments submitted as part of mass mailings are to be posted as a single primary document in Regulations.gov with a tally of the total number of duplicate comments received from that campaign. However, as discussed later in this report, EPA may post all duplicate comments it receives, depending on the format in which they are submitted. 
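Identifying duplicate comments of the kind these policies address, identical or near-identical text submitted with unique identity information, can be sketched as a grouping step over normalized comment text. The field names and normalization rule below are illustrative assumptions, not any agency's documented procedure.

```python
import re
from collections import defaultdict

def normalize(text):
    """Collapse whitespace and case so near-identical copies of a
    form letter compare equal."""
    return re.sub(r"\s+", " ", text).strip().lower()

def group_duplicates(comments):
    """Group comments by normalized text; each group collects the
    distinct identity information attached to one shared comment body,
    supporting a 'representative comment plus tally' posting."""
    groups = defaultdict(list)
    for comment in comments:
        groups[normalize(comment["text"])].append(comment["name"])
    return groups

comments = [
    {"name": "A. Smith", "text": "Please withdraw the rule."},
    {"name": "B. Jones", "text": "please  withdraw the rule. "},
    {"name": "C. Lee", "text": "I support the rule."},
]
tally = group_duplicates(comments)
```

Under this sketch, the first two submissions fall into one group, so an agency following the DOL or EPA practice could post one representative copy with a tally of two, while the agencies' differing counting rules (per letter versus per signature) would change only how group members are counted, not the grouping itself.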
Comments with Potentially False Identity Information Five of the 10 selected agencies have documented internal guidance on how to manage posting comments that may have been submitted by someone falsely claiming to be the commenter. However, the procedures related to addressing comments with potentially false identity information also vary among agencies. For EBSA and WHD, guidance from DOL states that if a comment was submitted by someone falsely claiming to be the commenter, the identifying information is to be removed from the comment and the comment is treated as anonymous and remains posted. In cases where an individual claims that a comment was submitted to CFPB or SEC using the individual’s identity information without his or her consent, both agencies’ guidance provide staff with discretion to redact, reattribute, or otherwise anonymize the comment letter in question. According to internal guidance from CFPB, EPA, and SEC, if agency officials are able to confirm that a comment was submitted by someone falsely claiming to be the commenter, such as by the agency sending an email to the address associated with the comment, the comment may not be made available to the public. SEC officials stated that although they have discretion to remove the comment from public posting, the typical response is to encourage the individual making the claim to submit another comment correcting the record. Similarly, if a member of the public contacts EPA claiming that a comment was submitted using his or her identity information without consent and agency staff cannot confirm it, EPA guidance directs staff to ask the requester who submitted the claim to submit another comment to the docket explaining that the original comment was submitted without the individual’s consent. Both comments will be included in the docket. 
Identity-Related Guidance for Comment Analysis According to our analysis of the guidance the selected agencies provided, four of the 10 agencies have identity-related guidance for the comment analysis phase (see table 5). Identity-related guidance for the comment analysis phase includes criteria for coding comments for analysis, including by identifying the type of commenter (such as an individual or interest group). CMS guidance states that, during review, comments should be separated by issue area and tables may be used to assist in the grouping of comments and preparing briefing materials. While this guidance notes that these tables may be used to group commenters based on their identity during review, when summarizing comments later in the process the guidance indicates that CMS officials should avoid identifying commenters by name or organization. FDA training materials address how to prepare comment summaries to help ensure the agency has properly identified all comments regarding an issue. To conduct a quality-control check on the comment review process, FDA sorts the comments by commenter and reviews the comments from a sample of key stakeholders, including interested trade associations and consumer or patient groups, to confirm that relevant issues were identified. For EBSA and WHD, guidance from DOL recommends attaching the “organization name” to comments within a docket to improve transparency and help the agency and public users search for organizations within Regulations.gov. In addition, DOL guidance suggests flagging comments for additional review, including at least one flag based on identity. Identity-Related Guidance for Responding to Comments According to our analysis of the guidance the selected agencies provided, five of the 10 agencies have documented identity-related guidance for responding to comments. (See table 6.) 
Identity-related guidance for the response to comments phase addresses how, if at all, agency officials should treat identity information related to comments when developing the final rule. As discussed previously, during comment analysis, CMS guidance indicates that officials should avoid identifying commenters by name or organization when summarizing comments. These summaries may then be used as a basis for the agency's formal comment summary included in the preamble of the final rule. CFPB guidance states that a summary of the rulemaking process should be developed for the preamble of the final rule and should include how many comments were received and from which types of commenters. CFPB is to describe both the commenters and comments in general terms rather than identify commenters by name or entity. For example, rather than naming a specific financial institution, CFPB may refer to "industry commenters" in the final rule. For EBSA and WHD, guidance from DOL states that when several commenters suggest the same approach to revising or modifying the proposed rule, the names of specific commenters can be cited as a list in a footnote. When choosing which commenter should appear first in the list, DOL agencies are to select the commenter with the strongest or most detailed discussion of the issue. However, it is not necessary to always identify commenters by name, and, according to DOL officials, the department's general practice is not to do so. Instead, the agency may use phrases such as "several commenters" or "comments by the ABC Corporation and others." DOL agencies may also reference commenters by type rather than name, using terms including "municipal agency, state workforce agency, employer, academic representative, agency, and industry," among others. FDA training materials recommend that the final rule include a very brief explanation of the number and scope of comments on the proposed rule, including who submitted them.
Commenters are not identified as individuals, but rather by commenter type, such as trade associations, farms, or consumer advocacy organizations, among others.

Selected Agencies' Treatment of Identity Information Collected during the Public Comment Process Varies

Within the discretion afforded by the APA, the 10 selected agencies' treatment of identity information during the comment intake, comment analysis, and response to comments phases of the public comment process varies. Selected agencies differ in how they treat identity information during the comment intake phase, particularly in terms of how they post duplicate comments, which can lead to identity information being inconsistently presented to public users of comment systems. Selected agencies' treatment of identity information during the comment analysis phase also varies. Specifically, program offices with responsibility for analyzing comments place varied importance on identity information during the analysis phase. All agencies draft a response to comments with their final rule, but the extent to which the agencies identify commenters or commenter types in their response also varies across the selected agencies.

Selected Agencies Vary in Their Treatment of Identity Information during the Comment Intake Phase

Within the discretion afforded by the APA and E-Government Act, selected agencies vary in how they treat identity information during the comment intake phase, which includes identifying duplicate comments and posting comments to the public website. Further, the way in which the selected agencies treat comments during the comment intake phase results in identity information being inconsistently presented on the public website. Generally, officials told us that their agencies either (1) maintain all comments within the comment system, or (2) maintain some duplicate comment records outside of the comment system, for instance, in email file archives.
Specifically, officials from four selected agencies (CMS, FCC, FDA, and WHD) stated that they maintain all submitted comments in the comment system they use. Officials from the other six agencies (BLM, CFPB, EBSA, EPA, FWS, and SEC) stated that their agencies maintain some comment records associated with duplicate comments outside of the comment system. Among the four agencies that maintain all submitted comments within their comment system, our review of comment data showed that practices for posting duplicate comments led to some identity information or comment content being inconsistently presented on the public website. For example, according to CMS officials responsible for comment intake, CMS may post all duplicate comments individually, or post duplicate comments in batches. When duplicate comments are posted in batches, the comment title will include the name of the submitting organization followed by the total number of comments. However, as discussed previously, CMS does not have any documented policies or guidance associated with the comment intake process, and we identified examples where the practices described by CMS officials differed. On one CMS docket, for instance, staff entered more than 37,000 duplicate comments individually, with the commenter's name and state identified in the comment title. However, the attached document included with each of the posted comments was an identical copy of one specific comment containing a single individual's identity information. While all the individual names appear to have been retained in the comment titles, and the count of total comments is represented, any additional identity information, and any modifications individual commenters made to their duplicate comments, were not retained either within or outside of FDMS and are not presented on the public website. (See fig. 4.)
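The batching and tallying practices described above can be illustrated with a minimal, hypothetical sketch: comment text is normalized and identical bodies are grouped into sets, keeping a tally and the submitted names. The data, field names, and normalization rule are assumptions for illustration only, not any agency's actual intake process.

```python
# Hypothetical sketch: grouping duplicate comments before posting.
# Field names, data, and the normalization rule are illustrative only,
# not any agency's documented procedure.
import hashlib
from collections import defaultdict

comments = [
    {"name": "A. Smith", "text": "Please withdraw the proposed rule."},
    {"name": "B. Jones", "text": "please  withdraw the proposed rule. "},
    {"name": "C. Lee", "text": "I support the proposed rule."},
]

def dedup_key(text: str) -> str:
    """Normalize whitespace and case, then hash the comment body."""
    normalized = " ".join(text.lower().split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

groups = defaultdict(list)
for comment in comments:
    groups[dedup_key(comment["text"])].append(comment["name"])

# Each group is one "set" of duplicates: a tally plus the submitted names.
for names in groups.values():
    print(len(names), names)
# 2 ['A. Smith', 'B. Jones']
# 1 ['C. Lee']
```

Under a practice like SEC's, which posts one example per set with a count, only the first name in each group and the tally would appear online; the remaining names and any per-comment variations would not.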
Similarly, although our analysis of WHD comments did not suggest that any comments were missing from Regulations.gov, on one WHD docket almost 18,000 duplicate comments were associated with a single comment with one individual’s name identified in the comment title. While all of the comments are included within 10 separate attachments, none of the identity information included with these comments can be easily found without opening and searching all 10 attachments, most of which contain approximately 2,000 individual comments. (See fig. 5.) Our review of comment data showed that the selected agencies that maintain some comment records outside of the comment system (six of 10) also follow practices that can inconsistently present some identity information or comment content associated with duplicate comments. For BLM and FWS, agency officials responsible for comment intake stated that all comments received through Regulations.gov are posted, but a single example may be posted when duplicate paper comments are received. As discussed previously, neither BLM nor FWS have internal guidance or policy associated with comment intake. For CFPB, EBSA, EPA, and SEC, the agency may post a single example along with the total count of all duplicate comments, but does not necessarily post all duplicate comments online. Thus, identity information and unique comment contents for all duplicate comments may not be present on the public website. For example, on one CFPB comment, the agency posted an example of a submitted comment containing only the submitter’s illegible signature. None of the other associated identity information for the posted sample, or any of the duplicate comments, is included in the comment data. (See fig. 6.) Similarly, for all duplicate comments received, SEC posts a single example for each set of duplicate comments and indicates the total number of comments received. 
As a result, the identity information and any unique comment content beyond the first example are not present on the public website. (See fig. 7.)

The Importance of Identity Information to Comment Analysis Varies

On the basis of the results from our survey, program offices with responsibility for analyzing comments differ in the importance they place on identity information during the analysis phase. Because subject-matter experts are responsible for reviewing public comments and considering whether changes to the proposed rule should be made, program offices generally analyze comments. Officials from all but one of the 52 program offices we surveyed responded that they were responsible, in whole or in part, for analyzing public comments. In our survey of program offices with regulatory responsibilities in the 10 selected agencies, at least one program office in each agency reported that the identity or organizational affiliation of a commenter is at least slightly important to comment analysis. Additionally, five of the 10 selected agencies (CMS, EPA, FCC, FDA, and FWS) had at least one program office that reported that the identity or organizational affiliation of a commenter is not at all important to comment analysis. None of the 52 program offices we surveyed responded that the identity of an individual commenter is extremely important to their analysis, while only one program office responded that the commenter's organizational affiliation is extremely important to its analysis. (See fig. 8.) According to officials we interviewed from eight of the 10 selected agencies, the substance of the comment is considered during analysis rather than the submitted identity information. Officials from six of these agencies emphasized that because the agency accepts anonymous comments, identity is not relevant to their analysis of comments. However, officials from four of the eight selected agencies stated that, in certain instances, identity information may be noted.
In the case of FDA, officials explained that commenters are required to indicate a category to which they belong, such as "individual consumer" or "academia." According to FDA officials, however, these categories were used to assist in writing the comment response rather than to inform the analysis. Officials from the Department of the Interior's Office of the Solicitor (responsible for part of the comment process at BLM and FWS) stated that the agency may make particular note of comments submitted by a law firm, as these comments can help the agency understand the position of the law firm and prepare a defense in the event that a lawsuit is filed. Similarly, officials from EPA stated that they are familiar with many commenters and their positions on certain issues, due to prior legal interactions. In another example of how an agency may consider the identity of a commenter, officials from FWS stated that when scientific data are provided in support of a comment, subject-matter experts will verify the data and their source.

Selected Agencies Differ in How They Identify Commenters When Responding to Comments

All selected agencies draft a response to comments with their final rule, but the extent to which the agencies identify commenters in their response varies. In our survey of program offices with regulatory responsibility, officials from 51 of 52 offices stated that they are responsible, in whole or in part, for responding to comments. Of those responsible, at least one program office from eight of the 10 agencies (28 of 52 offices) reported that they identified comments by commenter name, organization, or comment ID number in the response to comments for at least some rulemakings since 2013.
In the case of WHD, officials we interviewed explained that when they discuss a specific comment in the preamble to the final rule, they provide the name of the organization that submitted the comment so that anyone interested in locating the response to the comment may do so easily. We found that EBSA and FCC also identified commenters by individual or organizational name in their response to comments, while EPA referred to comments by their comment ID number. For example, in a rule finalized in 2018, EPA referred to comment ID numbers in the response to comments: "Two comments: EPA-R06-RCRA-2017-0556-0003 and EPA-R06-RCRA-2017-0556-0005 were submitted in favor of the issuance of the petition." EPA officials noted that there is variation within the agency in terms of how commenters are identified when the agency is responding to comments, and there may be some situations where the commenter is identified by name. Officials from all program offices within CFPB and BLM responded in the survey that they never identified comments by commenter name, organization, or comment ID in their responses to public comments. In its response to comments in a 2014 final rule, for example, CFPB stated that "industry commenters also emphasized the need to coordinate with the States," without specifying the organization or specific comments. Similarly, in its response to comments document for a 2016 rule, BLM responded directly to the themes and issues raised by comments while stating that the issue was raised by "one commenter" or "some commenters."

Selected Agencies' Practices Associated with Posting Identity Information Are Not Clearly Communicated to Public Users of Comment Websites

The 10 selected agencies have implemented varied ways of posting identity information during the comment intake process, particularly regarding posting duplicate comments, as allowed by the APA.
Our analysis of Regulations.gov and agency-specific comment websites shows that these practices are not always documented or clearly communicated to public users of the websites. Public users are members of the public interested in participating in the rulemaking process via Regulations.gov or agency-specific websites. They may or may not submit a comment. In part to facilitate effective public participation in the rulemaking process, the E-Government Act requires that all public comments and other materials associated with the rulemaking docket be made "publicly available online to the extent practicable." There may be situations where it is not practicable to post all submitted items, for example, when resource constraints prevent the scanning and uploading of thousands of duplicate paper comments. Because the content of such comments is still reflected in the administrative record, such practices are not prohibited by the APA or the E-Government Act. However, key practices for transparently reporting open government data state that federal government websites—like those used to facilitate the public comment process—should fully describe the data that are made available to the public, including by disclosing data sources and limitations. This helps public users make informed decisions about how to use the data provided. In the case of identity information submitted with public comments, for example, public users may want to analyze identity information to better understand the geographic locations from which comments are being submitted, and would need information about the availability of address information to do so. The Administrative Conference of the United States has made several recommendations related to managing electronic rulemaking dockets.
These include recommendations that agencies disclose to the public their policies regarding the treatment of materials submitted to rulemaking dockets, such as those associated with protecting sensitive information submitted by the public. As described earlier in this report, the varied practices that selected agencies use with regard to identity information during the public comment process result in the inconsistent presentation of this information on the public websites, particularly when it is associated with duplicate comments. Although the APA and E-Government Act do not include any requirements associated with the collection or disclosure of identity information, we found that the selected agencies we reviewed do not effectively communicate the limitations and inconsistencies in how they post identity information associated with public comments. As a result, public users of the comment websites lack information related to data availability and limitations that could affect their ability to use the comment data and effectively participate in the rulemaking process themselves.

Selected Agencies' Practices Associated with Posting Identity Information on Regulations.gov Vary and Are Not Clearly Communicated to Public Users

Public users of Regulations.gov seeking to submit a comment are provided with a blanket disclosure statement related to how their identity information may be disclosed, and are generally directed to individual agency websites for additional detail about submitting comments. The Regulations.gov disclosure statements and additional agency-specific details are provided on the comment form, and a user seeking to review comments (rather than submit a comment) may not encounter them on Regulations.gov.
Regulations.gov provides the following disclosure statement at the bottom of each comment submission form: Any information (e.g., personal or contact) you provide on this comment form or in an attachment may be publicly disclosed and searchable on the Internet and in a paper docket and will be provided to the Department or Agency issuing the notice. To view any additional information for submitting comments, such as anonymous or sensitive submissions, refer to the Privacy Notice and User Notice, the Federal Register notice on which you are commenting, and the Web site of the Department or Agency. Similar information is provided to all public users in the Privacy Notice, User Notice, and Privacy Impact Assessment for Regulations.gov and the eRulemaking Program. While all of these note that any information, personal or otherwise, submitted with comments may be publicly disclosed, public users are not provided any further detail on Regulations.gov regarding what information, including identity information, they should expect to find in the comment data. We found that when Regulations.gov provides public users with additional agency-specific information about the comment intake process, including accepting and posting comments, it is typically provided in the context of the comment form and does not give public users enough detail to determine what comment data will be available for use when searching comments that have already been submitted. Specifically, each comment form contains a pop-up box under the heading "Alternate Ways to Comment," which reflects the language associated with comment submission methods included in the NPRM on which individuals are seeking to comment.
Additionally, three participating agencies in our review (EPA, FWS, and WHD) provide additional detail about posting practices on the comment form under the heading "Agency Posting Guidelines." Both FWS and WHD indicate that the entire comment, including any identifying information, may be made available to the public. Although WHD follows DOL policy associated with posting duplicate comments, which allows some discretion in posting practices, according to a WHD official, all comments are posted to Regulations.gov without exception. In our review of WHD comment data, we did not identify instances where this practice was not followed. The "Agency Posting Guidelines" provided by EPA inform public users that all versions of duplicate or near-duplicate comments submitted as part of mass mail campaigns may not be posted; rather, a representative sample will be provided, with a tally of the total number of duplicate comments received. (See fig. 9.) However, this information does not provide enough detail to help public users determine whether all of the individual comments and associated identity information are posted within a given docket, because it indicates that samples are provided for duplicate comments, rather than all of the copies submitted. We found that one EPA docket received more than 350 separate sets of duplicate comments comprising a total of more than 4.3 million comments (as reported by Regulations.gov), but there is variation in how these comments were posted. Specifically, EPA inconsistently presented duplicate comments: 198 of the 350 duplicate comment sets in this docket were submitted via email. Of the duplicate comment sets submitted via email, 45 sets have all comments posted in Regulations.gov, while 153 sets have only a sample of the comments posted. According to EPA officials, this inconsistency results from the format in which the comments were submitted.
For example, when duplicate comments are compiled into a single document and submitted to EPA through one email, all of the comments will be posted, whereas duplicate comments that are emailed separately will be accounted for in the tally accompanying a sample comment. While the APA and the E-Government Act do not require comments to be posted in any particular way, EPA has established detailed internal guidance for the comment intake process for its Docket Center staff. This document is in draft form, but clearly lays out the processes EPA staff are expected to follow when duplicate comments are submitted in different ways, and what naming conventions will be used in different instances. However, EPA does not provide similar information to public users about the process it uses to determine whether all duplicate comments will be posted, making it challenging for public users to determine whether all comments are available on Regulations.gov.

Participating Agency Websites

The eRulemaking PMO provides participating agencies with flexibility in how they choose to use FDMS and Regulations.gov, with each department or agency responsible for managing its own data within the website. As a result, Regulations.gov directs public users to participating agencies' websites for additional information about agency-specific review and posting policies. We found that all of the selected participating agencies provide additional information of some kind about the public comment process on their own websites. However, the provided information usually directs users back to Regulations.gov or to the Federal Register. Further, even when selected participating agencies include details on their websites about the agency's posting practices or treatment of identity information associated with public comments, those details do not fully describe the data limitations that public users need to make informed decisions about how to use the data provided.
Specifically, seven of the eight participating agencies (BLM, CMS, CFPB, EPA, FWS, FDA, and WHD) direct public users back to Regulations.gov and the Federal Register, either on webpages that are about the public comment process in general, or on pages containing information about specific NPRMs. As discussed previously, however, the disclosure statement on Regulations.gov directs public users to the agency website for additional information. Although three of these participating agencies (EPA, FWS, and FDA) do provide public users with information beyond directing them back to Regulations.gov or the Federal Register, only FDA provides users with details about posting practices that are not also made available on Regulations.gov.

EPA: The additional information provided on EPA's website largely replicates the "Agency Posting Guidelines" provided on the Regulations.gov comment form, as shown in figure 9. As discussed previously, however, the way in which EPA posts duplicate comments varies, and the provided information does not include details about the process the agency uses to determine whether all duplicate comments will be posted.

FWS: One NPRM-specific web page that we identified communicated to public users that all comments will be posted on Regulations.gov, including any personal information provided through the process. This largely replicates the "Agency Posting Guidelines" provided on the Regulations.gov comment form, as well as language included in the NPRM itself. However, according to an FWS official, when the agency receives hard-copy duplicate comments through the mail, only one sample of the duplicate is posted publicly on Regulations.gov. FWS does not have any policies related to this practice, and the information FWS provides to public users does not include details about how the agency determines which comment to post as the sample.
FDA: On its general website, FDA includes a webpage titled "Posting of Comments." On this page, FDA provides users with a detailed explanation of a policy change the agency made in 2015 related to the posting of public comments submitted to rulemaking proceedings. Specifically, prior to October 2015, FDA did not publicly post comments submitted by individuals in their individual capacity. (See fig. 10.) Since October 15, 2015, FDA's policy has been to publicly post all comments to Regulations.gov, including any identifying information submitted with the comment. In our review of FDA comments submitted to dockets opened since October 15, 2015, we did not identify instances where this policy was not followed.

The one participating agency in our scope (EBSA) that does not direct public users back to Regulations.gov instead recreates the entire rulemaking docket on its own website. On the main EBSA webpage related to regulations, public users can find links to various websites related to rulemaking, including a "Public Comments" page, but not Regulations.gov. From the "Public Comments" page, public users can access pages that are specific to NPRMs and other activities for which EBSA is requesting public comments. On the NPRM-specific webpages, the rulemaking docket that can be found on Regulations.gov is duplicated, including individual links to each submitted comment. Certain document links, such as those for the proposed rule or final rule, direct a public user to the Federal Register document, but the comment links do not direct users to Regulations.gov. While EBSA follows DOL guidance associated with posting duplicate comments, which allows some discretion in posting practices, EBSA does not have a policy for how comments are posted to Regulations.gov or its own website, and in the examples we reviewed the content of the docket pages does not always match.
According to EBSA officials, the agency began this practice prior to the development of Regulations.gov, and has continued it because internal staff and other stakeholders find the webpages useful. However, we have previously reported that reducing or eliminating duplicative government activities can help agencies provide more efficient and effective services. Further, on EBSA's "Public Comments" webpage, public users are informed that comments with inappropriate content will be removed, but no other information associated with EBSA's posting practices is provided on this general page. In one instance on an NPRM-specific webpage, public users are informed that identity information has been removed from certain comments due to the inclusion of personal health information, but most of the NPRM-specific webpages we reviewed did not include this disclosure. Additionally, duplicate comments are posted on the NPRM-specific webpages under the heading "Petitions," and are posted with a number following the title of the comment. While public users are informed that the number represents the total number of comments submitted, not all links include a copy of each individual comment. This practice aligns with DOL guidance, but as a result, the way in which EBSA posts duplicate comments varies even within dockets, and the provided information does not include details about the process the agency uses to determine whether all duplicate comments will be posted. Additionally, because EBSA recreates rulemaking dockets on its own website without referencing Regulations.gov or explaining the process, public users lack assurance about how EBSA's data sources relate to one another. Because participating agencies are not required to adhere to standardized posting practices, Regulations.gov directs public users to participating agency websites for additional information about posting practices and potential data limitations.
However, the additional information provided on the selected agencies' websites is rarely different from what is provided on Regulations.gov. Further, it does not describe the limitations associated with the identity information contained in publicly posted comments, and in many cases simply directs users back to Regulations.gov. As allowed for under the APA, all of the participating agencies in our review vary in the way in which they post identity information associated with comments—particularly duplicate comments. However, the lack of accompanying disclosures may lead users to assume, for example, that only one entity has weighed in on an issue when, actually, that comment represents 500 comments. The APA, E-Government Act, and relevant Executive Orders establish the importance of public participation in the rulemaking process, including access to electronic rulemaking dockets in formats that can be easily searched and downloaded. Further, key practices for transparently reporting open government data state that federal government websites—like those used to facilitate the public comment process—should fully describe the data that are made available to the public, including by disclosing data sources and limitations. Without better information about the posting process, the inconsistency in the way in which duplicate comments are presented to public users of Regulations.gov limits public users' ability to explore and use the data and could lead users to draw inaccurate conclusions about the public comments that were submitted and how agencies considered them during the rulemaking process.

Agency-Specific Comment Websites Do Not Clearly Communicate Posting Policies to Public Users

Both SEC and FCC use comment systems other than Regulations.gov and follow standardized posting processes associated with public comments submitted to their respective comment systems, but SEC has not clearly communicated these practices to the public.
Although it appears to users of the SEC website that the agency follows a consistent process for posting duplicate comments, this practice has not been documented or communicated to public users of its website. As discussed earlier, SEC posts a single example for each set of duplicate comments and indicates the total number of comments received. As a result, the identity information and any unique comment content beyond the first example are not accessible to the public online. According to SEC officials, this practice is not documented in formal policy, and is not explicitly communicated to public users of the SEC’s comment website. Although SEC does provide public users with some information on its “How to Submit Comments” page, this information is limited to informing public users that all comments will be posted publicly, without any edits to personal identifying information, and no other information related to SEC’s posting process is provided. Without clearly communicated policies for posting comments, public users of SEC.gov do not have information related to data sources and limitations needed to determine whether and how they can use the data associated with public comments. In contrast, FCC identifies its policies for posting comments and their associated identity information in a number of places on the FCC.gov website, and on the ECFS web page within the general website. Regarding comments submitted to rulemaking proceedings through ECFS, public users are informed that all information submitted with comments, including identity information, will be made public. According to FCC officials, all comments are posted directly to ECFS as they are submitted, without intervention by FCC staff. Further, according to officials, all duplicate comments remain in ECFS as individual comments, unless an organization submits a Standard filing with an attached file containing multiple comments. 
Our review of ECFS comment data did not identify discrepancies with this practice.

Conclusions

While the public comment process allows interested parties to state their views about prospective rules, the lack of communication with the public about the way in which agencies treat identity information during the posting process, particularly for duplicate comments, may inhibit users’ meaningful participation in the rulemaking process. While the APA does not include requirements for commenters to provide identity information, or for agency officials to include commenter identity as part of their consideration of comments, key practices for transparently reporting open government data state that federal government websites—like those used to facilitate the public comment process—should fully describe the publicly available data, including by disclosing data sources and limitations. Without clear communication about how comments and their associated identity information are presented in the data, public users could draw inaccurate conclusions about public comments, limiting their ability to participate in the rulemaking process.

Five of the selected agencies do not have a policy for posting comments, and the selected agencies generally do not clearly communicate to public users the way in which they publicly post comments and their associated identity information. In addition, one agency fully duplicates rulemaking dockets on its own website without informing users that the information may be found in a searchable database on Regulations.gov. Regulations.gov does not provide detailed information about posting policies, and seven of the eight participating agencies in the scope of our review direct public users back to Regulations.gov or the Federal Register on their own websites.
Further, the available information is provided on the comment form, so public users seeking to review previously submitted comment data may not encounter it. Because all of the participating agencies in our review vary in the way in which they post identity information associated with comments—particularly duplicate comments—the lack of accompanying disclosures may lead users to reach inaccurate conclusions about who submitted a particular comment, or how many individuals weighed in on an issue. As a result, public users of Regulations.gov do not have information related to data sources and limitations that could affect their ability to effectively use the comment data and, consequently, participate in the rulemaking process. Similarly, users of SEC.gov do not have the information related to data sources and limitations needed to determine whether and how they can use the data associated with public comments, because the agency lacks a policy for posting duplicate comments and associated identity information to the public. In short, more clearly communicated information about posting policies, particularly with regard to identity information and duplicate comments, could help public users make informed decisions about how to use the comment data these agencies provide, and how comments may have informed the rulemaking process.

Recommendations for Executive Action

We are making the following eight recommendations to the Directors of BLM, CFPB, and FWS; the Administrators of CMS, EPA, and WHD; the Assistant Secretary of Labor for EBSA; and the Chairman of the SEC, respectively:

The Director of BLM should create and implement a policy for standard posting requirements regarding comments and their identity information, particularly for duplicate comments, and should clearly communicate this policy to the public on the BLM website.
(Recommendation 1)

The Administrator of CMS should create and implement a policy for standard posting requirements regarding comments and their identity information, particularly for duplicate comments, and should clearly communicate this policy to the public on the CMS website. (Recommendation 2)

The Director of CFPB should finalize its draft policy for posting comments and their identity information, particularly for duplicate comments, and clearly communicate it to the public on the CFPB website. (Recommendation 3)

The Assistant Secretary of Labor for EBSA should
1. create and implement a policy for standard posting requirements regarding comments and their identity information, particularly for duplicate comments;
2. clearly communicate this policy to the public on the EBSA website; and
3. evaluate the duplicative practice of replicating rulemaking dockets on the EBSA website, to either discontinue the practice or include a reference to Regulations.gov and an explanation of how the pages relate to one another.
(Recommendation 4)

The Administrator of EPA should finalize its draft policy for posting comments and their identity information, particularly for duplicate comments, and clearly communicate it to the public on the EPA website. (Recommendation 5)

The Director of FWS should create and implement a policy for standard posting requirements regarding comments and their identity information, particularly for duplicate comments, and should clearly communicate this policy to the public on the FWS website. (Recommendation 6)

The Chairman of the SEC should develop a policy for posting duplicate comments and associated identity information and clearly communicate it to the public on the SEC website. (Recommendation 7)

The Administrator of WHD should clearly communicate its policy for posting comments and their identity information, particularly for duplicate comments, to the public on the WHD website.
(Recommendation 8)

Agency Comments and Our Evaluation

We provided drafts of this product for comment to CFPB, EPA, FCC, SEC, the Department of Health and Human Services, the Department of the Interior, and DOL. We received written comments from three of the selected agencies and the three departments; these comments are reproduced in appendixes V through X. All of the selected agencies generally agreed with the recommendations directed to them and indicated that they intended to take action to more clearly communicate their posting policies to the public. BLM, EBSA, FWS, and SEC also stated that they intend to develop written policies associated with posting comments.

In its written comments, the Department of Health and Human Services stated that CMS already has policies for standard posting requirements. However, CMS could not provide us with this policy during the course of our review, and in the accompanying technical comments, officials stated that guidance associated with posting comments has not been formalized in a written document. Given that we found significant variation in the way that CMS posts comments, even within a single docket, we continue to believe that it is important for CMS to develop and implement a standard policy for posting comments and their identity information, in addition to communicating this policy to the public on the CMS website.

CFPB and EPA also stated that they intend to finalize their draft policies for posting comments and their associated identity information. In addition, EPA included technical comments in its letter, which we considered and incorporated in this report as appropriate. FCC had no comments on the draft report but provided technical comments, which we incorporated as appropriate. The remaining selected agencies and departments also provided technical comments, which we considered and incorporated in this report as appropriate.
As arranged with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees; the Director of CFPB; the Administrator of EPA; the Chairmen of FCC and SEC; and the Secretaries of Health and Human Services, the Interior, and Labor. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6722 or bagdoyans@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix XI.

Appendix I: Survey of Program Offices with Regulatory Responsibilities within Selected Agencies

To determine how selected agencies treat identity information associated with public comments, in October 2018 we surveyed and received responses from 52 program offices within the selected agencies about their practices associated with comment intake (including identifying duplicate comments and posting comments to the public website), comment analysis (including reviewing comments and considering their content), and response to comments. To select the program offices to receive survey questionnaires about the public comment process, we first reviewed agency websites to identify all of the program offices in each of the selected agencies. We then identified program offices with regulatory responsibilities described by the websites and that had issued at least one Notice of Proposed Rulemaking (NPRM) from 2013 through 2017, and provided these lists to the selected agencies for confirmation. Table 7 lists the program offices we surveyed.
Survey Development

We developed a draft survey questionnaire in conjunction with another GAO engagement team conducting work on the public comment process, and pretested it with program office officials from four of the selected agencies in August and September 2018. We interviewed these officials to improve the questionnaire and ensure that (1) the questions were clear and unbiased, (2) the information could be feasibly obtained by program office officials, (3) the response options were appropriate and reasonable, and (4) the survey did not create an undue burden on program office officials. The process of developing the survey was iterative: we used the results of one pretest to modify the questionnaire for the next pretest.

Survey Administration and Review

We distributed the questionnaires to the program offices as fillable Portable Document Format (PDF) forms in October 2018, requesting that officials collaborate with others in their office to ensure the responses were reflective of the program office as a whole, rather than one individual’s experience. Two agencies, CMS and SEC, have agency-level administrative offices with centralized responsibilities for certain aspects of the public comment process. For these agencies, the selected program offices were instructed to leave certain questions blank, and we provided separate questionnaires for the administrative offices. All 52 program offices completed the survey, but the results cannot be generalized to program offices outside of the selected agencies. In developing, administering, and analyzing this survey, we took steps to minimize the potential errors that may result from the practical difficulties of conducting any survey. Because we surveyed and received responses from all program offices with regulatory responsibilities in the selected agencies, our results are not subject to sampling or nonresponse error.
We pretested and reviewed our questionnaire to minimize measurement error that can arise from differences in how questions are interpreted and the sources of information available to respondents. We also answered questions from program offices during the survey, reviewed completed questionnaires, and conducted follow-up as necessary. On the basis of this follow-up and with agreement from the responding officials, we edited responses as needed. For CMS and SEC, we edited the blank questions in the program office questionnaires with responses from their administrative offices.

Relevant Survey Questions

Appendix II: Regulations.gov Comment Form Example

Comments are submitted to Regulations.gov via an electronic comment form. See figure 11 for an example of a comment form from Regulations.gov.

Appendix III: Electronic Comment Filing System Comment Forms

The Federal Communications Commission’s (FCC) Electronic Comment Filing System (ECFS) allows commenters to submit comments to rulemaking proceedings via a Standard filing and Express filing. A Standard filing allows commenters to attach a file to their comment. See figure 12 for an example of a Standard filing. An Express filing does not allow for files to be attached. See figure 13 for an example of an Express filing.

Appendix IV: Securities and Exchange Commission Comment Form Example

One way in which comments are submitted to the Securities and Exchange Commission (SEC) is through an electronic comment form. See figure 14 for an example of a comment form from SEC.gov.
Appendix V: Agency Comments from the Bureau of Consumer Financial Protection

Appendix VI: Agency Comments from the Environmental Protection Agency

Appendix VII: Agency Comments from the Department of Health and Human Services

Appendix VIII: Agency Comments from the Department of the Interior

Appendix IX: Agency Comments from the Department of Labor

Appendix X: Agency Comments from the Securities and Exchange Commission

Appendix XI: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, David Bruno (Assistant Director), Elizabeth Kowalewski (Analyst in Charge), Enyinnaya David Aja, Gretel Clarke, Lauren Kirkpatrick, James Murphy, Alexandria Palmer, Carl Ramirez, Shana Wallace, and April Yeaney made key contributions to this report. Other contributors include Tim Bober, Dahlia Darwiche, Colin Fallon, Justin Fisher, James Healy, Katie LeFevre, Barbara Lewis, and Maria McMullen.
Why GAO Did This Study

Federal agencies publish on average 3,700 proposed rules yearly and are generally required to provide interested persons (commenters) an opportunity to comment on these rules. In recent years, some high-profile rulemakings have received extremely large numbers of comments, raising questions about how agencies manage the identity information associated with comments. While the APA does not require the disclosure of identifying information from a commenter, agencies may choose to collect this information.

This report examines (1) the identity information collected by Regulations.gov and agency-specific comment websites; (2) the guidance agencies have related to the identity of commenters; (3) how selected agencies treat identity information; and (4) the extent to which selected agencies clearly communicate their practices associated with identity information. GAO selected a nongeneralizable sample of 10 federal agencies on the basis of large comment volume. GAO surveyed 52 program offices within these agencies about their comment process, and reviewed comment websites, agency guidance, and posted comment data. GAO also interviewed relevant agency officials.

What GAO Found

The Administrative Procedure Act (APA) governs the process by which many federal agencies develop and issue regulations, which includes the public comment process (see figure below). Regulations.gov and agency-specific comment websites collect some identity information—such as name, email, or address—from commenters who choose to provide it during the public comment process. The APA does not require commenters to disclose identity information when submitting comments. In addition, agencies have no obligation under the APA to verify the identity of such parties during the rulemaking process. GAO found that seven of 10 selected agencies have some internal guidance associated with the identity of commenters, but the substance varies, reflecting the differences among the agencies.
The guidance most frequently relates to the comment intake or response-to-comment phases of the public comment process. With the discretion afforded by the APA, selected agencies’ treatment of commenters’ identity information varies, particularly when posting duplicate comments (identical or near-identical comment text but varied identity information). Generally, officials told GAO that their agencies (1) post all comments within the comment system, or (2) maintain some comments outside of the system, such as in email file archives. For instance, one agency posts a single example of duplicate comments and indicates the total number of comments received. However, within these broad categories, posting practices vary considerably—even within the same agency—and identity information is inconsistently presented on public websites.

Selected agencies do not clearly communicate their practices for how comments and identity information are posted. GAO’s key practices for transparently reporting government data state that federal government websites should disclose data sources and limitations to help public users make informed decisions about how to use the data. As a result, public users of the comment websites could reach inaccurate conclusions about who submitted a particular comment, or how many individuals commented on an issue.

What GAO Recommends

GAO is making a total of eight recommendations to the selected agencies to more clearly communicate to the public their policies for posting comments and associated identity information to Regulations.gov and agency-specific comment websites. The selected agencies generally agreed with the recommendations.
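The duplicate-comment posting practice described in this report (one agency posts a single example for each set of duplicate comments, plus a count of how many were received) can be sketched as a small grouping routine. This is an illustrative sketch, not any agency's actual system; the field names and normalization rule are assumptions.

```python
from collections import defaultdict

def group_duplicates(comments):
    """Group comments whose text is identical after simple
    whitespace/case normalization; keep one example per group
    plus a count. A sketch of the 'single example + total
    received' posting practice, not an agency implementation."""
    groups = defaultdict(list)
    for c in comments:
        key = " ".join(c["text"].lower().split())  # normalize text
        groups[key].append(c)
    # Only the first commenter's identity information survives
    # in the posted view, which is the limitation the report notes.
    return [{"example": dupes[0], "count": len(dupes)}
            for dupes in groups.values()]

comments = [
    {"name": "A. Smith", "text": "Please withdraw this rule."},
    {"name": "B. Jones", "text": "please  withdraw this rule."},
    {"name": "C. Lee", "text": "I support the proposal."},
]
posted = group_duplicates(comments)
print([(p["example"]["name"], p["count"]) for p in posted])
```

Note how the identity information for the second duplicate commenter never reaches the posted view, which is exactly the disclosure gap the report describes.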
Background

Tax-Time Financial Products

Table 1 provides an overview of tax-time financial products based on information gathered during our review.

Participants in the Tax-Time Financial Products Industry

The tax-time financial products industry consists of four main groups of participants: banks, paid providers of tax preparation services, settlement service providers, and software developers.

Providers of tax preparation services include paid tax return preparers or electronic return originators (ERO). Not all tax preparers are EROs, but because IRS generally requires returns to be filed electronically for tax preparers filing more than 10 returns, tax preparers generally work with or for an ERO that also may be a tax preparer. Paid preparers and EROs offer their services in person, on the Internet, or through software sold to taxpayers. They generally offer different refund disbursement options to taxpayers and may partner with banks to offer tax-time financial products.

Software developers provide the software needed to file tax returns electronically and offer tax-time financial products through their software to taxpayers. The largest tax preparation companies have their own software that allows them to prepare returns as well as offer tax-time financial products. Applications for the products generally can be completed through the same software used to file the return.

Banks provide tax-time financial products. They also may approve and process product applications and perform settlement services (discussed below).

Settlement service providers serve as intermediaries in transactions to deliver tax-time products. They work with banks to accept and process applications for tax products; allocate payments due to paid preparers, other providers, banks, and taxpayers; and provide distribution instructions to banks. Some banks have affiliates that perform settlement services, and some banks perform these functions themselves.
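The settlement role described above — deducting each party's fees from the deposited refund and paying the remainder to the taxpayer — can be sketched as a simple allocation. This is a hypothetical illustration of a refund transfer settlement; the dollar amounts and party names are assumptions, not figures from the report.

```python
def settle_refund_transfer(refund, fees):
    """Allocate a refund deposited to a temporary account:
    deduct each party's fee, pay the remainder to the taxpayer.
    A hypothetical sketch of the settlement role, not any
    provider's actual logic."""
    total_fees = sum(fees.values())
    if total_fees > refund:
        raise ValueError("fees exceed refund amount")
    payouts = dict(fees)               # amounts owed to preparer, bank, etc.
    payouts["taxpayer"] = refund - total_fees
    return payouts

# Example: $3,000 refund, $300 preparation fee, $40 refund transfer fee
result = settle_refund_transfer(3000, {"preparer": 300, "bank": 40})
print(result)  # {'preparer': 300, 'bank': 40, 'taxpayer': 2660}
```

The key point the sketch captures is that the taxpayer receives the refund net of all product and preparation fees, which is why fee levels matter to the consumer-protection discussion later in the report.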
Figure 1 illustrates the roles of these groups, using the example of a refund transfer transaction.

Regulators

Federal Banking Regulators

The purpose of federal banking supervision is to help ensure that banks throughout the financial system operate in a safe and sound manner and comply with banking laws and regulations in the provision of financial services. At the federal level, banks are supervised by one of the following three prudential regulators and CFPB:

The Federal Reserve supervises state-chartered banks that opt to be members of the Federal Reserve System, bank holding companies and savings and loan holding companies (and the nondepository institution subsidiaries of those organizations), and nonbank financial companies designated for Federal Reserve supervision by the Financial Stability Oversight Council.

FDIC supervises all FDIC-insured state-chartered banks that are not members of the Federal Reserve System as well as state savings associations, and insures the deposits of all banks and thrifts approved for federal deposit insurance.

OCC supervises federally chartered national banks, federal savings associations (federal thrifts), and federally chartered branches and agencies of foreign banks.

CFPB has rulemaking authority to implement provisions of federal consumer financial law and enforces various federal laws and regulations governing consumer financial protection. CFPB also examines banks with more than $10 billion in assets and their affiliates, and certain nonbanks, for compliance with federal consumer financial laws; accepts consumer complaints on topics such as debt collection and other consumer financial products or services; and educates consumers about their rights under federal consumer financial laws.

FDIC, the Federal Reserve, and OCC are required to conduct a full-scope, on-site risk-management examination of each of their supervised banks at least once during each 12-month period.
The regulators may extend the examination interval to 18 months, generally for banks and thrifts that have less than $3 billion in total assets and that meet certain conditions (for example, if they have satisfactory ratings, are well capitalized, and are not subject to a formal enforcement action). The prudential regulators generally conduct consumer compliance examinations every 12–36 months and Community Reinvestment Act examinations every 12–72 months. The specific timing depends on a bank’s size and its previous consumer compliance and Community Reinvestment Act rating.

The Dodd-Frank Wall Street Reform and Consumer Protection Act transferred consumer protection oversight and other authorities over certain consumer financial protection laws from multiple federal regulators to CFPB. Additionally, for the transferred laws, such as the Truth in Lending Act (TILA) and the Equal Credit Opportunity Act, CFPB has examination and primary enforcement authority for banks with assets of more than $10 billion and any affiliates of such institutions. The three prudential regulators are also responsible for supervising compliance with federal consumer financial laws for insured depository institutions with total assets of $10 billion or less. For example, they examine depository institutions for compliance with consumer financial laws including the Fair Housing Act, the Servicemembers Civil Relief Act, and Section 5 of the Federal Trade Commission Act.

FTC can enforce Section 5 of the Federal Trade Commission Act, which prohibits unfair or deceptive acts or practices affecting commerce, and TILA, which seeks to promote the informed use of consumer credit. TILA requires disclosures about the terms and cost of credit and standardizes the manner in which costs associated with borrowing are calculated and disclosed.
FTC can enforce a number of additional statutes against certain entities. These include portions of the Gramm-Leach-Bliley Act, which requires financial institutions, including those providing tax-time financial products, to protect consumer data; the Telemarketing and Consumer Fraud and Abuse Prevention Act, which prohibits telemarketers from making misrepresentations in the sale of goods or services, which could include tax-time financial products; and the Military Lending Act, which provides important protections for servicemembers and their dependents seeking and obtaining certain types of consumer credit, including refund anticipation loans.

The Office of Professional Responsibility within IRS is responsible for ensuring that all tax practitioners (defined as certified public accountants, attorneys, enrolled agents, enrolled actuaries, appraisers, and enrolled retirement plan agents) and other individuals authorized to practice before IRS adhere to regulations relating to Circular 230, which governs practice before IRS. According to IRS, the agency is neither involved in offering, nor responsible for, tax-time financial products. Nonetheless, IRS stated that it addresses these types of products on its website because it is important for taxpayers to understand the terms of the loan products, which constitute an agreement between them and the third-party lender. Although IRS is not statutorily required to collect data on tax-time products, according to IRS officials, the agency retains information on use of the products. Specifically, IRS compiles information from tax returns that indicates whether the taxpayer also applied for a financial product. IRS also issues guidance to EROs on reporting these data through its Handbook for Authorized IRS e-File Providers of Individual Income Tax Returns (Pub. 1345).
IRS makes the usage data publicly available on its website, and provides them on a biweekly basis to industry participants that are members of an IRS working group on security issues. In addition to researchers and consumer advocacy groups, federal entities also use these data, including the National Taxpayer Advocate, who leads IRS’s Taxpayer Advocate Service—an independent office in IRS whose objectives include mitigating systemic problems that affect large groups of taxpayers. As industry data on product use are generally limited, agencies and researchers rely on IRS for this information.

Tax Credits and the Protecting Americans from Tax Hikes Act of 2015

Refundable tax credits include the Earned Income Tax Credit (EITC) and the Additional Child Tax Credit (ACTC). The credits are termed refundable because, in addition to offsetting tax liability, any excess credit over the tax liability is refunded to the taxpayer. EITC provides tax benefits to eligible workers earning relatively low wages. For tax year 2018, the maximum EITC amount available was $6,431 for taxpayers filing jointly with three or more qualifying children, and $519 for individuals without children. In 2017, EITC provided more than $65 billion to about 27 million taxpayers. ACTC is the refundable portion of the Child Tax Credit and provides tax relief to low-income families with children.

The Protecting Americans from Tax Hikes Act of 2015 (PATH Act) made several changes to the tax law. One of its provisions stipulates that funds owed taxpayers claiming EITC or ACTC refunds for a tax year cannot be released before February 15, to allow IRS time to review these returns for potential fraudulent activity. This change became effective on January 1, 2017. For the 2018 tax filing season (January through April 2018), refunds for taxpayers who claimed these tax credits were not available in bank accounts or on prepaid cards until February 27, 2018.
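The refundable-credit arithmetic described above — the credit first offsets tax liability, and any excess is paid out as a refund — can be shown with a small calculation. The function and dollar figures are hypothetical illustrations, not IRS computations.

```python
def refund_from_credit(tax_liability, refundable_credit):
    """Split a refundable credit into the portion that offsets
    tax liability and the excess refunded to the taxpayer.
    A hypothetical illustration of refundability, not tax logic."""
    offset = min(tax_liability, refundable_credit)
    refunded = refundable_credit - offset
    return offset, refunded

# Example: $500 of tax liability, $2,000 refundable credit:
# $500 offsets the liability and $1,500 is refunded.
offset, refunded = refund_from_credit(500, 2000)
print(offset, refunded)  # 500 1500
```

With a nonrefundable credit, the $1,500 excess in this example would simply be lost; refundability is what makes EITC and ACTC direct payments to low-income filers, and hence what makes the refund timing rules of the PATH Act matter.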
IRS Data on Use of Tax-Time Financial Products Have Some Limitations, but When Combined with Other Available Data Suggest Product Offerings Have Evolved

IRS Data for 2016–2018 Do Not Accurately Reflect Product Use and IRS Has Not Updated Reporting Guidance to Tax Preparers

IRS data on tax-time financial products for 2016–2018 do not accurately reflect product use, and IRS has not updated reporting guidance to tax preparers. IRS data for 2008–2016 and information from industry participants and a consumer advocacy group’s reports suggest that trends in the market for tax-time financial products include the decline of refund anticipation loans and the emergence of refund transfers as the most used product. Industry data also indicate that product fees for refund transfers increased in 2018; multiple other fees can be associated with tax-time products. New tax-time products and product features continue to be introduced.

Data collected by IRS are the primary source of information on the use of tax-time financial products and are used by federal entities, policymakers, regulators, researchers, and consumer groups. However, we identified some limitations in the IRS data related to use of refund anticipation loans, refund advances, and refund transfers.

Tax-Time Financial Products Have Evolved Since 2012

Despite limitations with IRS data on product use by tax year, our analysis of multiyear trends from these data, supplemented with data collected by the National Consumer Law Center and from Securities and Exchange Commission filings, suggests that use of refund anticipation loans declined and the refund advance was introduced, while refund transfers have become the most used tax-time product.

Refund Anticipation Loans

Applications for refund anticipation loans declined sharply from 2010 to 2012, according to IRS data and consumer groups’ reports. According to a 2010 study, the volume of refund anticipation loans peaked in 2002 with 12.7 million taxpayers.
Volume began to decline at a faster rate between 2010 and 2011. According to a report by the National Consumer Law Center and the Consumer Federation of America, banks stopped offering the products in 2012 after the loans came under the scrutiny of federal banking regulators. IRS data continued to show use of refund anticipation loans after 2012, but with banks out of the market for refund anticipation loans, it is unclear what types of financial institutions were offering the loans. Consumer advocates with whom we spoke agree that nonbank lenders such as payday lenders likely offered the loans; however, we were not able to identify any. The consumer advocates, researchers, and industry participants with whom we spoke also were not able to provide us with any current information about these lenders.

The IRS Taxpayer Advocate Office, the Financial Crimes Enforcement Network, and consumer advocates have long raised concerns about refund anticipation loans. For example, in 2007 the National Taxpayer Advocate expressed concerns about how the loans were offered to consumers and whether consumers adequately understood the product. Consumer advocates questioned the high interest rates the loans could carry, how loan fees reduced the EITC benefits taxpayers received, and the ramifications of borrower default. In a 2008 advance notice of proposed rulemaking, IRS and the Department of the Treasury also shared concerns that refund anticipation loans offered tax preparers an incentive to fraudulently inflate refund claims and to market the loans to taxpayers who might not understand the full cost of the product.

Banking regulators raised concerns as well. OCC and FDIC noted consumer protection and safety and soundness risks to banks that offered refund anticipation loans.
FDIC encouraged consumers to have tax refunds directly deposited into their own bank accounts and raised concerns about other options that claimed to speed up a refund for a sizable cost, according to FDIC officials. The Office of Thrift Supervision, which had supervisory authority over federal thrifts at the time, ordered a medium-sized thrift to cease making refund anticipation loans in 2010. In part due to concerns expressed by OCC, national banks stopped offering the loans by 2010, and FDIC-supervised banks stopped offering them by 2012.

An IRS decision also contributed to FDIC enforcement actions on refund anticipation loans. Before 2011, IRS used a tool called the debt indicator that acknowledged whether any of a taxpayer’s refund could be used to pay certain outstanding debts. IRS provided the debt indicator to tax preparers at the time the taxpayer’s return was filed electronically. Banks used the debt indicator in their underwriting tools to help determine a borrower’s likelihood of loan repayment. FDIC determined that without the debt indicator, a bank would have to develop and adopt a more robust underwriting process to make these loans in a safe and sound manner. According to FDIC, IRS’s elimination of the debt indicator created a safety and soundness concern because it removed a key data element used for determining a borrower’s ability to repay. Losing this information increased the risk of loss for lenders and at that time helped inform FDIC’s consent orders with two banks under its supervision to stop offering refund anticipation loans.

In 2011 (the first tax season without the debt indicator), the number of returns with a refund anticipation loan indicator reported by IRS decreased to 1.17 million, from 6.9 million in the prior year. IRS data continue to show use of refund anticipation loans after 2012, albeit at a much lower volume.
For example, in 2016, IRS data show about 468,500 returns with a refund anticipation loan indicator and in 2017 the number appeared to spike to about 1.7 million. However, as discussed earlier, the data for these two years may be misleading because they likely conflate refund anticipation loans with refund advances. In 2018, IRS created a separate reporting category for refund advances and the 2018 data show about 356,000 returns with a refund anticipation loan indicator as of October 2018.

Refund Transfers

Use of refund transfers—which allow for direct deposit of refund checks through temporary accounts that banks open for taxpayers—has far exceeded use of refund anticipation loans and refund advances since 2008, according to IRS data. The number of taxpayers who used a refund transfer more than doubled from 2008 through October 2018 to exceed 21 million. As banks stopped offering refund anticipation loans in 2012, refund transfers (also known as refund anticipation checks) began to increase. Unlike other tax-time financial products generally only available early in the tax season (which generally runs through mid-April), refund transfers are usually available after April. However, IRS data on refund transfers since 2016 have limitations. Although a refund transfer is not required to get a refund advance, a number of industry experts told us that almost all taxpayers who apply for a refund advance also apply for a refund transfer. But because tax preparers could select only one product indicator when reporting use of tax-time financial products, they could report a refund advance or a refund transfer, but not both. As discussed previously, IRS made changes in 2018 to allow preparers to add information about other product use but has not issued explanatory material about the changes.

Refund Advances

In 2016, a few banks began offering refund advances to taxpayers. Refund advances are no-fee, nonrecourse loans.
It is difficult to determine usage trends for this product, although available data indicate an increase in use from 2016 to 2017. First, accurate IRS data on refund advances are not available for 2016 and 2017 because IRS did not provide an option for tax preparers to report refund advance products. As previously discussed, IRS added a separate reporting category for refund advances in 2018. As of October 17, 2018, IRS data show about 1.65 million returns with a refund advance indicator. Second, publicly available data from industry and other sources (consumer advocacy and research organizations) are limited. According to data reported by the National Consumer Law Center, major tax preparation companies facilitated the sale of about 365,000 refund advances in 2016. According to industry sources, use increased to about 1.63 million in 2017, when one of the largest tax preparation companies began offering refund advances. Industry data for 2018 were not yet publicly available at the time of this report. Third, taxpayers often obtain refund advances and refund transfers in tandem. But as discussed previously, IRS reporting indicators did not include an option for reporting use of multiple products until 2018. Use of refund advances also may have increased in 2017 because tax preparers increased the size of the advances. One lender that offers refund advances to tax preparers told us that the driving factor in demand for refund advances was the available loan amount. The maximum advance amount that tax preparers offered taxpayers in 2016 was $750. In 2017, the maximum increased to $1,300. Most industry participants and consumer groups told us that they believe that provisions of the PATH Act requiring IRS to delay issuance of EITC or ACTC returns and associated refunds until after February 15 led to an increase in demand for refund advances. 
They said that the delay puts pressure on taxpayers eligible for EITC or ACTC who depend on getting their refund early in the tax season (a refund advance can help mitigate the impact of this delay). Others stated that an increase in demand due to the PATH Act is possible, but the correlation between the two cannot be determined. One industry provider suggested that increased demand for refund advances also could be the result of marketing by tax preparation companies.

Limited Public Data Suggest Refund Transfer Fees Generally Increased in 2018

Our analysis of publicly available data about product fees for refund transfers showed that fees increased in 2018. In particular, our analysis of fee data collected by the National Consumer Law Center shows that in 2014–2017 refund transfer fees charged by paid tax preparers remained generally unchanged at between $32.95 and $34.95. According to fee information we were given during our undercover visits, paid tax preparers generally charged their customers $39.95 or $49.95 during the 2018 tax filing season for a refund transfer that sometimes included both federal and state tax refunds. In one case the fee was $65, which included a paper check disbursement. Also in 2018, we found that online providers of tax filing services and software charged online filers who prepared their own returns between $12 and $39.99 for a refund transfer. According to our analysis, factors that can affect the fee a taxpayer pays for a refund transfer include the following:

Filing method. Our review of providers’ websites shows that taxpayers who filed their own returns online using preparer software paid an average fee of $31.13 in 2018, which was lower than the $39.95 or $49.95 that paid preparers charged their customers.

Disbursement method. The manner in which the taxpayer chooses to receive a tax refund may affect the fee.
For example, our review of industry literature indicates that one bank set the fee at $29.95 if the refund was disbursed to a prepaid card offered by an affiliate vendor or at $39.95 if the refund was directly deposited or disbursed as a check. Another bank gave tax preparers the option to offer a free refund transfer for disbursement onto a prepaid card, $15 for a direct deposit, or $20 for a paper check.

Incentives offered to tax preparers by banks. Incentives from banks for tax preparers can increase fees for taxpayers. Our review of banks’ promotional materials for tax preparers also indicates that some bank providers offer tax preparers different fee structures for a product—that is, the preparers can charge a higher fee to earn a rebate. For example, one bank offered a tax preparer the option to provide a refund transfer to clients for $39 (which includes an $8 incentive paid to the tax preparer) or for $29 (no incentive payment). On their websites, two banks marketed the no-incentive option to tax preparers as a way to be competitive (by offering low-cost options to their customers).

Using a refund advance. According to a report by the National Consumer Law Center, one bank set a higher fee for a refund transfer if taxpayers also applied for a refund advance. When taxpayers used only a refund transfer, the fee was $29.95 for the federal refund and an additional $9.95 for the state refund, for a total of $39.90. If the taxpayer also applied for a refund advance (a no-fee product), the refund transfer fee was $44.95. Thus, taxpayers paid $5.05 more for a refund transfer if they also received a refund advance.

Our analysis found that, in addition to the product fee, taxpayers may be charged other fees when they use a refund transfer.

State refund transfer. In some cases, the refund transfer fee covered the deposit of a federal and a state refund. In other cases, the fee only covered the federal refund.
In these cases, if the taxpayer received a state refund, the tax preparer charged an additional fee of $10 or $12.

Disbursement services. According to documentation we reviewed, a tax preparer may charge an additional fee of $25 if taxpayers choose to get their refund as a paper check or $7 for a cash transfer to a third party.

Prepaid card use. The long-term use of prepaid cards used to disburse a refund may add to the overall cost of getting a tax product. We reviewed cardholder agreements and fee schedules for several prepaid cards commonly used to disburse funds from a tax refund and found they generally carry monthly fees of about $5. The issuer of the prepaid cards also may charge consumers a fee every time they access cash at automated teller machines, deposit more money onto the card, or do not use the card for a certain period of time.

Software fees. Companies that design tax preparation software may charge a fee or fees associated with the tax product. Taxpayers may pay one or more of these fees when they use a refund transfer to receive their tax refund. The bank deducts these fees from the taxpayer’s refund after receiving funds from IRS or the state taxing authority. The fee categories are a technology fee (up to $18 in our review), a transmission fee that may be a fixed amount (such as $2) or a variable amount, and a processing fee of $6.

Comparative Fee Scenarios

To determine how the fees associated with a refund transfer can affect the total tax preparation fees a provider may charge a taxpayer, we reviewed fee data we collected. We then identified the types and totals of fees generally associated with tax products and created four possible scenarios based on this analysis (see fig. 2). We designed two scenarios with online self-filers (taxpayer uses a refund transfer and taxpayer does not use a refund transfer) and two scenarios with paid preparers performing the filing (taxpayer uses a refund transfer and taxpayer does not use a refund transfer).
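The fee examples and scenarios above reduce to simple arithmetic. A minimal sketch in Python: the individual refund transfer fees are taken from examples cited in this section, while the scenario totals use assumed preparation fees (the $205 figure is the low end of one chain's average fee cited elsewhere in this report, and the exact values used in fig. 2 are not reproduced here).

```python
# Refund transfer fee combinations reported for one bank (from this section)
FEDERAL_RT_FEE = 29.95        # refund transfer fee, federal refund only
STATE_RT_FEE = 9.95           # additional fee for the state refund
RT_FEE_WITH_ADVANCE = 44.95   # refund transfer fee when paired with a refund advance

rt_both_refunds = FEDERAL_RT_FEE + STATE_RT_FEE
extra_with_advance = RT_FEE_WITH_ADVANCE - rt_both_refunds
print(f"federal + state refund transfer: ${rt_both_refunds:.2f}")            # $39.90
print(f"extra cost when also taking an advance: ${extra_with_advance:.2f}")  # $5.05

# Four filing scenarios (preparation fee figures are assumptions for
# illustration, based on averages cited elsewhere in this report)
PAID_PREP_FEE = 205.00   # low end of one chain's average preparation fee
RT_FEE_PAID = 39.95      # common refund transfer fee at paid preparers in 2018
RT_FEE_ONLINE = 31.13    # average refund transfer fee for online self-filers

scenarios = {
    "online self-filer, no refund transfer": 0.00,
    "online self-filer, with refund transfer": RT_FEE_ONLINE,
    "paid preparer, no refund transfer": PAID_PREP_FEE,
    "paid preparer, with refund transfer": PAID_PREP_FEE + RT_FEE_PAID,
}
for name, total in scenarios.items():
    print(f"{name}: ${total:.2f}")
```

The sketch illustrates why deducting fees from the refund can obscure total cost: the paid-preparer-plus-transfer scenario sums to roughly $245 before the taxpayer sees any of the refund.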
Tax-Time Financial Products Have Continued to Evolve Since 2016

Recent and emerging developments in the market for tax-time financial products include higher loan amounts and new products, according to our analysis of selected tax preparers’ websites and marketing materials, and information we were given during our undercover visits. For example, in 2018 refund advances became available to online filers. They previously were offered only to taxpayers who obtained paid tax preparation services in person (at a “storefront”). The maximum amount for a refund advance has continued to increase. In 2016, the maximum loan amount available to a taxpayer was $750. In 2018, the maximum loan amount available was $3,250, and for 2019, one preparer has offered an advance of up to $3,500. One industry participant told us that the industry in general is in a race to increase borrowing limits to remain competitive and attract more customers. In 2018, banks offered a new product that combines the features of a refund anticipation loan and a refund advance. The product allows the taxpayer to apply for a refund advance (up to a fixed amount) with no fee or finance charges, the option to apply for an additional loan with a fee (similar to a refund anticipation loan), or a combination of the two products known as a hybrid. For 2018, two banks offered this additional loan (not to exceed $1,000) at an annual percentage rate of 29.9 percent. For 2019, one bank offered taxpayers the option of a no-fee advance of up to $1,000, or an interest-bearing loan of $2,000, $3,000, or $5,000 based on the expected refund. The interest-bearing loans would carry an annual percentage rate of 26.07 percent in addition to a fee of $30–$75, depending on the loan amount. Also for 2019, one national tax preparation company has offered the option of a no-fee advance of up to $3,500 or a fee-based advance of up to $7,000, which would carry an annual percentage rate of 35.9 percent.
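To put these rates in context, the dollar cost of an interest-bearing advance can be approximated from the APR, the flat fee, and the days until the refund repays the loan. A rough sketch, where the 21-day term (the typical wait for an electronically filed refund) and the fee assigned to each loan amount are assumptions — the report cites only a $30–$75 fee range, not a per-amount schedule:

```python
# Rough cost sketch for the interest-bearing refund advances described above.
# The 21-day term and the per-amount fee assignments are assumptions.

APR = 0.2607      # annual percentage rate cited for one bank's 2019 loans
TERM_DAYS = 21    # assumed days until the refund repays the loan

def advance_cost(principal, flat_fee):
    """Approximate simple interest over the term, plus the flat fee."""
    interest = principal * APR * TERM_DAYS / 365
    return interest + flat_fee

for principal, fee in [(2000, 30), (3000, 50), (5000, 75)]:
    print(f"${principal:,} advance: about ${advance_cost(principal, fee):.2f} total cost")
```

Under these assumptions, a $2,000 advance repaid in three weeks would cost about $60 in interest and fees, which helps explain why the no-fee advances discussed earlier dominate the market.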
In addition, demand for refund transfers has increased among online self-filers. As more people file their own tax returns by using web-based software, the number of refund transfers used by self-filers may continue to increase. Because few tax preparers offer refund advances to online self-filers, taxpayers are still more likely to get a refund advance from a paid tax preparer. Finally, issues relating to the applicability of TILA disclosure requirements to refund transfers could affect the market for tax-time products. According to representatives of two consumer advocacy organizations, deferment of tax preparation fees until the refund is received constitutes an extension of credit; therefore, refund transfers should be treated as loan products. Tax preparers and a policy research and education organization with whom we met do not believe that refund transfer fees meet the definition of a loan. Should regulators decide that a refund transfer constitutes an extension of credit, and would therefore be a credit transaction with a finance charge, refund transfers would become subject to provisions of TILA. These changes could affect taxpayers’ access to this product as well as product pricing. According to Securities and Exchange Commission filings of some tax preparers, if refund transfers were successfully characterized as credit transactions, the additional requirements and costs could limit their ability to offer these products to clients. Refund advances were promoted by providers as a fee-free, interest-free credit product, and thus TILA disclosure requirements are generally not considered applicable for them. However, new interest-bearing credit products announced for 2019 may be subject to consumer protection regulations.
Lower-Income and Some Minority Taxpayers Were More Likely to Use Tax-Time Financial Products for Various Reasons

Our Analysis Found That Lower-Income, African-American, and Single Taxpayers Were More Likely to Use Tax-Time Financial Products

Using FDIC data, we conducted a multivariate regression analysis to examine the relationship between economic and demographic variables and tax-time financial product use. This approach allowed us to test the significance of the relationships between each variable and the likelihood of using tax-time financial products, while controlling for other factors.

Income-Related Characteristics

Lower-income households were more likely to use tax-time financial products than higher-income households, particularly when they used paid tax preparers to file their taxes, according to our analysis of 2017 FDIC data. More specifically, we estimated that households with incomes between $20,000 and $39,999 were more likely to use tax-time financial products to receive their tax refunds more quickly through paid tax preparers than households with incomes of $60,000 or more. For example, we estimated that households with incomes between $20,000 and $29,999 were 34 percent more likely to use tax-time financial products than households with incomes of $60,000 or more; and households with incomes between $30,000 and $39,999 were 61 percent more likely to use the products than households with income of $60,000 or more. Moreover, our analysis of FDIC data suggests that households that received EITC were more likely to use tax-time financial products, compared to households that did not receive EITC. Our results also suggest that wealth, as measured by homeownership, was associated with the household decision whether to use tax-time financial products. Homeowners were 34 percent less likely to use tax-time financial products than non-homeowners, controlling for other factors.
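The kind of multivariate analysis described above can be sketched with a logistic regression, whose fitted coefficients convert to odds ratios that underlie statements such as "34 percent more likely, controlling for other factors." The sketch below uses synthetic data (not the FDIC survey data) with planted effects, and reports odds ratios, a measure closely related to (but not identical with) the relative likelihoods GAO estimated:

```python
# Illustrative multivariate logistic regression on SYNTHETIC data.
# The variable names and planted coefficients are assumptions for
# illustration only; they are not estimates from the FDIC survey.
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# Synthetic household characteristics
lower_income = rng.integers(0, 2, n)   # 1 = income in a lower bracket
homeowner = rng.integers(0, 2, n)      # 1 = owns home

# Planted "true" effects: lower income raises product use, homeownership lowers it
logit = -2.0 + 0.5 * lower_income - 0.4 * homeowner
used_product = rng.random(n) < 1 / (1 + np.exp(-logit))

# Fit logistic regression by Newton-Raphson
X = np.column_stack([np.ones(n), lower_income, homeowner])
beta = np.zeros(3)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    grad = X.T @ (used_product - p)            # gradient of the log-likelihood
    hess = (X * (p * (1 - p))[:, None]).T @ X  # observed information matrix
    beta += np.linalg.solve(hess, grad)

odds_ratios = np.exp(beta[1:])
print(f"odds ratio, lower income: {odds_ratios[0]:.2f}")  # > 1: more likely
print(f"odds ratio, homeowner:    {odds_ratios[1]:.2f}")  # < 1: less likely
```

Because both covariates enter the model together, each odds ratio is an estimate of one characteristic's association with product use while holding the other fixed, which is the "controlling for other factors" language used in this section.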
Other Characteristics, Including Race, Age, and Household Head

Households of some minority groups were more likely to use tax-time financial products when filing tax returns than white households. For example, using FDIC data, we estimated that African-American households were 36 percent more likely to use tax-time financial products than white households after controlling for other factors. Other research (a 2013 study) found that African Americans were more likely to use refund anticipation loans than white individuals. According to our analysis of 2016 IRS data, which included information about tax-time financial product use and locality, use of tax-time financial products was more concentrated in some areas of the South and the West (see fig. 3). Our analysis of FDIC data further suggests that other characteristics associated with use of tax-time financial products include age and household type. For example, households headed by younger persons (15–39 years old) were more than twice as likely to use the products as households headed by older persons (60 or older), controlling for other factors. Households headed by single adults with families were more likely to use tax-time financial products than households headed by married couples. For example, according to our analysis of FDIC data, we estimated that households headed by unmarried females with families were 76 percent more likely to use tax-time financial products than households headed by married couples, controlling for other factors. Using IRS data from 2016, we found that a higher proportion of product users filed as unmarried heads of household, compared to the general tax filing population. Among those who used tax-time financial products, about 39 percent filed as single, 22 percent filed as married, and 37 percent as unmarried heads of household.
Reasons for Using Refund Products Include Obtaining Cash Faster and Not Paying Tax Preparation Fees Up Front

Reasons to use tax-time financial products include more quickly obtaining cash from the expected tax refund, not having to pay tax preparation fees out of pocket, and obtaining cash more cheaply than with alternative short-term funding options, according to our review of federal and industry reports.

Quick Access

Taxpayers generally might have to wait weeks for refunds from IRS: Taxpayers who file paper returns can expect to receive their refund about 6–8 weeks after the date on which IRS receives their return, according to IRS guidance. Taxpayers who file electronically generally can expect to receive their refunds within 21 days, or faster if they opt to have refunds deposited directly into their bank accounts. As previously discussed, IRS must delay payments of refunds on which EITC, ACTC, or both are claimed until at least February 15 of each year. Effectively, the refunds might not be disbursed to bank accounts (or prepaid cards) of tax filers until the end of the month. In contrast, users of tax-time products can obtain cash very quickly. For example, refund advance recipients generally receive loan funds within 24 hours of applying, and in some instances within the same hour they apply, according to selected tax preparer documents and websites that we reviewed. Refund transfer products also allow those who do not have bank accounts the option of directly depositing refunds into a temporary account instead of waiting longer to receive a paper check. According to our analysis of IRS data from 2016, tax-time financial product users were more likely than other taxpayers to receive their tax refunds by direct deposit. Taxpayers may use tax-time financial products because they need cash quickly. Studies we reviewed found that product recipients tend to have pressing financial obligations.
One study’s review of available literature from 2010 found that product recipients tend to live paycheck-to-paycheck or lack sufficient savings to cover prior, current, or future spending. Another study published in 2010 found that recipients use the products to pay for pressing financial obligations, both expected and unexpected, and for their tax preparation. According to the study, many users of tax-time products become delinquent on rent, utilities, and other expenses during the winter with the expectation that they will be able to pay obligations after receiving tax refunds. As one study found, the annual tax refund represents the largest single cash infusion received all year by about 40 percent of checking account holders.

Tax Preparation Fees Not Paid Out of Pocket

Lower-income taxpayers also use tax-time financial products to defer payment of fees related to tax return preparation, according to federal government and industry reports that we reviewed. Tax preparation fees vary greatly based on the tax forms used, including the EITC worksheet. One of the largest national tax preparation chains reported that its average tax preparation fee was between $205 and $240 in 2017.

Free Filing Services

The Internal Revenue Service (IRS) offers the following free filing services:

Fillable forms. IRS offers forms that can be completed online and electronically submitted to IRS. The forms are available without age, income, or residency restrictions.

Free file software. IRS, in partnership with the Free File Alliance (members of the tax software industry), provides free online filing options to eligible taxpayers. Twelve leading tax software providers make a version of their products available exclusively at IRS.gov for taxpayers with an adjusted gross income up to $66,000 (in 2018).

Volunteer Income Tax Assistance.
The program provides free basic income tax preparation with electronic filing by IRS-certified volunteers to qualified individuals, including to persons who earn $55,000 or less, have disabilities, or have limited proficiency in English.

Tax Counseling for the Elderly. The program provides free tax preparation by IRS-certified volunteers to all taxpayers, particularly those 60 or older. Program volunteers specialize in pension and retirement-related issues unique to seniors.

Consumers may perceive any costs associated with tax-time financial products and tax return preparation as lower than they actually may be because the costs are not paid out of pocket. Fees for the products and tax return preparation are deducted from the refund before it reaches the consumer. In general, studies have found that the transparency of a payment method affected the payer’s willingness to spend. One consumer advocacy organization representative posited that paying for tax-time financial products and tax preparation from a refund makes consumers less sensitive to the real cost of tax-time products and preparation services. Instead of using tax-time financial products to defer payment of tax preparation fees, lower-income taxpayers can access free filing services through several IRS programs (see sidebar). However, these options do not allow taxpayers to use tax-time financial products to access refunds faster. IRS estimates that about 70 percent of taxpayers are eligible to access its free filing software, and we estimated about 3 percent of taxpayers use this service. According to IRS officials, while IRS does not have a marketing budget to promote the free file programs, the predominant reason so few taxpayers use them is because there are many free tax preparation options on the market, such as tax preparation software.
Higher Refunds and Tax Preparation Assistance

Taxpayers also may use paid tax preparers because they do not think they can fill out tax returns on their own, believe that preparers will help them receive higher refunds, or both, according to federal government and industry reports we reviewed. For taxpayers who did not use tax-time financial products, we did not find a clear association between paid tax preparation and higher average refunds. On the other hand, for taxpayers who used tax-time financial products, we found that average tax refunds were higher for taxpayers who filed through paid tax preparers than for taxpayers who self-filed online (see table 2). According to IRS data, nearly all taxpayers who used refund loan products filed their taxes through paid tax preparers, as refund advances were not available online until the 2018 tax filing season. There may be various reasons for the association between higher refunds, paid tax preparation, and product use. Those who use tax-time financial products tend to be eligible for tax credits such as EITC, which can increase the size of tax refunds. Fifty-four percent of EITC claimants used a paid preparer. However, a 2017 study found that the combination of paid tax preparation and tax-time financial product use was associated with relatively high incorrect tax payments (specifically, overpayments of EITC compared to online self-filing and product use or no product use). Furthermore, our analysis of IRS data found that taxpayers who used tax-time financial products received higher refunds on average than those who did not use tax-time financial products, regardless of tax filing method—although other factors might explain this association. For example, taxpayers who have high refunds have a greater incentive to use the products than taxpayers who have relatively small refunds or owe taxes.
Tax-Time Financial Products Cheaper Than Alternatives

For lower-income taxpayers, tax-time products generally provide more cash at a lower cost than other small-dollar loan alternatives such as payday loans, auto title loans, and pawnshop loans, according to our review of federal government and industry reports. The amounts of alternative loan products are based on the value of the collateral the consumer provides. Average loan amounts are $150 for pawnshops, about $500 for payday loans, and under $1,000 for automobile title loans, according to industry statistics and CFPB and other studies. In contrast, refund advances were offered for up to $3,250 for the 2018 tax filing season. Furthermore, the alternative products generally include fees, unlike refund advances. For example, fees for payday loans generally range from $10 to $30 per $100 borrowed. Automobile title lenders generally charge a fixed price per $100 borrowed, with a common fee limit of 25 percent of the loan per month. In contrast, refund advances are offered at no cost to the consumer. Tax-time financial products also may be easier to access because, unlike alternative loans, they generally can be obtained without regard to credit history. However, tax-time financial products generally are only available during tax season. Loans provided by financial technology companies (often called fintech firms) are another source of short-term financing. However, fintech firms generally provide much larger loan amounts than tax-time financial products, and include fees, unlike refund advances.

Providers We Reviewed Generally Disclosed Required Information but Some Disclosure Practices May Hinder Consumer Decision-Making

The federal banking regulators oversee banks that offer tax-time financial products and IRS sets standards of practice for certain service providers (including some tax preparers).
While our nongeneralizable review found that selected banks and tax preparers generally followed existing OCC and IRS disclosure requirements, some tax preparers’ disclosure practices may present challenges for consumers trying to compare product options.

Industry Participants Are Subject to Varying Levels of Oversight

Banks and Settlement Service Providers

FDIC, the Federal Reserve, or OCC are responsible for the safety and soundness supervision of banks within their authority (which offer tax-time financial products) and may have supervisory authority over third-party service providers (which provide settlement services). We identified five banks that partnered with several national tax preparation chains in recent years to offer tax-time financial products (refund transfers and refund advances). Of the five banks, FDIC supervised one medium-sized and one small bank, OCC supervised two medium-sized banks, and the Federal Reserve supervised one medium-sized bank. As previously discussed, FDIC, the Federal Reserve, and OCC are to conduct full-scope, on-site risk-management examinations of each of their supervised banks at least once in each 12–18 month period. FDIC officials told us that the agency’s regular safety and soundness examinations may include an examination of the bank’s tax-time financial product offerings. OCC officials told us that they examine tax-time financial products in every annual examination of the banks they supervise that offer these products. Because each of the five banks has total assets of less than $10 billion, the three regulators also are responsible for enforcing compliance with federal consumer financial laws (such as TILA and the Electronic Fund Transfer Act) that govern disclosure requirements for certain tax-time financial products. Officials from the regulators told us that they received few complaints about tax-time financial products offered by their supervised banks.
We discuss the disclosure requirements and compliance with the requirements in more detail later in this section. The regulators’ consumer compliance examiners also may review a bank’s tax-time financial products—if, for example, a bank offers a new product or there are a number of consumer complaints about a current product. Examiners employ a risk-focused approach with a focus on consumer harm in selecting products to evaluate for compliance with applicable consumer laws and regulations. Furthermore, compliance examiners may decide, based on the potential for consumer harm and a bank’s compliance management system, that there is enough residual risk to scope the product into the examination. FDIC officials said that a bank with a lot of activity in the market for tax-time financial products would have to assure examiners that it had performed appropriate due diligence. Regulators also can take other oversight actions, ranging from enforcement to raising awareness among consumers. In 2015, CFPB took an enforcement action, along with the Navajo Nation, to ban an owner of four tax preparation franchises from the market and levy civil penalties for understating refund anticipation loan rates and deceiving customers about the status of their tax refunds. Our search of CFPB’s complaint database did not identify any consumer complaints on tax-time financial products. CFPB published a blog post in February 2018 that describes the different tax-time financial product options and the process for obtaining them, and cautions consumers to consider all fees, charges, and timing associated with the products. FTC staff we interviewed told us that supervision authority over many financial services providers has been given to CFPB, but that FTC still has the authority to enforce many financial statutes and rules, including rules administered by CFPB. 
FTC brought an enforcement action in 2017 against an online tax preparation provider alleging that it failed to secure consumer accounts. FTC officials also told us that, while they received numerous complaints on tax-related issues, FTC’s complaint database does not separately classify complaints based exclusively on tax-time financial products. FTC also has issued guidance to educate consumers regarding tax-related scams and other consumer protection issues that arise during tax time, and to businesses, including tax professionals, to help them detect cyber threats. FTC also co-sponsors a series of educational events for consumers and businesses surrounding Tax Identity Theft Awareness Week.

Software Developers

Software companies we interviewed stated that they are subject to IRS regulations relating to electronic filing of tax returns. Software developers provide tax software to tax preparers so that they may file tax returns electronically and assist taxpayers in obtaining tax-time financial products. One software company told us that this involves working with IRS to ensure that returns can be electronically submitted, IRS can receive data, and the software is in compliance with IRS’s required data schemas.

Tax Return Preparers

IRS officials said that IRS does not monitor or have direct oversight authority over tax-time financial products, but requires some paid tax preparers to meet standards of practice or other requirements. The extent to which IRS has oversight over paid preparers depends partly on whether the preparer is a tax practitioner or unenrolled preparer. Tax practitioners are subject to regulations (Circular 230) that establish standards of practice. For example, practitioners must return tax records to clients, exercise due diligence in preparing tax returns, and submit records and requested information to IRS in a timely manner. IRS officials told us that they monitor the suitability of these practitioners and their adherence to the rules.
Additionally, certain tax practitioners known as enrolled agents generally are required to pass a three-part examination and complete annual continuing education, while attorneys and certified public accountants are licensed by states but are still subject to Circular 230 standards of practice if they represent taxpayers before IRS. Alternatively, unenrolled preparers—the remainder of the paid preparer population and the majority of paid preparers—generally are not subject to these requirements. In 2011, IRS issued final regulations to establish a new class of registered tax return preparers to support tax professionals, increase confidence in the tax system, and increase taxpayer compliance. However, the U.S. District Court for the District of Columbia ruled in 2013 and the U.S. Court of Appeals for the District of Columbia Circuit affirmed in 2014 that IRS lacked sufficient authority to regulate all tax preparers. IRS officials also told us that all authorized IRS e-file providers have to follow certain requirements to be able to file tax returns electronically.
Banks and Tax Preparers in Our Review Generally Followed Guidance for Disclosing Product Fees, but All Related Fees Were Not Always Disclosed Clearly or Early in Process
We found selected authorized IRS e-file providers generally followed the requirements established by IRS on the disclosure of product fees, and banks generally followed the disclosure guidance relating to tax-time financial products issued by OCC. (We conducted nongeneralizeable reviews of website content, industry documents, and disclosures made during our undercover visits.) Two of the five banks we reviewed are regulated by OCC. One of the two FDIC-supervised banks and the Federal Reserve-supervised bank told us that they voluntarily follow OCC guidance. 
More specifically, IRS established the following disclosure requirements for authorized IRS e-file providers, generally known as EROs, that relate to tax-time financial products:
- EROs must obtain taxpayers’ written consent before disclosing any tax return information to other parties in relation to an application for a tax product.
- EROs must ensure taxpayers understand that if they use a tax product, the refund will be sent to the bank and not to them.
- If taxpayers choose to use a fee-based loan, EROs must advise that the product is an interest-bearing loan and not an expedited refund.
- EROs must advise taxpayers that the bank may charge them interest, fees, or both, in the case of any shortages on the refund.
- EROs also must disclose all deductions to be made from the expected refund and the net amount of the refund.
In 2015, OCC issued risk-management guidance for national banks that offer tax refund-related products. This guidance advises that banks should specify to customers, as applicable:
- the total cost of the tax product, separately from the tax preparation cost;
- that total costs will be deducted from and reduce the refund amount;
- that tax refunds can be sent directly to the taxpayer without the additional costs of a tax product;
- that customers with deposit accounts can receive their refund without incurring fees through direct deposit in about the same time as it would take to receive a tax refund-related product; and
- the ongoing periodic maintenance and transaction fees related to any product intended for long-term use.
In addition, OCC’s guidance establishes that banks should clearly disclose all material aspects of the product in writing before the consumer applies or pays any fees for a tax-time financial product. 
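The deduction disclosure IRS requires of EROs amounts to simple arithmetic on the expected refund. The sketch below illustrates it; all dollar amounts are hypothetical, chosen only for this example, and do not reflect any provider's actual fees:

```python
# Hypothetical illustration of the deductions an ERO discloses before a
# refund transfer is sold; every amount below is invented for this sketch.
expected_refund = 2000.00      # refund IRS is expected to issue
tax_prep_fee = 250.00          # preparer's fee, deducted from the refund
refund_transfer_fee = 40.00    # bank's fee for the temporary account

total_deductions = tax_prep_fee + refund_transfer_fee
net_refund = expected_refund - total_deductions  # amount the taxpayer receives

print(f"Total deductions: ${total_deductions:.2f}")  # Total deductions: $290.00
print(f"Net refund:       ${net_refund:.2f}")        # Net refund:       $1710.00
```

In this hypothetical, the taxpayer receives $1,710 of a $2,000 refund; the ERO requirement is that both the itemized deductions and the $1,710 net amount be disclosed.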
Also, representatives of the American Coalition for Taxpayer Rights, a group representing the leading tax preparation, tax software, and bank providers, told us that its members signed a joint statement with attorneys general from six states on disclosure practices for refund transfers. The member providers agreed to explain to taxpayers the different options for filing and receiving a tax refund, including no-cost options, and the associated costs and features of each option. The providers also agreed to disclose the optional nature of the products, the timing of the refund, and to present the disclosures in a clear and conspicuous manner understandable by a reasonable consumer. Our nongeneralizeable review of documents received from selected banks and tax preparers found disclosures generally followed OCC guidance or IRS requirements, respectively. However, our review of these documents and selected tax preparer websites also found—and our undercover visits of selected tax preparers suggested—that the level of transparency on product fees varied and product fees and information were not always clearly disclosed. Bank documents were more likely than information provided by paid preparers (in person or online) to include more disclosures about the fees and terms of tax-time financial products. For example, of the 12 bank documents we reviewed, all disclosed that funds would be sent to the bank if the taxpayer used a tax product. Almost all the bank documents disclosed the fees associated with the product and all disclosed that the fees would be deducted from the refund. In contrast, while written disclosure is not required, less than one third of ERO documents disclosed that the taxpayer using a tax-time financial product would receive funds from the bank instead of IRS. However, almost all the documents are presented to taxpayers after returns have been prepared and preparers have determined that taxpayers qualified for a product. 
The timing of when a tax preparer makes these disclosures would pose a challenge for taxpayers looking to compare prices for different providers. That is, they would not learn of the total fees—partly because the paid preparer could not determine the amount of some tax preparation fees until well into the preparation of the tax return. A taxpayer trying to determine the cost of using a tax refund to pay for online tax preparation services would be able to compare the prices of only two of the eight online providers we reviewed. The remaining six did not disclose this fee in a prominent way—with some disclosures made in small print or requiring navigation through several pages after the product page—or at all. A taxpayer choosing to file taxes using the services of a paid tax preparer in a brick-and-mortar location, and opting to use the refund to pay for tax preparation fees, would be unlikely to be able to compare prices among different providers. For example, during six of our undercover visits, our investigators explicitly requested literature on product fees. However, the preparers stated that they did not have the literature available or only provided us with business cards and other promotional material. Our analysis shows that providers do not consistently explain products or disclose fees to taxpayers. For example, providers told us, and industry documents show, that a refund transfer is not required to get a refund advance. However, during our site visits, tax preparers tied the use of a refund transfer to a refund advance four out of five times. In two of these cases, the tax preparer included the fee for a refund transfer as part of processing an advance product, while in another two cases the tax preparer said that a refund transfer was required with the advance. Also, during our site visits, three of the nine tax preparers did not disclose the cost of a refund transfer. 
Appendix III provides more information on our analysis of bank and tax preparer disclosure practices. According to industry participants, only taxpayers expecting a refund can qualify for a tax product; consequently, the tax preparer generally cannot determine whether the taxpayer qualifies until after the tax return is completed. Once this is determined, the tax preparer must request the taxpayer’s consent to offer a tax product. EROs with whom we met told us they may disclose fee information at various points throughout the process of tax preparation, and do so verbally or through their in-store computer interface. Bank disclosures are provided to the taxpayer before the product application has been submitted. Some researchers and representatives from consumer advocacy organizations with whom we met were concerned about the timing of disclosures of tax-time financial product fees. Consumer advocates said disclosures given to taxpayers were inadequate, unhelpful, or timed in such a way as to prevent meaningful comparison shopping. Specifically, one consumer advocacy organization said that taxpayers they serve do not understand the fees associated with filing through preparers. Representatives from another consumer advocacy organization said that taxpayers do not know the total cost for tax-related financial products and services until they already have taken steps to file their returns. In its 2017 Report to Congress, the National Taxpayer Advocate recommended that IRS require all e-file participants offering tax-refund financial products to provide a standard “truth-in-lending” statement to help taxpayers better understand the terms of the refund anticipation loan product. IRS did not adopt the National Taxpayer Advocate’s recommendation but agreed that e-file providers should be transparent about the costs associated with the loan products offered to taxpayers as part of the return preparation process. 
As previously discussed, courts have determined that IRS does not have sufficient authority to regulate individuals who are solely tax preparers and not licensed by IRS—in effect, the majority of the paid preparer population. Previously, we asked Congress to consider legislation granting IRS the authority to regulate paid tax preparers, if it agreed that significant paid preparer errors existed. As of March 2019, this matter for congressional consideration remained open. The lack of consistency about the timing of fee disclosures for tax-time financial products may add to the rationale for Congress to consider regulating preparers. Such statutory authority could allow IRS to require that tax preparers make tax-time financial product disclosures or ensure meaningful transparency in the sale of the products.
Conclusions
For lower-income taxpayers with pressing financial obligations, tax-time financial products can offer an alternative to higher-cost short-term products such as payday loans. Taxpayers can purchase tax-time financial products from many tax preparers; however, according to our review of selected tax preparers and banks, the price and associated fees of these products can vary. And disclosure practices by some paid tax preparers may pose challenges for consumers looking to compare prices for different providers. IRS is an essential source for data on tax-time financial products, but to date IRS has offered limited options to tax preparers for accurately reporting usage of all available tax-time products. Furthermore, IRS has not informed tax preparers about changes made in reporting options and has not informed users of IRS’s product data about known issues with the data. Consequently, data on product usage are not reliable. 
Improving the quality of data collected on these products would help ensure that federal agencies, policymakers, regulators, consumer advocacy groups, and researchers have quality information to report on tax policy and consumer protection issues and inform their decision-making.
Recommendations for Executive Action
We are making a total of two recommendations to IRS.
The Commissioner of Internal Revenue Service should communicate data issues regarding the refund anticipation loan indicators for tax years 2016 and 2017 and the refund transfer indicators since tax year 2016—for example, by attaching explanatory material to the dataset. (Recommendation 1)
The Commissioner of Internal Revenue Service should improve the quality of tax-time financial product data collected; for example, by allowing authorized e-file providers to indicate more than one type of tax-time financial product for each return or by informing tax preparers of the addition of new product definitions and instructions on how to accurately code the products. (Recommendation 2)
Agency Comments and Our Evaluation
We provided a draft of this report to IRS, FDIC, Federal Reserve, OCC, CFPB, and FTC for review and comment. IRS provided written comments, which are reproduced in appendix IV and discussed below. FDIC, Federal Reserve, OCC, CFPB, and FTC provided technical comments, which we incorporated as appropriate. In its comments, IRS concurred with both recommendations, and described how it planned to address them. In response to our first recommendation, IRS stated that it plans to provide the appropriate notations with the datasets. In response to our second recommendation, IRS stated that it plans to pursue programming changes and clarify instructions for tax return preparers to promote accurate coding of refund-related products. We believe that these actions, if implemented, would address our recommendations and improve the quality of data IRS reports on these products. 
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees and IRS, FDIC, Federal Reserve, OCC, and FTC. This report will also be available at no charge on our website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or clementsm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. Appendix I: Objectives, Scope, and Methodology This report (1) describes trends in the market for tax-time financial products and product fees and examines the reliability of IRS data on these trends, (2) describes characteristics of those who use tax-time financial products and factors that influence the decision to obtain the products, and (3) describes regulatory oversight of industry participants and the disclosure of information on product fees and terms. To examine trends in the use of tax-time financial products, we used 2008–2018 Internal Revenue Service (IRS) data compiled from tax filings to determine the types and use of these products. We assessed the reliability of these data by interviewing IRS officials about the controls and quality assurance practices they used to compile these data. We determined the data alone did not provide a reliable count of refund transfers, refund anticipation loans, or refund advances in 2016, 2017, and 2018, but were adequate to suggest general trends when supplemented with other information. 
To supplement the IRS data, we collected information from reports issued by the National Consumer Law Center, reviewed Securities and Exchange Commission filings for two selected tax preparers, and interviewed representatives from National Consumer Law Center and both tax preparers on the offerings of tax-time financial products. We selected these preparers because they are major providers of tax preparation services and tax products. To identify and review trends in product offerings, we reviewed the websites, promotional materials, and other industry literature including Securities and Exchange Commission filings of a nongeneralizeable selection of four providers of online tax preparation services, three tax preparers with physical locations that also offer services online, and four banks. We also discussed changes in the market and product offerings with nine of the industry providers with whom we met. We accessed provider websites before and during the 2018 tax season. The tax preparation firms were selected because they are national tax preparation chains, and the five banks were selected because they partnered with the national tax preparation chains and major developers of tax preparation software. In addition, we reviewed studies related to these products published by GAO, federal agencies, four consumer advocacy and research groups, and two academic researchers. We used these studies primarily to corroborate findings from our data analysis. We focused on studies from 2010 and later; however, we also reviewed an older report to gain a greater understanding of how the market for tax-time financial products evolved. We identified these studies through expert recommendations and citations in studies. To examine trends in fees for tax-time financial products, we collected fee-related information from several different sources (because of limited publicly available industry data). 
This information cannot be used to generalize our findings to the retail tax preparation industry.
Product fees. For 2018, we collected information on product fees from six paid tax preparers and four banks. For tax years 2014 to 2017, we used product fee information as reported by the National Consumer Law Center. For 2018, we also reviewed fee data from six providers of online tax preparation software, two that provide services in person and online, and four that only provide services online. We selected these providers after conducting internet searches and reviewing reports by consumer advocates and federal agencies. Data elements included fees for refund transfers and refund advances. For 2018, data elements also included the dollar amount for the incentives banks offered tax preparers for each refund transfer sold.
Ancillary product fees. We collected information on ancillary product fees from four tax preparers, four banks, and three software developers for tax years 2017 and 2018. Data elements included fees for disbursement methods such as prepaid cards and paper checks and other charges related to the use of a tax-time financial product such as technology and transmission fees.
Tax preparation fees. We collected information on tax preparation fees from eight tax preparers with physical locations and eight online providers of tax preparation services for 2018. Data elements included fees for federal and state filing.
Aggregate fees. We collected aggregate tax-time financial product, ancillary product, and tax preparation fee information from studies issued by consumer protection advocates.
We collected the above information from websites, advertising materials, and public filings with the Securities and Exchange Commission of tax preparers, banks, and software developers. 
To identify some of the demographic and economic characteristics of product users, we used data from the Bureau of the Census and the Federal Deposit Insurance Corporation (FDIC) from 2011, 2013, 2015, and 2017 to conduct a multivariate regression analysis to determine the influence of individual characteristics on the decision to obtain a product. We statistically controlled for various income, education, and demographic factors. While the FDIC data contain a rich set of demographic and economic variables, they include limited data on characteristics specifically related to tax filing. To identify specific tax-filing characteristics associated with product use, we also used a probability sample of data from IRS from the 2014, 2015, and 2016 tax years to calculate the percentages of taxpayers who used tax-time financial products according to various tax-filing characteristics, including tax filing status and tax filing method. We also used the sample data to calculate the percentage of taxpayers who used free filing services, including free file software, programs, and fillable forms. We reviewed documentation on and conducted testing of the data we used and determined they were sufficiently reliable for reporting economic, demographic, and tax-filing characteristics associated with product use. For more detailed information on our analysis of characteristics associated with tax-time financial product use, see appendix II. To better understand user characteristics associated with the decision to obtain a tax-time financial product identified by our analysis, we reviewed relevant federal and industry reports on the financial needs of individuals with characteristics similar to taxpayers who obtained these products. We focused on reports from 2010 and later. We also reviewed our prior studies and studies from the Consumer Financial Protection Bureau (CFPB) on alternative credit products and compared their features and fees to those of tax-time financial products. 
In addition, we interviewed representatives from consumer groups, four Low-Income Taxpayer Clinics, and IRS’s Taxpayer Advocate Service to obtain their perspectives on characteristics associated with tax-time financial product users. To describe the regulatory oversight of industry participants associated with tax-time financial products, we reviewed relevant federal laws and regulations, and reports and guidance documents from IRS and federal regulators, including the CFPB, FDIC, the Board of Governors of the Federal Reserve System, Office of the Comptroller of the Currency (OCC), and Federal Trade Commission. We inquired about consumer complaint data related to tax-time financial products at the federal regulators and interviewed officials from the federal agencies and representatives from five tax preparation providers, five banks and bank affiliates such as settlement service providers, four consumer advocacy organizations, three software developers, two researchers, one provider of alternative financial services, and one industry group to gain their perspectives on the benefits and risks of the tax-time financial products and how any related concerns were being addressed. The tax preparation firms were selected because they are national tax preparation chains, and the five banks and three software developers were selected because they partnered with the national tax preparation chains. The four consumer advocacy organizations, two researchers, alternative financial service provider, and industry group were selected for their experience and to provide a range of perspectives. To review how product terms and fees are disclosed by tax preparers, in February 2018 GAO investigators acting in an undercover capacity visited a nongeneralizeable sample of nine randomly selected tax preparers in Washington, D.C., Maryland, and Virginia to inquire about tax-time financial products. We selected the two states and Washington, D.C. 
to ensure a mixture of state and local laws governing the products and providers. From the two states and Washington, D.C., we selected one metropolitan statistical area based on the concentration of product users and the proximity to lower-income households. We randomly selected three individual tax preparers in each of the three metropolitan statistical areas to visit, based on proximity to taxpayers in lower-income households and to ensure a mixture of urban and rural communities and company sizes. We visited offices of large tax preparation chains and single-office tax preparation businesses. Results cannot be used to generalize our findings to the retail tax preparation industry. Our investigators posed as taxpayers seeking tax preparation services who wanted to pay for the tax preparation fees with the expected refund or obtain an advance based on their anticipated tax refund. They requested available documents associated with tax preparation, refund advance and refund transfer products, and different disbursement options and fees. Because GAO investigators did not experience the tax preparation or the product application process, we were not able to assess the timing of any disclosures typically made after the tax return preparation process would begin. In addition, we received some consumer-facing disclosures and product agreements that were typically provided during the product application process from two tax preparers and two banks. We also conducted a content analysis of websites of eight selected tax preparers that offer tax-time financial products. The tax preparers were selected as national providers of tax preparation services with an online presence, and the results are not generalizeable to the retail tax preparation industry. Three of the providers offer tax preparation services online and through physical retail locations and five of the providers offer their services online only. 
We reviewed these websites to understand the extent to which they disclose fees to the taxpayer for tax preparation services, tax-time financial products, disbursement, and additional products or services, and to review the ease with which these disclosures are accessible. In addition to consumer-facing disclosures we received from providers with whom we met, we searched online for additional disclosures provided by the tax preparers and banks in our review and reviewed seven disclosures from two national tax preparation chains and 12 disclosures from five banks offering tax-time financial products. We then compared the disclosures against IRS and OCC requirements for disclosure for product terms and conditions. IRS established certain disclosure requirements for authorized IRS e-file providers. OCC instructs banks it supervises to make certain disclosures to product consumers. More specifically, we analyzed tax products and fee disclosures obtained from our undercover visits of selected tax preparers, online reviews, and directly from tax preparers and banks to determine the type and timing of disclosures made in these instances and whether they were consistent with IRS disclosure requirements and followed OCC guidance. We conducted this performance audit from July 2017 to April 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We conducted our related investigative work in accordance with standards prescribed by the Council of the Inspectors General on Integrity and Efficiency. 
Appendix II: Analysis of Characteristics Associated with Tax-Time Financial Product Use
This technical appendix outlines the development, estimation, results, and limitations of the econometric model and other data analysis we described in the report. We undertook this analysis to better understand the characteristics associated with the decision to obtain a tax-time financial product.
Data
Federal Deposit Insurance Corporation. To assess the characteristics associated with tax-time financial product use, we used data from the Federal Deposit Insurance Corporation’s (FDIC) National Survey of Unbanked and Underbanked Households for 2011, 2013, 2015, and 2017, which is a supplement of the Current Population Survey. We used the following variables on households and heads of households to examine how various demographic and economic characteristics are related to the use of tax-time financial products:
- Household income.
- Household type.
- Homeownership status.
- Race and ethnicity of the head of household.
- Educational attainment of the head of household.
- Age of the head of household.
- Head of household has children.
- Household used a refund anticipation loan or a tax preparation service to receive a tax refund faster than the Internal Revenue Service (IRS) would provide it in the past 12 months. This is a dummy variable, which equals 1 if the household used products and 0 otherwise.
A refund anticipation loan is a tax-time financial product. Based on our interviews and other research reports, refund anticipation loans and other tax-time financial products (including refund anticipation checks) may be used by consumers to get their tax refund faster than IRS could provide it. We refer to this variable as “used tax-time financial product” for simplicity in the report, and we explain the relevant caveats and limitations below. This variable is the basis for the sample used for this analysis. 
See table 3 for the estimated distributions of these variables for all households, as well as households that used tax-time financial products in 2017. We also examined the relationship between the use of tax-time financial products and being unbanked, as well as the association between using tax-time financial products and alternative financial services (those offered outside the banking system). We used additional data from FDIC’s National Survey of Unbanked and Underbanked Households on the following variables:
- Household used other alternative financial services in the past 12 months, including nonbank check cashing, nonbank money orders, payday loans, and pawn shops.
- Household used prepaid card(s) in the past 12 months.
- Household was unbanked in the past 12 months.
See table 4 for estimated distributions of household responses to questions related to unbanked status and usage of other alternative financial services for all households, as well as households that used tax-time financial products in 2017.
IRS. To further identify tax-filing characteristics associated with tax-time financial product use and trends, we also used data from a probability sample of 2 percent of all electronically filed tax returns from IRS for tax years 2014, 2015, and 2016. In 2016, the sample size was 2,952,418, representing a population of 147,625,598 tax returns. According to IRS, the sample is representative of all electronically filed tax returns for the relevant tax years. In this sample, IRS provided data on the following variables:
- Tax filing method, including online (self-filed using tax software) or through a paid practitioner (including tax preparers with physical storefronts).
- Taxpayer used free filing services from IRS, including the Free File program and free fillable forms.
- Tax filing status, including single, married, and head of household.
- Disbursement options for tax refunds (direct deposit or paper check) or tax balance due.
- Tax refund amount.
- Tax year. 
- Tax-time financial product use, including refund anticipation loans, refund anticipation checks, or no tax-time financial products.
In tax year 2016, we estimated that about 18 percent of taxpayers used a tax-time financial product, plus or minus less than 1 percentage point. We also used IRS data from the Statistics of Income division for tax year 2016 to assess the geographical concentration of product use at the zip-code level. Zip code data from the IRS Statistics of Income division are based on population data that was filed and processed by IRS in tax year 2016. Due to some data suppression from IRS for privacy purposes, zip codes with fewer than 100 tax returns are excluded from the data. As a result, in 2016 the total returns represented in the IRS zip code data are 145,302,140 and the number of tax returns with a tax-time financial product was 21,654,760, meaning about 15 percent of tax filing units in these data used a tax-time financial product.
Methodology
Regression analysis using FDIC data. Using FDIC data, we conducted a multivariate regression analysis to examine the relationship between each explanatory variable and tax-time financial product use. Specifically, we estimated multivariate logistic regression models. Regression models allow us to test significant relationships between economic and demographic variables and the likelihood of using tax-time financial products, while controlling for other factors. We used logistic regression models because our dependent variable is binary. The dependent variable represents whether a household used tax-time financial products. We collapsed “no” and “did not know/refused” into a single category for our regression analysis, so that the dependent variable is equal to 1 if the household used tax-time financial products and 0 otherwise. Logistic regressions allow the relationships between various characteristics and tax-time financial product usage to be described as odds ratios. 
Odds ratios that are statistically significant and greater than 1.00 indicate that households or heads of households with those characteristics are more likely to use tax-time financial products. Odds ratios that are less than 1.00 indicate that households or heads of households with those characteristics are less likely to use tax-time financial products. For categorical variables, this increase or decrease in the likelihood of product use is in comparison to an omitted category, or reference group. For example, the odds ratio for households headed by African Americans is statistically significant and 1.36. This implies that the odds of tax-time financial product use for households headed by African Americans are 1.36 times the odds of use for households headed by whites, holding other factors constant. Put another way, households headed by African Americans are about 36 percent more likely to use tax-time financial products than households headed by white individuals, if other conditions remain constant. This result and others are discussed further in the results section below. We also present 95 percent confidence intervals, which help clarify the statistical significance of the odds ratios. Our baseline estimates were derived from logistic regressions that accounted for the survey features of the FDIC data. Our main regression results used data from the 2017 survey year. We also estimated logistic regressions using data from the 2015, 2013, and 2011 survey years, using the same variables when possible. Our baseline specification includes explanatory variables for race and ethnicity, education, age, household type, income, and homeownership. We used groups of indicator variables or categorical variables to control for all characteristics.
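The odds-ratio arithmetic described above can be illustrated with a short sketch in Python. The counts below are hypothetical, chosen only to reproduce an odds ratio of about 1.36; they are not the survey data:

```python
# Hypothetical 2x2 counts of tax-time financial product use
# for a comparison group and a reference (omitted) group.
users_a, nonusers_a = 136, 9864   # comparison group
users_b, nonusers_b = 100, 9900   # reference group

odds_a = users_a / nonusers_a     # odds of product use, comparison group
odds_b = users_b / nonusers_b     # odds of product use, reference group
odds_ratio = odds_a / odds_b

# An odds ratio above 1.00 means the comparison group is more likely
# to use the products; here, about 36 percent more likely.
print(round(odds_ratio, 2))  # → 1.36
```

In a fitted logistic regression, the same quantity is obtained by exponentiating the estimated coefficient on the group indicator, which is why results from such models are conventionally reported as odds ratios.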
In other specifications, we included controls for children, unbanked status, use of alternative financial services other than tax-time financial products, state indicators, and region indicators to check the robustness of our results. We also assessed the sensitivity of our analyses by restricting the analysis to households that only answered “yes” or “no” to tax-time financial product use. We excluded answers of “did not know/refused,” so that the dependent variable is equal to 1 if the household used tax-time financial products and 0 if the household did not use tax-time financial products. In a more limited analysis, we merged data from the 2017 FDIC data, which is the June 2017 supplement of the Current Population Survey, with the 2017 Annual Social and Economic Supplement, which is the March 2017 supplement of the Current Population Survey. We performed the additional analysis because the March 2017 supplement has data on tax-filing characteristics, including tax credits used by households. Given the structure of the Current Population Survey, some households were surveyed in both the March and June 2017 supplements, and those households comprise the sample used in this part of the analysis. We identified those represented in both supplements using household and person identifiers, as well as data on sex, race and ethnicity, and age. Using this merged sample, we estimated logistic regressions that both did and did not account for the survey features of the data. We included the same explanatory variables as our baseline estimates, along with indicators for use of the Earned Income Tax Credit, Additional Child Tax Credit, and Child Tax Credit. Analysis of IRS data. Using the 2 percent sample of IRS data, we estimated the percentages of tax filers with varying tax-filing characteristics by year and average refund amounts by year. All estimates are weighted at the tax filing unit level. 
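The supplement-matching step described above amounts to an inner join on identifier and demographic fields. A minimal sketch follows, with hypothetical field names and records (the actual CPS identifiers and matching variables differ, and race/ethnicity is omitted here for brevity):

```python
# Hypothetical June supplement records (tax-time product use)
june = [
    {"hh_id": "H1", "person": 1, "sex": "F", "age": 34, "used_product": 1},
    {"hh_id": "H2", "person": 1, "sex": "M", "age": 51, "used_product": 0},
]
# Hypothetical March (ASEC) supplement records (tax-filing characteristics)
march = [
    {"hh_id": "H1", "person": 1, "sex": "F", "age": 34, "eitc": 1},
    {"hh_id": "H3", "person": 2, "sex": "F", "age": 28, "eitc": 0},
]

def match_key(rec):
    # Match on household and person identifiers plus demographics.
    return (rec["hh_id"], rec["person"], rec["sex"], rec["age"])

march_by_key = {match_key(r): r for r in march}
merged = [
    {**june_rec, **march_by_key[match_key(june_rec)]}
    for june_rec in june
    if match_key(june_rec) in march_by_key
]
print(len(merged))  # only household H1 appears in both supplements → 1
```

Only households observed in both supplements survive the join, which is why this part of the analysis uses a smaller sample than the baseline regressions.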
Using the IRS's zip code data from the Statistics of Income division for 2016, we calculated the total number of tax filing units and the number of tax filing units that used a tax-time financial product at the zip code level. Caveats and Limitations Regression analysis using FDIC data. Our results have limitations and should be interpreted with caution. For example, our analysis identifies correlations between characteristics and tax-time financial product use and not causal relationships. Moreover, there may be variables that are correlated with tax-time financial product use that are not included in our models. For example, we are not able to account for community characteristics that may influence the decision to use the products due to data limitations. We used statistical tests for multicollinearity (high intercorrelations among two or more independent variables) and goodness of fit to check the validity of the model to the extent possible, given the use of complex survey data. Our analysis of the characteristics associated with the use of tax-time financial products uses a relatively small number of observations. For example, we observe 798 households that used these products in the 2017 survey year, representing about 2.4 percent of households (plus or minus 0.2 percentage points), and that is the benchmark utilization rate against which the results should be interpreted. Moreover, IRS data indicate that more than 20 million tax filers used tax-time financial products in 2016, representing about 20 percent of tax filers who filed their taxes electronically. These data sets use different units of analysis, and there can be multiple tax filers in one household, especially for those who use the Earned Income Tax Credit. However, comparing the two suggests that the survey data may not include all users of tax-time financial products.
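As a rough check on the precision figures above, the margin of error for a proportion under a simple-random-sampling approximation is 1.96·√(p(1−p)/n). The actual FDIC estimate accounts for the complex survey design, which widens the interval somewhat, but the approximation lands in the same range:

```python
import math

def moe_95(p, n):
    """95 percent margin of error for a proportion, assuming simple random sampling."""
    return 1.96 * math.sqrt(p * (1.0 - p) / n)

# 798 product-using households at roughly a 2.4 percent utilization rate
# implies a sample on the order of 798 / 0.024 ≈ 33,000 households
# (an approximation; the actual survey sample size differs).
n_households = round(798 / 0.024)
moe_pct_points = 100 * moe_95(0.024, n_households)
print(round(moe_pct_points, 2))  # roughly 0.16 percentage points
```

The design effect of the complex survey pushes this toward the reported plus or minus 0.2 percentage points.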
Given the question used to measure the dependent variable, our analysis focuses on those who use tax-time financial products to get their tax refund more quickly. While a key reason people use tax-time financial products is to meet cash needs, there may be other reasons people use the products, including covering the cost of tax preparation. Our results may not generalize to other time periods. There have been a number of changes in the market for tax-time financial products in recent years. Our results may not generalize to all products currently available in the market. However, our results from 2017 are generally similar to those from the 2015, 2013, and 2011 survey years, despite a number of changes to the tax-time financial product market during these years. Our findings suggest that similar types of households have utilized tax-time financial products regardless of industry and market changes, particularly if households used paid preparers and tax-time financial products to expedite their tax refunds. Our analysis focuses on households that used tax-time financial products and accessed them through paid preparers. However, taxpayers also may have accessed specific types of tax-time financial products when they used online software to file their own taxes. For example, individuals who file their own taxes online may use the products to cover the cost of the software that helps them prepare their taxes. The characteristics of people who use products for these reasons may be different from what we found in our analysis. Analysis of IRS data. The IRS data are representative of tax returns filed electronically and not of tax returns filed by other means, including by paper. The results may not generalize to years for which we do not have data. The indicators in the data for specific types of tax-time financial products, including the indicators for refund anticipation loans and refund anticipation checks, have some significant limitations.
In tax years 2014–2016, IRS only allowed tax-time financial products to be coded as refund anticipation loans or refund anticipation checks (that is, there was no code to indicate that two or more products were used together). However, there were some major changes in the industry during this period, particularly with regard to refund anticipation loans, that suggest that these indicators do not measure the same types of products over time. Given the limitations of the definitions of specific tax-time financial products, most of our analysis focuses on the universe of tax-time financial products in the IRS data and not on differences by specific types of products. Results Regression analysis using FDIC data. Our analysis suggests a number of economic and demographic characteristics are associated with tax-time financial product use, particularly when purchased through a tax preparer to expedite the tax refund, after controlling for other factors. In 2017, relatively lower-income households were more likely to use the products than higher-income households. Households headed by single women with families were more likely to use tax-time financial products than households headed by married couples. Furthermore, householders who owned their homes were less likely to use tax-time financial products. African American households were more likely to use the products compared to white households. Finally, relatively younger households were more likely to use the products than older ones. The results of the main specification of our logistic regression are presented in table 5. Our results for other specifications using 2017 data were generally similar. For example, adding an additional control for unbanked status did not substantively change the results.
In alternative specifications that included an indicator for use of other alternative financial services, we found a significant and positive correlation between using tax-time financial products and other alternative financial services, including nonbank check cashing, nonbank money orders, payday loans, and pawn shops. Moreover, including state and region indicators did not substantively affect the results. Using the sample restricted to just "yes" and "no" responses also did not substantively change the results. Our results for other years were generally similar, with some exceptions. For example, in survey years prior to 2017, we found that in addition to African American households, Native American households also were more likely to use tax-time financial products than white households. Moreover, education and children were significant correlates in prior survey years. Analysis of IRS data. We found that nearly 1 in 5 taxpayers who filed their taxes electronically used tax-time financial products each year from 2014 to 2016, while less than 3 percent of filers used free filing services available through IRS during the same period. We also found that in 2016, tax-time financial product use was associated with receiving tax refunds through direct deposit, which is a faster way to receive a tax refund than paper check. Users of tax-time financial products also were more likely to file as heads of household (tax filing status) than taxpayers who did not use tax-time financial products. Moreover, taxpayers who used the products received higher tax refunds on average than taxpayers who did not use the products, especially when they used paid tax preparers to file their taxes. Finally, analyzing the zip code of the filers, we found that use of tax-time financial products was concentrated in some areas of the South and the West.
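The zip-code tabulation underlying the concentration analysis reduces to a group-and-count over tax returns. A minimal sketch, with hypothetical zip codes and returns:

```python
# Hypothetical per-return records: (zip code, used a tax-time product?)
returns = [
    ("30301", True), ("30301", False), ("30301", True),
    ("73101", False), ("73101", False),
]

total_units = {}      # total tax filing units per zip code
product_units = {}    # filing units that used a tax-time financial product

for zip_code, used_product in returns:
    total_units[zip_code] = total_units.get(zip_code, 0) + 1
    if used_product:
        product_units[zip_code] = product_units.get(zip_code, 0) + 1

# Share of filing units in each zip code that used a product,
# e.g., share["30301"] == 2/3 for the records above.
share = {z: product_units.get(z, 0) / n for z, n in total_units.items()}
```

In the actual data, zip codes with fewer than 100 returns are already suppressed by IRS before this step, so no additional filtering is needed for those cells.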
Appendix III: Disclosure of Product and Related Fees and Terms Disclosure of Product Fees and Terms Our limited nongeneralizable review of documents received from selected banks and tax preparers found disclosures generally followed Office of the Comptroller of the Currency (OCC) guidance or Internal Revenue Service (IRS) requirements for fee disclosure, respectively. However, we noted from our undercover visits of selected tax preparers that the extent and clarity of the disclosures offered to customers varied. Furthermore, in our review of selected tax preparers' websites, we found that fees and information about products were not always clearly disclosed. Undercover Visits All nine tax preparers we visited offered the option to pay for the tax preparation fees with the tax refund by using a refund transfer, but they did not always clearly communicate how these options work. For example, three preparers did not disclose the refund transfer fee, and in a few instances, the refund transfer was provided alongside a refund advance and we were not given the option to pay for the tax preparation fees out of pocket. In other cases, the refund transfer fee was disclosed, but the product was not always identified as optional (that is, not required for tax preparation). During six of our undercover visits, our investigators explicitly requested literature on product fees. However, the preparers either stated they did not have the literature available or only provided us with business cards and promotional material. The other three times we did not ask for, and were not offered, literature on product fees, features, or terms. In two of our visits, the tax preparers offered our investigators a refund advance after we expressed an interest in getting the refund quickly. In another two visits, we were offered unsolicited refund advances. When offering the product, these four tax preparers bundled the refund advance with a refund transfer (an optional product).
By adding a refund transfer, the tax preparer effectively added a fee-based product to the refund advance, a product that otherwise is free to the taxpayer. During one of the visits, we were offered a refund advance only after we specifically asked for it. Website Content Analysis We reviewed the websites of eight selected providers of tax preparation services. We found that while these providers generally disclosed product fees, these disclosures were not made in a consistent manner. For example, all eight of the websites we reviewed offered taxpayers the option to use the expected refund to pay for tax preparation fees. Most of the time, the fee associated with this option was not clearly disclosed on the website. Only two of the eight providers clearly disclosed this fee on the products page; the other six did not disclose the fee in a prominent way or at all. In addition, all five providers that offered refund advances fully disclosed fee information for this product. Three of the eight online tax preparation service providers had physical locations in addition to their online presence. Of these three, only one disclosed on its website the refund transfer fee for taxpayers who filed a return in-person at one of their offices. For the second preparer with a physical presence, the refund transfer fee quoted for the online service was significantly lower than the fee we were quoted for in-person services at an office. The third preparer with a physical and online presence did not disclose the refund transfer fee for either the in-person service or online filing. Document Review We received and reviewed seven disclosure documents originated by two national tax preparation companies both of which are electronic return originators (ERO) and 12 bank documents from five banks in the industry. 
We compared the disclosure documents against IRS requirements for disclosure of fees for tax products and we compared the bank documents to OCC guidance related to disclosure of product, disbursement, and additional fees. Both sets of documents in our nongeneralizable review generally disclosed the product fees in accordance with IRS requirements or OCC guidance as appropriate. Bank forms, including disclosures, are presented to taxpayers once they have decided to apply for a tax product. This practice is consistent with OCC's guidance, which states that the details of a product should be provided to consumers before they apply for it. However, our analysis found that almost all of these documents are presented to taxpayers after returns have been prepared and tax preparers have determined the taxpayers were qualified for a tax-time financial product. The timing of when a tax preparer makes these disclosures would make it challenging for a taxpayer to compare product prices from different providers or make more informed purchasing decisions. Moreover, all the ERO documents we reviewed with information on refund advances disclosed that the taxpayer would be receiving a loan and not a refund. However, of the six ERO disclosure documents that disclosed fees, four disclosed additional fees that might be associated with tax refund products, such as disbursement fees. Of the 12 bank documents we reviewed, all disclosed that funds would be sent to the bank if taxpayers used a tax product. Almost all the documents disclosed the fees associated with the tax product and that the fees would be deducted from the refund. And four of five documents related to a loan product disclosed that the taxpayer would be receiving a loan and not a tax refund. The majority of the documents also disclosed that the taxpayer may receive the refund directly from the taxing authority without incurring additional costs and within the same time frame without using a tax product.
All the tax preparer documents and the banks' disclosure documents were brief and written in plain language. However, almost all the bank application documents were longer than four pages and included technical and industry language. Disclosure of Disbursement Fees, Including on Prepaid Cards Based on our document reviews of selected tax preparers and banks and as suggested by our undercover visits of nine selected tax preparers, the disclosure of fees for disbursing funds was inconsistent, particularly around prepaid cards. Prepaid cards are often used to disburse funds from a tax-time product. Based on our analysis of providers' promotional content, in some cases a tax preparer will offer prepaid cards as the only disbursement option. The cards generally carry additional fees for long-term use (such as monthly, withdrawal, reload, and inactivity fees). Prepaid cards usually are reloadable and can be used to pay bills and make retail purchases. IRS does not have guidelines for disclosing fees for the long-term use of prepaid cards. However, OCC requires that banks disclose if a tax product may be used on a long-term basis and disclose fees associated with extended use of the product. During our visits, seven of the nine tax preparers provided the option to have the tax refund deposited on a prepaid card. However, only two of the seven preparers noted any potential fee information associated with the short- or long-term use of prepaid cards. These two preparers said that there was no additional charge to have the taxpayer's refund deposited on a prepaid card, and the other five did not explain whether any fees would be charged for this transaction. Five of the seven preparers that offered a prepaid card explained that the card could be used for transactions other than receiving the tax refund. However, only two of the five disclosed any fee information associated with long-term use of the card.
Another two of the five preparers referred our undercover agents to the issuer of the card for additional information. The remaining preparer did not disclose that additional fees would apply to long-term use of the card. Four of the eight tax preparation websites we reviewed disclosed partial information about fees related to the disbursement of funds to the taxpayer. Three of the eight websites only disclosed disbursement fee information related to use of prepaid cards. We found fee information in one of the eight websites only after doing a word search. Fees associated with the long-term use of prepaid cards were not disclosed by three of the six preparers that offered this disbursement option. Two websites disclosed partial fee information and only one disclosed all the fees and terms associated with the long-term use of a prepaid card. Six of these websites advised the taxpayer to see the terms and conditions of the card, four included a link to the terms and conditions of the card, and two did not include a link. Bank documents generally disclosed the fees associated with different disbursement methods such as paper checks and prepaid cards; however, fees related to the long-term use of prepaid cards were not always disclosed.
Almost half of the documents we reviewed that included the use of a prepaid card did not acknowledge that fees were associated with the long-term use of prepaid cards, while others included only partial information or a general statement that "fees may apply." Appendix IV: Comments from the Internal Revenue Service Appendix V: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Karen Tremba (Assistant Director), Nathan Gottfried (Analyst in Charge), Jessica Artis, Maurice Belding, Evelyn Calderón, Farrah Stone, Kathleen McQueeney, Marc Molino, Neil Pinney, Barbara Roesmann, Jessica Sandler, Erinn Sauer, Erin Saunders-Rath, Michael Walton, and Helina Wong made significant contributions to this report.
Why GAO Did This Study American taxpayers spent at least half a billion dollars in 2017 on financial products—issued by banks, through paid tax return preparers—to help them file taxes and get advances or loans against tax refunds. GAO was asked to review tax-time financial products. Among other things, GAO (1) described market trends and examined IRS data, (2) described characteristics of product users and factors that influence product use, and (3) described product disclosure practices. GAO reviewed fee and product usage data; conducted a multivariate regression analysis to determine user characteristics; and analyzed disclosures of selected providers that are national chains and those of their bank partners. GAO conducted nongeneralizable undercover visits of nine randomly selected tax preparers in the Washington, D.C. area to understand how they communicate fees and terms to taxpayers. Preparers were selected to ensure a mixture of regulatory jurisdictions, among other factors. GAO reviewed laws, regulations, and guidance on the products, and interviewed IRS and other government officials and a nongeneralizable selection of product and service providers, tax preparation companies, consumer groups, and academics. What GAO Found Trends in the market for tax-time financial products since 2012 include the decline of refund anticipation loans (short-term loans subject to finance charges and fees), the rise in use of refund transfers (temporary bank accounts in which to receive funds), and the introduction of refund advances (loans with no fees or finance charges). More recent product developments include increased online access to products for self-filers, higher refund advance amounts, the introduction of new products, and for tax year 2019, the reintroduction of fee-based loans. However, GAO identified some limitations in Internal Revenue Service (IRS) data on product use, including over- or under-counting of certain types of products.
IRS has not communicated these data issues to users and has not updated guidance to tax preparers on how to report new product use. As a result, data users (including federal agencies and policymakers) have inaccurate information to inform their findings and decision-making. Lower-income and some minority taxpayers were more likely to use tax-time financial products, according to GAO analysis of 2017 data from IRS, the Bureau of the Census, and the Federal Deposit Insurance Corporation. Specifically, taxpayers who made less than $40,000 were significantly more likely to use the products than those who made more. African-American households were 36 percent more likely to use the products than white households. Product users tend to have immediate cash needs, according to studies GAO reviewed. For these users, tax-time financial products generally provide easier access to cash and more cash at a lower cost than alternatives such as payday, pawnshop, or car title loans. GAO's undercover visits with nine tax preparers, its review of selected provider websites, and review of documents obtained from selected banks and tax preparers found disclosures generally followed requirements for disclosing fees. However, disclosure practices by some paid tax preparers may pose challenges for consumers. For example: Preparers in GAO's review generally indicated that they present taxpayers with almost all of the documents with fee information after their tax returns have been prepared and the preparers determined the taxpayers qualified for a tax-time financial product. The timing of these disclosures would pose a challenge for taxpayers looking to compare prices for different providers. During six of nine undercover visits, GAO investigators explicitly requested literature on product fees but were not provided such information. 
Refund transfer fee information on websites GAO reviewed sometimes was presented only after the tax preparation process started, was in small print, or could be found only after navigating several pages. As a result, taxpayers may face challenges comparing prices. What GAO Recommends GAO is making two recommendations to IRS to make the collection of product use data more accurate and make data limitations known to users of the data. IRS concurred with both recommendations.
Background The MIECHV program provides voluntary, evidence-based home visiting services for at-risk eligible families with children up to kindergarten entry. HRSA allocates MIECHV program formula grant funds to states based partly on the proportion of children under age 5 living in poverty in each state, among other factors. In fiscal year 2018, states received an average of $6.9 million in MIECHV program formula grant funding, ranging from $1.2 million provided to North Dakota to $21.4 million to California (see appendix I for a list of all states and their fiscal year 2016 through 2018 funding). Generally, the state’s public health or social services department is the lead agency that receives and administers the funds. States target MIECHV program resources to at-risk communities and have the flexibility to tailor the program to serve the specific needs of their communities. States are generally required to provide home visiting services using an HHS-approved evidence-based program model. Currently, HHS has determined through its Home Visiting Evidence of Effectiveness review that 18 evidence-based home visiting models meet HHS-established criteria for evidence of effectiveness, and are therefore eligible for MIECHV funding. States may select programs to implement from the models that have been approved by HHS, or states may choose to implement a home visiting service delivery model that qualifies as a promising approach, as defined in the statute. In MIECHV-funded home visiting programs, professionals meet regularly with families and provide services tailored to the families’ specific needs, such as teaching parenting skills, promoting early learning in the home, or conducting screenings and providing referrals to address caregiver depression, substance abuse, and family violence. 
According to HHS, the MIECHV program builds upon decades of scientific research showing that home visits by a nurse, social worker, or early childhood educator during pregnancy and early childhood have the potential to improve the lives of children and families. From fiscal years 2013 through 2018, the number of families served and number of home visits conducted nearly doubled (see table 1). The MIECHV program is the primary federal program focusing exclusively on evidence-based home visiting, according to HHS. However, in addition to administering the MIECHV program, states may have other home visiting programs that may be supported by funds from other federal programs, such as Temporary Assistance for Needy Families and the Maternal and Child Health Services Block Grant. These home visiting programs may provide services that differ from those provided under the MIECHV program. For example, states may provide home visiting services through these programs that use program models that are different from the MIECHV program models approved by HHS. The MOE requirement in the MIECHV program’s authorizing statute provides that funds provided to an eligible entity receiving a MIECHV grant “shall supplement, and not supplant, funds from other sources for early childhood home visitation programs or initiatives.” To demonstrate their compliance with this statutory requirement, states are required by HRSA to report in their annual grant applications their MOE spending for the prior fiscal year. HRSA provides guidance to states on how to report their MOE spending in the annual NOFOs. 
For example, since fiscal year 2013, the MOE guidance in the NOFOs generally has directed states to only report spending that meets the following criteria: paid for with state general funds, spent in the prior fiscal year on HHS approved evidence-based programs that include home visiting as a primary service delivery strategy, implemented in response to findings from the most current statewide needs assessment, and offered on a voluntary basis to pregnant women or caregivers of children from birth to kindergarten entry. Over time, HRSA has clarified the MOE guidance provided in the NOFOs to help address questions received from states, according to HRSA officials. We previously reported that certain grant design features affect the likelihood that states will use federal funds to supplement, rather than supplant (or replace), their own spending. One such design feature requires grant recipients to contribute their own funds in order to obtain grant funds. Requiring grant recipients to contribute their own funds can take the form of a match or MOE requirement. According to our prior report, matching grants typically contain either a single rate (e.g., 50 percent) or a range of rates (e.g., 50 to 80 percent) at which the federal government will match state spending on a particular program. An MOE requirement, in contrast, requires states to maintain existing levels of state spending on a particular program as a condition of receiving federal funds. Depending on the specific program and its MOE requirement, if a state did not previously spend any state funds on covered activities, then the state could be allowed to maintain MOE spending of $0. The MOE requirement is one of many MIECHV program requirements that HRSA is responsible for monitoring. HRSA also monitors MIECHV’s programmatic and technical requirements, such as evidence-based model implementation, policies and procedures, data collection, and organizational structure and capacity. 
HRSA also monitors fiscal and administrative requirements, such as those related to accounts payable and cash flow, accounting systems, and cost allocations. State-Reported Maintenance of Effort Spending Varied and HRSA Determined States Generally Met the Requirement From fiscal years 2016 through 2018, state-reported MOE spending varied from $0 to more than $25 million, according to our review of MIECHV program grant applications (see fig. 1). For example, 28 states reported MOE spending of $0 in fiscal year 2018. Most of the 23 states that reported MOE spending greater than $0 in fiscal year 2018 reported spending less than $3 million, while three states reported spending more than $9 million. See appendix II for each state’s reported MOE spending for fiscal years 2016 through 2018. State-reported MOE spending does not necessarily reflect all state spending on all home visiting services. When states report their prior year’s MOE spending on their MIECHV grant applications, they are only required to include home visiting spending if it meets the criteria specified by HRSA in the NOFO. In addition to reporting their MOE spending in grant applications, some states also noted that they spent funds on home visiting services that did not meet those criteria. In fiscal year 2017, for example, one state reported that it had spent funds on home visiting services for a non-evidence-based model (i.e., a model not approved by HHS), and the state also funded an evidence-based program with funds other than state general funds. However, the state did not include either in its reported MOE spending because that spending did not meet the criteria for MOE spending in the NOFO. An update to the MIECHV program’s MOE guidance in the NOFO for fiscal year 2018 further impacted some state reported MOE spending. 
The update clarified the MOE guidance, stating that states should only report MOE spending by the recipient entity administering the MIECHV grant, and not report spending by other state agencies. According to HRSA officials, five states decreased their reported MOE spending to $0 because they were now directed to exclude some previously reported home visiting spending. In addition, three other states reported a decrease in their MOE spending ranging from about $1.2 million to about $9.3 million because of this change (see table 2).

HRSA determined that states generally met the MIECHV program's MOE requirement because there was no supplantation of state funds with federal funds, including in states that reported no MOE spending and those that reported decreased MOE spending from the prior fiscal year. States may be permitted to report $0 in MOE spending if their non-federal spending on home visiting does not meet the criteria in the MOE guidance in the NOFO. For example, if a state had not previously funded home visiting programs that met HRSA's MOE criteria for the MIECHV program, then the state could maintain state spending of $0, according to HRSA officials. States may report MOE spending of $0 if state general funds were spent on a home visiting model that was not approved by HRSA, if the state supports an evidence-based home visiting program with funds other than state general funds, or if the state did not support a home visiting program prior to implementation of MIECHV.

HRSA determined that state-reported year-to-year decreases in MOE spending did not constitute supplantation (or replacement) of state funds with federal funds because, as described more fully below, there were valid reasons for the decreased MOE spending, according to agency officials. Based on our analysis of grant applications, 15 states reported decreases in MOE spending from fiscal years 2016 through 2018 (see table 3). 
These decreases ranged from a drop of $75,000 to $71,539 in one state to a drop of $25,207,294 to $0 in another state. According to HRSA officials, there were three different reasons why states might have reported a decrease in MOE spending compared to the prior year:
1. The state made a technical error in its MOE calculation that subsequently was corrected. For example, some states reported a decrease in MOE spending compared to the prior year because the state previously included erroneous funding sources, such as funding for a home visiting program that did not meet the MIECHV program's MOE criteria.
2. Circumstances outside of the state agency's control contributed to the state reporting decreased funding, such as when a state legislature authorized budget cuts that affected home visiting funding or failed to pass a budget. For example, according to HRSA officials, one state experienced state budget challenges in fiscal years 2016 and 2017, which resulted in decreased funding for some home visiting services. The officials said this funding would have been included in the state's reported MOE spending and that these budget reductions resulted in a reduction to the reported MOE spending from the prior year.
3. The clarification to the MOE guidance that HRSA made in the fiscal year 2018 NOFO limited the spending states should report, as previously discussed.

HRSA Employs Several Methods to Monitor State Compliance with the MOE Requirement

HRSA uses several methods to monitor the MIECHV program, and the program's MOE requirement is addressed to some extent as part of each, according to our review of HRSA grants monitoring documentation and interviews with HRSA officials. These monitoring methods include grant application reviews, site visits, and financial assessments, among others. The monitoring methods vary in terms of the extent to which the MOE requirement is specifically examined, who conducts the monitoring, and the frequency of monitoring (see table 4). 
The primary mechanism for monitoring the MIECHV program's MOE requirement is the review of grant applications, according to HRSA officials. HRSA project officers review the MOE chart in states' grant applications for 2 fiscal years to compare state-reported MOE spending (actual non-federal expenditures) and determine whether states maintained their level of spending (see table 5). If an MOE chart is missing or MOE spending information is potentially inaccurate, project officers work with states to resolve the issue.

While HRSA primarily relies on its review of grant applications to monitor state compliance with the MIECHV program's MOE requirement, the agency supplements these reviews with other monitoring techniques, and some of these techniques have identified issues with state-reported MOE spending. For example, operational site visits provide HRSA an opportunity to ask detailed questions about state-reported MOE spending and obtain supporting documentation. As a result of operational site visits, HRSA identified inaccurate state-reported MOE spending in some states. We reviewed four completed site visit reports from 2017—the most recently completed reports at the time of our review—and two of these reports had findings related to inaccurate state-reported MOE spending. For example, one site visit report noted that the state incorrectly included home visiting spending that did not use an evidence-based model in its reported MOE spending.

HRSA also found some deficiencies with states' reported MOE spending through the agency's review of state single audits. According to HRSA officials, there were five state single audits with MIECHV MOE findings from fiscal years 2014 through 2017. We found that four of these audits identified deficiencies with how states monitored and accounted for their MOE spending. For example, one audit found that the state did not have internal controls in place to ensure that state spending met the minimum MOE requirement. 
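The two-fiscal-year comparison that project officers perform in the grant application review can be illustrated with a short sketch. This is a hypothetical illustration only: the data values and the flagging rule are ours, not HRSA's actual procedure, and a flagged decrease would not by itself indicate supplantation, since HRSA follows up with the state to determine whether there was a valid reason for it.

```python
# Illustrative sketch of the two-year MOE comparison described above.
# The values and flagging rule are hypothetical, not HRSA's actual procedure.

def flag_moe_decreases(reported):
    """Compare each state's reported MOE spending across two fiscal
    years and flag states whose spending decreased, for follow-up."""
    flags = []
    for state, (prior_year, current_year) in reported.items():
        if current_year < prior_year:
            flags.append((state, prior_year - current_year))
    return flags

# Example values loosely based on ranges cited in this report.
reported = {
    "State A": (75_000, 71_539),        # small decrease
    "State B": (25_207_294, 0),         # decrease to $0
    "State C": (1_200_000, 1_200_000),  # level maintained
}

for state, decrease in flag_moe_decreases(reported):
    print(f"{state}: reported MOE spending decreased by ${decrease:,}")
```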
In three of the four single audits that identified deficiencies, the state agencies concurred with the findings and prepared corrective action plans to address the deficiencies.

As of June 2019, HRSA officials said they have taken steps, or are planning steps, to modify or provide additional guidance related to how the agency monitors the MOE requirement for the MIECHV program. Specifically:
- HRSA officials told us that beginning with the formula grant NOFO for fiscal year 2019, HRSA added an additional column to the MOE chart for states to provide the expenditures for the 2 years prior to the current fiscal year of the application. According to HRSA officials, this will streamline HRSA's process to compare state-reported MOE spending across 2 prior fiscal years without having to go back to the previous year's grant application.
- In February 2019, HRSA published an internal grants policy bulletin that specifically addressed MOE requirements and the agency's monitoring of those requirements for all HRSA programs.
- HRSA is currently working on MIECHV program standard operating procedures that are intended to clarify staff monitoring roles and responsibilities across the agency. Completion of this resource is targeted for the end of fiscal year 2019.
- HRSA is also planning to add the MOE table to future MIECHV program Final Reports submitted by grantees, beginning with the fiscal year 2017 Final Report, which is due to HRSA in December 2019. According to officials, this will allow for a formal resubmission of MOE spending if there have been any changes since the submission of the most recent grant application.

Agency Comments

We provided a draft of this report to HHS for review and comment. HHS provided technical comments that we have incorporated in the report as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. 
At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of the Department of Health and Human Services, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or larink@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

Appendix I: Maternal, Infant, and Early Childhood Home Visiting Formula Grant Funding

Washington, D.C.

Appendix II: State-Reported Maintenance of Effort Spending

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Elizabeth Morrison (Assistant Director), Andrea Dawson (Analyst in Charge), David Reed, and Kelly Snow made key contributions to this report. In addition, key support was provided by Jennifer Cook, Sarah Cornetto, Thomas James, Jean McSween, Mimi Nguyen, Stacy Ouellette, Michelle Sager, Almeta Spencer, and Matthew Valenta.
Why GAO Did This Study

The MIECHV program provides grants to states to support evidence-based home visiting services for at-risk pregnant women and parents with young children. HHS was appropriated $400 million per year for the MIECHV grant program for fiscal years 2018 through 2022. Families volunteer to participate in the MIECHV program and are provided regular home visits and support services from a nurse, social worker, or other professional. According to HHS, the program builds upon decades of scientific research showing that home visits during pregnancy and early childhood can improve the lives of children and families. States began receiving federal MIECHV program funds in fiscal year 2010, but many states provided home visiting services prior to the MIECHV program using state or other funds. To meet the program's MOE requirement, states are required to maintain home visiting spending that meets MIECHV program criteria.

GAO was asked to review the MIECHV program's MOE requirement. GAO examined (1) what is known about the MOE spending reported by states that receive federal MIECHV program funds and (2) how HHS monitors states to ensure the MOE requirement is met. GAO reviewed MIECHV program notices of funding opportunity for fiscal years 2013 through 2018 and state grant applications for fiscal years 2016 through 2018, the most recent three years available. GAO also reviewed HHS grants monitoring documentation and interviewed HHS officials.

What GAO Found

From fiscal years 2016 through 2018, state-reported maintenance of effort (MOE) spending varied from $0 to more than $25 million for the Maternal, Infant, and Early Childhood Home Visiting (MIECHV) Program, according to GAO's review of MIECHV program grant applications. The program's authorizing statute requires states to meet an MOE requirement. 
MOE requirements in federal programs generally require grantees to maintain a certain level of spending to ensure grantee dollars are not replaced with federal dollars. To demonstrate their compliance with the MIECHV program's MOE requirement, states report in their annual grant applications their MOE spending for the prior fiscal year. HHS determined that states generally met the MIECHV program's MOE requirement because states did not replace state funds with federal funds, including states that reported no MOE spending or decreased MOE spending. States may be permitted to report $0 in MOE spending in certain circumstances; for example, if a state's only home visiting spending was on programs that did not meet MIECHV program criteria. According to HHS officials, state-reported decreases in MOE spending were due to errors in calculations that were subsequently corrected, clarifications to HHS's MOE guidance, or because of circumstances outside of the state agency's control. HHS uses multiple methods to monitor state compliance with the MOE requirement, according to GAO's review of HHS documentation and interviews with HHS officials. The agency's monitoring strategy includes reviews of grant applications, reviews of state single audits, and operational site visits, among other techniques. According to HHS officials, grant application reviews are the primary mechanism used to monitor state compliance, through which HHS compares state-reported MOE spending in grant applications across two fiscal years to determine if states maintained their level of spending. In addition, HHS identifies and resolves issues with state-reported MOE spending through its operational site visits and the agency's review of state single audits.
gao_GAO-19-264
Background

An underride crash can occur during a collision between a passenger vehicle and a large truck—a tractor-trailer or a single-unit truck, such as a delivery or dump truck—if the height difference between the vehicles is sufficient to allow the smaller vehicle to slide under the body of the truck. The front and rear of passenger vehicles are designed to crumple in a crash and absorb the main force of an impact, while sensors detect the impact and activate safety features within the passenger compartment, such as air bags and seatbelt pretensioners. However, the point of impact in an underride crash could be the hood of the passenger vehicle or—more severely—the windshield. Such impacts can result in "passenger compartment intrusion" by the large truck into the passenger area of the smaller vehicle. This intrusion can kill passengers or leave them with severe head and neck injuries.

Underride guards on large trucks essentially lower the profile of the truck's body to be more compatible with that of a passenger vehicle. An underride guard designed to withstand the force of a crash can prevent the car from sliding under the truck and provide an effective point of impact that will activate the car's safety features to protect the car's occupants. Figure 1 shows images from a video depicting the difference in underride crashes with and without passenger compartment intrusion on the rear of a tractor-trailer.

Rear and side underride guards limit a passenger vehicle's ability to go under those areas of a trailer in a crash (see fig. 2). Front guards—currently used on tractors in some other countries, such as European Union countries—can reduce the likelihood that a truck would ride over a passenger vehicle in a crash, a situation sometimes referred to as "override." In addition to saving lives and reducing serious injuries, improving traffic safety—including reducing underride crashes—may provide other benefits to society. 
Specifically, NHTSA has reported that preventing such crashes may result in savings in police and crash investigation resources and reduced property damage, among other things.

Federal requirements, in regulations issued by NHTSA and FMCSA, exist for the installation of rear guards on most large trucks, but there are no federal requirements for side or front guards. NHTSA's mission is to "save lives, prevent injuries and reduce economic costs due to road traffic crashes through education, research, safety standards and enforcement activity." As part of this mission, NHTSA requires that rear guards be installed on most trailers. Federal regulations requiring rear guards of specific dimensions date back to 1952, but the most current regulations—which set force and energy absorption standards, in addition to dimensional requirements—became effective in 1998. These crashworthy rear guards must be designed and tested to protect occupants in a crash of up to 30 miles per hour. In December 2015, NHTSA published a notice of proposed rulemaking (NPRM) that proposed to align U.S. regulations with stronger Canadian rear guard standards. The Canadian standard includes a stronger energy absorption requirement: 20,000 joules—a measurement of energy—as compared to 5,650 joules in the U.S. NHTSA has not taken action on this NPRM since it was proposed in December 2015.

Single-unit trucks that are more than 30 inches above the ground are required to meet the dimensional specifications for rear guards set in 1952 but are not required to meet any force or energy absorption standards. NHTSA introduced an advance notice of proposed rulemaking (ANPRM) in July 2015 that considered requiring rear guards with strength and energy absorption criteria for all newly built single-unit trucks. However, NHTSA has since withdrawn the ANPRM, stating that—based on the comments received as well as analysis of the petitions—the changes being considered were not justified. 
Although there are no federal requirements for crashworthy side underride guards, some crashworthy side guards are being developed. For example, one aftermarket manufacturer has developed a side underride guard that was crash-tested by IIHS and successfully prevented underride crashes in tests at 35 and 40 miles per hour. Similar-looking technologies—including aerodynamic side skirts and pedestrian/cyclist side guards—are installed on some trailers and single-unit trucks, but they are not meant to mitigate underride crashes (see fig. 3).

FMCSA's primary mission is "to reduce crashes, injuries, and fatalities involving large trucks and buses," and it does this, in part, through developing safety regulations. These regulations include requirements for rear guards for trailers consistent with Federal Motor Vehicle Safety Standards and for single-unit trucks that are more than 30 inches above the ground, as well as for multiple types of commercial vehicle inspections that are performed by, for example, motor carriers and drivers to ensure that commercial vehicles are safely operating. Table 1 describes the types of commercial vehicle inspections.

For fatal crashes, including fatal underride crashes, data are collected by law enforcement officials at the location of the crash, aggregated at the state level, and then transferred to NHTSA's Fatality Analysis Reporting System (FARS). FARS is a census of all fatal traffic crashes in the U.S. When a fatal crash occurs, a state or local police officer typically completes a crash report form unique to each state. These forms can include a variety of data fields, such as the time of the crash, weather conditions, and the number of killed or injured persons. In the case of an underride crash, officers may indicate an underride crash occurred in a specific field for recording this crash type or in a narrative field. 
FARS analysts—state employees who are trained by NHTSA's data validation and training contractor to code state crash data for input into FARS—in each state receive and analyze the data in the crash report forms in order to compile a record of the fatal crash. FARS analysts rely on the information within the crash report form in order to enter accurate data. To encourage greater uniformity of crash data, NHTSA, FMCSA, and other agencies and associations cooperatively developed the Model Minimum Uniform Crash Criteria (MMUCC) in 1998. The MMUCC guideline, currently in the fifth edition, identifies a minimum set of motor vehicle crash data elements and their definitions that states should consider collecting, but are not required to collect. The MMUCC is updated about every 4 to 5 years. Prior to publication of each edition, an expert panel from the relevant agencies and associations convenes to review all proposed changes suggested by traffic safety stakeholders to determine what will be included in the MMUCC. According to NHTSA officials, the next updated version of the MMUCC is expected to be issued in 2022.

Underride Crash Fatalities Reported in NHTSA Data Are Relatively Low but Are Likely Undercounted

Although Reported Underride Crash Fatalities Represent a Small Percentage of Total Traffic Fatalities, Underride Crashes Present a Greater Risk of Fatalities or Serious Injuries

From 2008 through 2017, the annual number of fatalities resulting from underride crashes involving one or more trucks reported in FARS ranged between 189 and 253, resulting in an annual average of approximately 219 fatalities (see table 2). Comparatively, the FARS data show an annual average of about 34,700 total traffic fatalities and approximately 4,000 fatalities involving large trucks over the same period. 
Therefore, reported underride crash fatalities on average accounted for less than 1 percent of total traffic fatalities and 5.5 percent of all fatalities related to large truck crashes during this time frame. Although reported underride crash fatalities make up a small proportion of total traffic fatalities, NHTSA officials told us that severe underride crashes—involving passenger compartment intrusion—are more likely to result in a fatality or serious injury than crashes in which the passenger vehicle’s safety features engage and are able to protect the occupants. Officials from four state DOTs we spoke to also stated that while underride crashes are not common, the consequences—fatalities or serious injuries, including head or neck injuries—are more likely to be severe. An official from one state DOT noted that their agency did not consider underride crashes to be a high priority issue. However, upon further review of the state’s underride crash data, this official stated that while underride crashes may occur infrequently, they present a higher risk of fatality than the official had previously realized. An official in another state told us they do not regularly review underride crash data but, upon analysis of the data, found that underride crashes constituted a larger percentage than they anticipated—16 percent—of all fatal large truck crashes in the state in 2017. NHTSA’s FARS data show that most of the reported underride crash fatalities occurred when the crash impact was located at the rear or sides of a trailer. From 2008 through 2017, approximately 45 percent (825 of 1836) of reported fatalities in underride crashes with a recorded point of impact on the large truck occurred when the initial impact of the crash was the rear of the trailer. About 32 percent (590 of 1836) of reported underride crash fatalities were in crashes where the side of the trailer was the point of initial impact. 
Approximately 21 percent (392 of 1836) of reported underride crash fatalities were in crashes with the initial impact at the front of the tractor. These 392 fatalities from crashes involving the front of a tractor could be crashes in which the tractor impacted the rear of a passenger vehicle but might also have occurred in a head-on collision between the car and the tractor. The point of impact for underride crash fatalities with passenger compartment intrusion—the most severe form of underride—had similar distributions, with most reported fatalities occurring when the initial point of impact was the rear or side of the trailer. State and local police officials we interviewed said that the underride crash fatality cases they are familiar with occurred in high speed scenarios, often exceeding 55 miles per hour. For example, officials representing a state police department described scenarios in which passenger vehicles traveling at high speeds rear-ended tractor-trailers stopped on the highway’s shoulder or slowed for highway construction; similar scenarios occurred when tractor trailers failed to slow for stopped traffic and crashed into the rear of passenger vehicles. However, on average, 62 percent of fatalities from underride crashes with passenger compartment intrusion reported in 2008 through 2017 did not include a reported speed. For example, for these fatalities in 2017, 72 percent had speed coded in FARS as missing or not reported. A state and a local police official told us that determining the speed of an underride crash can be challenging due to the often severely damaged condition of the passenger vehicle following an underride crash. Officials representing state police said that they are better able to document whether or not speeding was a factor in an underride crash, rather than an exact speed. 
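The fatality shares cited in this section follow directly from the counts reported above; the short calculation below reproduces them using only figures from the report (the rounding is ours):

```python
# Reproduce the percentages cited above from the reported FARS figures.
avg_underride = 219      # average annual underride crash fatalities, 2008-2017
avg_total = 34_700       # average annual total traffic fatalities
avg_large_truck = 4_000  # average annual large-truck crash fatalities

print(f"Share of total traffic fatalities: {avg_underride / avg_total:.1%}")      # less than 1 percent
print(f"Share of large-truck fatalities: {avg_underride / avg_large_truck:.1%}")  # about 5.5 percent

# Point of initial impact for underride crash fatalities with a recorded
# point of impact on the large truck, 2008-2017 (1,836 fatalities total).
total = 1_836
for label, count in [("rear of trailer", 825),
                     ("side of trailer", 590),
                     ("front of tractor", 392)]:
    print(f"{label}: {count / total:.0%}")
```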
IIHS representatives also acknowledged the difficulty in documenting the speed involved in an underride crash, and further stated that this difficulty brings into question the accuracy of the speed data that are recorded in FARS for underride crashes.

Variability in the Data Collection Process Likely Leads to Underreporting

Stakeholders we interviewed told us that underride crash fatalities are likely underreported in FARS due to several factors, such as variability across states in defining underride crashes, inconsistencies in state crash reporting forms and documentation methods, and limited information provided to state and local police on how to consistently identify and record underride crash data. These factors could contribute to police officers incorrectly and inconsistently documenting underride crash data on the crash report form. As a result, FARS analysts may not have sufficient information to properly categorize the crash as an underride, ultimately affecting the number of underride crash fatalities identified in FARS. Standards for Internal Control in the Federal Government notes that management should use quality information to achieve the entity's objectives. Underreporting of underride crashes would affect the quality of NHTSA's data, thereby affecting the agency's ability to accurately identify the magnitude of underride-related crashes and limiting its ability to make informed decisions on rulemaking or other efforts that would help the agency meet its mission to improve traffic safety.

Other researchers and organizations have also commented on the quality of NHTSA's underride crash data. For example, IIHS representatives told us that they compared underride crash cases in FARS and in NHTSA's and FMCSA's Large Truck Crash Causation Study—a study of large truck crashes from 2001 through 2003—and identified some cases that involved underride crashes but that were not categorized as such in FARS. 
Consequently, IIHS representatives stated that they have used more general rear impact crash data as a proxy for underride crashes due to their finding that underreporting of underride crashes occurs in FARS. Additionally, the University of Michigan's Transportation Research Institute reported that it can be difficult or impossible to identify underride in available computerized crash data files, such as FARS.

Variability in Underride Crash Definition

State and local police officers do not use a standard definition of an underride crash when collecting data at the scene of a crash. NHTSA officials told us that the agency's definition for an underride crash—"a vehicle sliding under another vehicle during a crash"—is found in the FARS coding and validation manual, a document primarily used by FARS analysts and researchers. The FARS coding and validation manual further distinguishes underride crashes as those with and without passenger compartment intrusion. The MMUCC, which includes definitions of various crash-related elements, does not include a definition of an underride crash.

Among officials from the five state police departments we interviewed, underride crash definitions varied, even within states. For example, in one state, an official from one local police department said that a passenger vehicle would need to have over 50 percent of its hood underneath the trailer to constitute an underride crash, while other officials within the state police used a broader definition consistent with NHTSA's definition, i.e., a vehicle going underneath another vehicle by any amount. A state police official and a local police official we interviewed indicated that they would like a clearer definition of the conditions that constitute an underride crash to help them better identify these crashes. 
Further, representatives from NHTSA's data validation and training contractor told us that when they have identified anomalous patterns in underride crash data in FARS, the main reason for these anomalies has been varying definitions of this crash type, as reporting officers have many interpretations of what constitutes an underride crash. A standard definition of an underride crash, for example in the MMUCC, would provide greater assurance that underride crashes are accurately recorded.

Inconsistency in State Crash Reporting Forms and Documentation of Underride Crashes

While all states have a crash report form to gather data following a crash, these state forms vary in whether and how underride crash-related information is collected. Specifically, for the most recent crash report forms we examined from the 50 states and the District of Columbia, as of October 2018:
- 17 state forms have a specific field for "Underride." Eleven of these forms also have data fields for passenger compartment intrusion.
- 32 state forms have a point of impact or area damaged field for "undercarriage." The point of impact field is generally intended to be used to indicate the locations of initial impact or area that was damaged for all vehicles involved in the crash. Some state police and transportation officials we spoke with noted that this field could be used to indicate that an underride crash occurred, as the initial point of impact on a large truck could be the undercarriage in such a crash.
- Two states, California and Hawaii, do not have a data element related to underride crashes or undercarriage on their state crash report forms.

The presence of an underride field in state crash report forms may affect the extent to which underride crash fatalities are captured in FARS. 
For example, we observed that after a state revised its form to remove the underride field, the number of reported underride crash fatalities significantly decreased, potentially indicating that underride crashes were being underreported after the change. Conversely, in another state, we observed that the number of reported underride crash fatalities significantly increased following the addition of an underride field to the crash report form, potentially indicating that underride crashes were being reported more accurately following the change. States have their own discretion to develop crash report forms based on several factors that may be particular to each state. For example, states include or exclude certain data elements on their crash report forms based on the traffic safety priorities within that state. Officials we interviewed from two state police departments told us that they do not have an underride field on their crash report forms because underride crashes are not a traffic safety priority for them. In another state, state DOT officials told us that they chose to include an underride field on the crash report form to better align with the FARS data fields, including those fields related to underride. States may include certain data elements on their crash report form based on the recommended data elements in the MMUCC. However, while the MMUCC was developed to encourage greater uniformity of crash data, its guidelines are voluntary, and it does not currently include references to underride or override crash data elements. In its June 15, 2017, report, the Post-Accident Report Advisory Committee—a group appointed by the FMCSA Administrator to provide input on additional data elements to be included in police accident reports involving commercial motor vehicles—suggested that MMUCC data elements be updated to include a collection of information about whether underride and override are involved in a crash. 
However, according to the MMUCC’s standard development process and NHTSA officials, to adopt new data elements, the entire MMUCC expert panel—which is comprised of stakeholders representing NHTSA, FMCSA, the Governors Highway Safety Association, states, data collectors, data managers, data users, and safety stakeholders—must reach at least 70 percent agreement for approval of new changes to the MMUCC. Under the MMUCC’s standard development process, the MMUCC expert panel will consider recommendations and proposed changes to the MMUCC guidelines, including those proposed by NHTSA in the months preceding the next MMUCC update in 2022. In states that do not include a specific underride crash field in the state crash report form, state and local police officers we interviewed told us that officers responding to a crash may describe underride crashes in the diagram or narrative fields of the form. However, these officers said that a police officer may inappropriately document an underride crash as a rear impact crash. Similarly, officers may categorize the crash as both an underride and an override crash, which NHTSA’s FARS coding and validation manual indicates would be incorrect. Selected state officials told us that unless the officer documenting the crash specifically describes an underride crash in the narrative field, FARS analysts at the state level who review the crash report forms will not have the information to know if a crash involved underride. Police officers we interviewed in states that include “undercarriage” rather than a specific underride crash field in the crash report form told us that they may use the option as a proxy for an underride crash; however, this field may be used inconsistently. 
For example, in one state, state police officers said they would select “undercarriage” on the crash report form to reflect an underride crash, whereas a local police officer in the same state said that local officers would not use that field to indicate that an underride crash occurred and, instead, would document the underride crash in the narrative. NHTSA’s data validation and training contractor told us that it is not a recommended practice for officers to select “undercarriage” as a proxy for underride crashes, noting that this inconsistency could lead to inaccuracies in the resulting FARS data. Including underride as a recommended data field in the MMUCC would provide greater assurance that underride crashes are accurately recorded.

Limited Information Provided to Police

State and local police officials we interviewed said that they receive limited or no training on how to identify and record information for underride crashes. Officials from all five state police departments we spoke with said that they develop their own crash reporting training for police. This training emphasizes overall crash reporting with a limited focus, if any, on underride crashes. An official representing one state police office said that the state police provide training on how to complete crash reports and general traffic safety, whereas FARS analysts—often within the state DOT—are concerned with the quality of data collection for data analysis purposes, which is not a primary focus of law enforcement training. State and local police officials we interviewed said they generally have limited or no follow-up or continuing training on crash reporting beyond initial police academy training. Local police we interviewed also told us that while they develop and implement their own crash report training, they may also receive training from the state police. Some state police officers we spoke with said that they conduct training for local police departments when requested.
One local police official we spoke with said that officers have limited exposure to underride crashes in these training sessions and that the average officer would likely not know how to appropriately identify an underride crash. Officials we spoke with from three state and two local police departments stated that additional information to police departments on underride crashes could help improve data collection and overall traffic safety. NHTSA provides training to FARS analysts on reviewing crash report forms and appropriately inputting data in FARS, but does not provide information on crash data collection to state and local police who initially collect the data. According to NHTSA’s data validation and training contractor, the contractor trains FARS analysts on identifying underride crashes. Specifically, the contractor trains FARS analysts to review the crash report forms for sufficient detail to meet the definition of an underride crash and determine if a crash involved underride for entry in FARS. NHTSA officials told us that it is the responsibility of state police academies to train law enforcement officers to conduct on-site investigations and complete crash report forms. NHTSA officials said that they do not currently provide underride identification information directly to state and local police who initially collect the crash data. However, NHTSA does provide information to state and local police on other topics, such as improving traffic safety and driver behavior, for example through DOT’s Enforcement and Justice Services Division. NHTSA officials acknowledged that it would be feasible to also provide information on identifying and recording underride crashes. Standards for Internal Control in the Federal Government notes that management communicates quality information externally through reporting lines so that external parties can help the entity achieve its objectives and address related risks. 
By providing information to state and local police departments—such as materials or instruction on the definition of an underride crash and how to appropriately document these crashes—NHTSA could improve the quality and completeness of underride crash data that police collect.

Underride Guards Are in Varying Stages of Development, and Gaps Exist in Inspection and Research

Underride guards for the rear, side, and front of tractor-trailers and single-unit trucks are in varying stages of development. NHTSA has issued an NPRM proposing to strengthen rear guard requirements for trailers, and estimates that about 95 percent of all newly manufactured trailers already meet the stronger requirements. While FMCSA requires commercial vehicles to be inspected to ensure they are safe, rear guards may not be regularly inspected. Side underride guards are being developed, but stakeholders identified challenges to their use, such as the stress on trailer frames due to the additional weight. NHTSA has not performed research on the overall effectiveness and cost of these guards, and manufacturers we interviewed told us that they are hesitant to invest in developing side underride guards without such research. In response to a 2009 crash investigation, the National Transportation Safety Board (NTSB) recommended that NHTSA require front guards on tractors. NHTSA officials stated that the agency plans to complete research to respond to this recommendation in 2019. However, stakeholders generally stated that the bumper and lower frame of tractors typically used in the U.S. may mitigate the need for front guards for underride purposes. NTSB has further recommended that NHTSA develop standards for crashworthy underride guards for single-unit trucks—such as dump trucks—but NHTSA recently concluded that these standards would not be cost effective.
Most Newly Built Trailers Are Equipped with Rear Guards That Exceed NHTSA Requirements

All seven of the eight largest trailer manufacturers we spoke with—together responsible for about 80 percent of the trailers on the road in the U.S.—told us that they have been building to the stronger Canadian rear guard standard since those requirements became effective in 2007. Some manufacturers said that since trucking company operations may span the border between Canada and the U.S., it was easier to build to a single standard rather than manufacture trailers that comply with either the Canadian requirements or the U.S. requirements. NHTSA is considering strengthening the U.S. requirements for rear guards to align with the Canadian rear guard standards. As part of the 2015 NPRM on strengthening the U.S. requirements to the level of the Canadian standards, NHTSA estimated that 93 percent of all newly manufactured trailers in the U.S. are already equipped with a rear guard that meets the Canadian standard. In July 2018, NHTSA officials told us that figure had increased to 95 percent of all newly manufactured trailers, with the remaining 5 percent built by smaller manufacturers who may not wish to incur the additional cost or weight of a Canadian-style rear guard. Trucking industry stakeholders told us that the average lifecycle of a trailer varies: one said the lifespan is 10 to 15 years and another stated a 12-year lifespan. NHTSA performed a cost-benefit analysis as part of the 2015 NPRM in which it preliminarily estimated that requiring newly manufactured trailers to include rear guards built to the new standard would be cost-beneficial. Specifically, NHTSA’s analysis found that the cost of a rear guard that meets the Canadian standard was approximately $500 per trailer, which was $229 more than a guard that complies with the existing U.S. requirement. NHTSA’s analysis also found that a Canadian-style rear guard was heavier than its U.S. counterpart.
The rear guard NHTSA studied that complies with current U.S. regulations weighed 172 pounds, whereas those meeting the Canadian standard weighed between 191 and 307 pounds. Regarding benefits, NHTSA estimated in 2015 that—accounting for the trailers that already meet the stronger standard—adopting the Canadian standard would prevent about one fatality and three serious injuries per year. According to DOT, these estimates may have since changed, as a higher percentage of trailers are now manufactured to meet the Canadian standards. Comments on this NPRM varied. Some comments were in support of the measure, citing the safety benefits. Other comments noted that automated driver assistance technology may offer better outcomes. Further, some comments called for NHTSA to take additional steps to improve the safety capabilities of rear guards, such as allowing fewer exemptions from compliance. NHTSA has not taken action on this NPRM since it was proposed in December 2015. NHTSA officials we interviewed could not provide information on when the NPRM would move forward. The largest trailer manufacturers have also taken steps to further improve the design of rear guards to prevent underride crashes in a range of scenarios. Because IIHS found that the weakest points for rear guards are generally the outer edges furthest from the center of the guard, it created a procedure to test the ability of rear guards to withstand crashes at different overlap points, starting at the center of the guard and moving closer to the endpoints. Specifically, this procedure involves three crash tests using full-width, 50-percent, and 30-percent overlap of the front of the car with the rear guard, as depicted in figure 4. According to IIHS, as of September 2018, all of the top eight trailer manufacturers operating in the U.S. have successfully passed these tests.
Some of these manufacturers provide the improved rear guards as a standard feature on all new trailers, while others offer them as an option for purchase. In addition to strengthening rear guards on trailers, advancements in automatic braking systems in passenger vehicles may help reduce the frequency of underride crashes. These systems, though not federally required, have been available and installed in some passenger vehicles and tractors and are designed to detect objects or other vehicles in front of the vehicle and automatically apply the brakes to avoid or lessen the severity of an impact. According to NHTSA, 20 automakers representing more than 99 percent of the U.S. automobile market have agreed to make automatic braking systems a standard feature on newly built passenger vehicles starting in 2022. These braking systems may help reduce the number of passenger vehicles striking the rear of tractor-trailers, potentially reducing the frequency of underride-related crashes, fatalities, and injuries.

Rear Guards in Use on Roads May Not Be Regularly Inspected

FMCSA regulations require commercial vehicles operating in interstate commerce to be inspected to ensure they are safe. However, the rules do not specifically include an inspection of the rear guard. After a rear guard has been installed on a new trailer, stakeholders told us that the guard may be damaged during normal use (see fig. 5), for example by backing into loading docks. However, only certain roadside inspections—which are performed at random or if an officer suspects a problem—specifically require the rear guard to be inspected. Specifically, of the eight types of roadside inspections, representatives of the Commercial Vehicle Safety Alliance (CVSA)—which helps develop roadside inspection standards—told us that four require the rear guard to be inspected.
Stakeholders we interviewed told us that a trailer could go its entire lifecycle—estimated as typically 10 to 15 years—without ever being selected for a roadside inspection. FMCSA data show that although rear guard violations may be identified during roadside inspections, they constitute a small percentage of all violations. For example, out of about 5.8 million violations identified during roadside inspections in 2017, approximately 2,400, or 0.042 percent, were rear guard violations. In an effort to learn more about rear guard violations, CVSA encouraged commercial vehicle inspectors to specifically focus on rear guards during their roadside inspections performed from August 27 through 31, 2018. According to these data, for the more than 10,000 trailers inspected during that 5-day time frame, about 900 violations (about 28 percent of all violations identified) for rear guard dimensional or structural requirements were identified, including almost 500 instances where the rear guard was cracked or broken, or missing altogether. A CVSA representative stated there was a greater percentage of violations identified because inspectors were asked to specifically focus on the rear guard during this effort. Inspectors performing annual inspections—which can include employees of the motor carrier—rely on a checklist established in FMCSA regulations, known as “Appendix G.” This appendix specifies what equipment must be inspected, such as the brake system, lighting, and wheels. Appendix G does not list the rear guard as an item to be inspected. In August 2018, CVSA petitioned FMCSA to amend Appendix G to include rear guards as an item to be inspected. According to CVSA, in September 2018, FMCSA provided acknowledgment of its intent to review CVSA’s petition. FMCSA’s regulations, including those regarding commercial vehicle inspections, help the agency achieve its safety mission of reducing crashes, injuries, and fatalities. 
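The inspection statistics above can be sanity-checked with quick arithmetic. The sketch below uses approximate figures from this report; the implied total for the 2018 focused effort is a derived estimate, not a number the report states.

```python
# Quick arithmetic check of the rear guard violation shares described above.
# All inputs are approximate figures from the report.

total_violations_2017 = 5_800_000   # all violations found during 2017 roadside inspections
rear_guard_2017 = 2_400             # rear guard violations identified in 2017

share_2017 = rear_guard_2017 / total_violations_2017 * 100
print(f"2017 rear guard share of violations: {share_2017:.3f}%")  # roughly 0.04 percent

# CVSA's August 27-31, 2018 effort: about 900 rear guard violations,
# described as about 28 percent of all violations identified that week.
rear_guard_blitz = 900
blitz_share = 0.28
implied_total = rear_guard_blitz / blitz_share  # derived estimate, not stated in the report
print(f"Implied total violations during the 5-day effort: about {implied_total:,.0f}")
```

The several-hundred-fold jump in the rear guard share of violations—from roughly 0.04 percent in routine 2017 inspections to about 28 percent during the focused effort—reflects where inspectors directed their attention rather than a change in trailer condition.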
Further, Standards for Internal Control in the Federal Government notes that management should use quality information to achieve the entity’s objectives. Prior to receiving CVSA’s petition to amend Appendix G, FMCSA officials told us that not including rear guards in Appendix G does not affect commercial vehicle safety, as FMCSA regulations require all parts and accessories specified within the regulations—which includes the rear guard—to be in safe and proper operating condition at all times. According to DOT, the agency does not believe that motor carriers are ignoring the application of these regulations to rear guards. However, without explicitly including the inspection of the rear guard in Appendix G, there is no assurance that rear guards in operation will be inspected at least annually to ensure they perform as designed to prevent or mitigate an underride crash. This omission potentially affects FMCSA’s safety mission to help ensure the safe operation of tractor-trailers on the nation’s highways.

Side Underride Guards Are Being Developed, but Limited Information Exists to Assess Overall Effectiveness and Cost

While not currently required in the U.S., crashworthy side underride guards are being developed, and their use could entail both costs and benefits to society. For example, there is currently one IIHS-crash-tested aftermarket manufacturer of side underride guards in North America, which has sold about 100 sets of side underride guards. According to the manufacturer, the cost of the guards starts at about $2,500 per trailer, though the price could decrease in the future as the manufacturing process becomes more efficient and greater quantities are built and sold. These side underride guards have been crash-tested by IIHS and successfully prevented underride crashes in tests at 35 and 40 miles per hour. As a result, the benefits of such guards might include a reduction in the number of fatalities in underride crashes.
The manufacturer estimated that more widespread use of side underride guards would occur over the next 3 to 5 years. However, the manufacturer also said that more information on how side underride guards might affect everyday operations is needed before more widespread adoption by the industry. Additionally, some trailer manufacturers told us that they are in the process of developing side underride guards, but none are currently available for purchase. For example, a representative from one trailer manufacturer developing its own side underride guards estimated that it would be feasible to have these guards designed, tested, and available for sale within the next 2 years. However, the representative said that the manufacturer is hesitant to invest additional resources because of uncertainty about potential future regulatory requirements. Specifically, the manufacturer does not want to invest additional resources to develop a side underride guard that might later have to be redesigned to meet federal requirements, if such requirements were to be established and to differ from the manufacturer’s design specifications. Representatives from several trailer manufacturers, trucking industry organizations, and police departments we spoke with cited challenges with the use of side underride guards that would need to be addressed prior to widespread adoption by the industry. Officials from Canada and the European Union—which also do not require the use of side underride guards that can withstand the force of a vehicle crash—noted similar challenges.

Weight: According to the aftermarket side underride guard manufacturer, the side underride guards currently available for sale weigh between 575 and 800 pounds in total.
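To put the reported guard weights in context, a rough, illustrative calculation shows the share of a tractor-trailer’s maximum allowable weight that a side underride guard would consume (the 80,000-pound figure is the general federal gross vehicle weight limit):

```python
# Illustrative weight impact of side underride guards.
# 575-800 lb is the manufacturer-reported guard weight range;
# 80,000 lb is the general federal gross vehicle weight limit.

FEDERAL_MAX_GROSS_LB = 80_000

for guard_lb in (575, 800):
    share = guard_lb / FEDERAL_MAX_GROSS_LB * 100
    print(f"{guard_lb} lb guard: {share:.1f}% of the federal gross weight limit")
```

For a carrier already operating at the weight limit, that roughly 0.7 to 1.0 percent of gross weight translates directly into lost payload, which underlies the industry concern about needing more trailers to ship the same volume of goods.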
Representatives from two trucking industry organizations we spoke with stated that the additional weight from side underride guards may require carriers to put more trailers on the roads to ship goods in order to stay under federal maximum weight restrictions (generally 80,000 pounds). Federal regulations allow for certain exemptions in the federal weight limits, such as for auxiliary batteries. Some stakeholders also stated that the additional weight from side underride guards would increase fuel costs (assuming all else remains the same) and could put stress on the trailer’s frame, reducing its lifespan and potentially increasing maintenance costs.

Road clearance: Some stakeholders we interviewed—including two trucking industry organizations, a tractor-trailer fleet operator, and a trailer manufacturer—stated that side underride guards limit a trailer’s clearance from the ground, which could limit the geographic locations that could be serviced by a trailer or—if the guards drag along the ground—result in damage to the guards or even the trailer. Conditions involving limited clearance could include traveling over raised railroad crossings or navigating sloped loading docks. While aerodynamic side skirts may also drag along the ground in similar conditions, they are more flexible than side underride guards and less likely to damage the trailer.

Effects on under-trailer equipment and access: Installation of a side underride guard may limit access to or displace equipment currently underneath a trailer, including spare tires, fuel tanks, and aerodynamic side skirts. Additionally, the rear axles of some trailers can be adjusted to evenly distribute the weight of the trailer’s cargo. For example, trailer manufacturers told us that when the axle is moved to the furthest rear position of the trailer, a fixed-length side underride guard could leave a gap large enough for a car to still have an underride crash.
Further, some police officers we interviewed told us that it could be challenging to perform roadside inspections of trailers equipped with side underride guards because the guards could limit access to the underside of the trailer. Representatives from three trucking industry organizations we spoke with indicated that crash avoidance technologies may be more effective than underride guards at minimizing underride crashes, including side underride crashes. However, while these technologies have the potential to mitigate crashes, it is unlikely that they will be available on a more widespread scale in a time frame soon enough to render underride guards unnecessary. While automatic braking systems for passenger vehicles are to become a standard feature on newly built vehicles starting in 2022, IIHS representatives told us that these systems are less effective at detecting and mitigating side crashes than rear or frontal crashes. Specifically, the representatives stated that automatic braking systems would not be effective in situations where the passenger vehicle impacts the side of a trailer at an oblique angle rather than at a perpendicular angle. According to stakeholders we interviewed, it will take a considerable amount of time for the passenger fleet to adopt automated vehicle technologies, with some stating that there will be a mix of automated and non-automated technologies on the nation’s highways for decades—longer than the 3 to 5 years estimated by the side underride guard manufacturer for more widespread use of these guards. NHTSA recently issued a study on the safety performance of certain materials used for side underride guards. However, NHTSA has not performed research on the overall effectiveness, associated costs, or design of side underride guards.
NHTSA’s mission is to “save lives, prevent injuries and reduce economic costs due to road traffic crashes, through education, research, safety standards and enforcement activity.” Additionally, a statement of federal principles on regulatory planning and review indicates that in deciding whether and how to regulate, agencies should assess all costs and benefits of available alternatives, including the alternative of not regulating, and that the agency should base its decisions on the best reasonably obtainable scientific, technical, economic, and other information. Additional research on the effectiveness and cost associated with side underride guards could better position NHTSA to determine whether these guards should be required and, if so, appropriate standards for their implementation. Such research may also help provide information to address the challenges stakeholders cited with side underride guards.

Stakeholders Generally Agreed That North American Tractor Designs May Mitigate the Need for Front Guards for Underride or Override Purposes

In general, there are two types of tractors used in tractor-trailer combinations: conventional tractors, wherein the tractor is lower to the ground and the engine is in front of the cab where the driver sits, and “cab-over” tractors, which are designed so the driver sits atop the engine (see fig. 6). Conventional tractors are generally used in North America, whereas cab-over tractors are used more frequently in the European Union. Since 2000, the European Union has required tractors to include front guards to improve the protection of passengers in cars involved in head-on collisions with tractors. These guards are designed to lower the front profile of a cab-over tractor to be more compatible with that of a passenger vehicle to reduce the potential for underride or override, and to help absorb the force of a collision.
Some conceptual designs for front guards on conventional tractors have been proposed by researchers in the U.S., but there are no designs available for purchase or installation as there are for side underride guards. Some research organizations have developed computer models of front guards, but these guards have not been produced for U.S. tractor configurations. Representatives from three trucking associations we spoke with stated that their members were not researching, producing, or installing front guards. A government official from Canada—where the conventional tractor design is also commonly used—said that they did not know of any tractor manufacturers or truck fleets that use front guards. Representatives from a tractor manufacturer that operates in both the U.S. and the European Union told us that front guard designs currently used in the European Union would not be compatible with conventional tractors used in the U.S., stating that these guards would need to be installed in the same space that the bumper, frame, and some equipment—including crash avoidance technologies—already occupy. The design of conventional tractors may mitigate the need for front guards for underride or override purposes, as the lower bumpers and frame make the height of conventional tractors more compatible with passenger cars. A 2013 NHTSA study found that tractors with lower bumper heights were less likely to be involved in an override crash than those with higher bumper heights. Government officials from the European Union told us that they did not see the need for conventional tractors to have front guards, since the lower bumpers essentially function as guards in frontal crashes. Officials from a state DOT, a state police department, and a local police department all stated that they do not see the need for front guards because the tractor is already so low to the ground. 
Further, state and local officials we spoke with noted that the front underride crashes they have seen often occurred at higher speeds, such as when a truck fails to stop for congested traffic or in a head-on collision. In these cases, the speed combined with the much greater weight of the truck could cause the truck to override the car (in the first scenario) or the car to underride the tractor (in a head-on collision). According to these officials, the force of the crash at those speeds—regardless of whether there was underride or override—would very likely be unsurvivable. Additionally, automatic braking systems in tractors and passenger vehicles may further mitigate the need for front guards for underride or override purposes. These technologies—which, according to a tractor manufacturer we interviewed, have been available and installed in some tractors—can potentially stop a tractor from, for example, overriding a passenger vehicle by automatically applying brakes in situations where a potential rear-end collision is detected. Representatives from a tractor manufacturer told us that about 70 to 80 percent of all newly manufactured tractors it produced are equipped with these braking systems and estimated that more than 50 percent of newly built tractors sold by all manufacturers in the U.S. include these systems. Additionally, front guard researchers we spoke with told us that some front underride guard systems would be optimally effective when paired with automated technologies, such as automatic braking systems. While stakeholders generally agreed that North American tractor designs may mitigate the need for front guards for underride or override purposes, NTSB has called for greater use of front guards.
Specifically, in 2010, NTSB recommended that NHTSA, among other things, develop performance standards for front guards and, after doing so, require all newly manufactured trucks weighing more than 10,000 pounds to install these front guards. NTSB issued these recommendations based on its investigation of a June 2009 multi-car crash on an Oklahoma interstate, in which the driver of a tractor-trailer failed to slow down for traffic stopped on the roadway. NTSB reported that the tractor-trailer’s high impact speed and structural incompatibility with the passenger vehicles contributed to the severity of the crash. As of December 2018, NHTSA had not implemented NTSB’s recommendations. NHTSA reported to NTSB in 2014 that it was in the process of conducting further examination of crash data, but that efforts in developing standards for front guards are a secondary priority to upgrading rear guard standards. NTSB stated that NHTSA’s response was disappointing and that it continues to believe that NHTSA actions are needed to implement this recommendation. Additionally, NTSB recommended in 2015 that NHTSA develop performance standards and protocols for assessing forward collision avoidance systems in commercial vehicles, which could also help to stop a tractor from overriding a passenger vehicle. According to NTSB, although NHTSA has performed some research on this technology, NTSB has deemed NHTSA’s responses unacceptable. NHTSA officials told us that the agency anticipates completing relevant research and testing in 2019 that would give the agency the information it needs to make appropriate decisions on next steps related to these NTSB recommendations.

The Wide Variety of Single-Unit Truck Configurations Creates Challenges for Implementing Crashworthy Underride Guards

FMCSA regulations require rear guards for certain single-unit trucks, such as delivery or dump trucks, that are more than 30 inches above the ground.
However, according to representatives of the trucking industry we interviewed as well as NTSB, the wide variety of single-unit trucks makes it challenging to develop a one-size-fits-all requirement for underride guards. Single-unit trucks can vary widely with respect to weight, dimensions, and purpose and can include large pick-up trucks, fire trucks, and dump trucks. The FMCSA regulations exempt certain single-unit trucks—such as those already low to the ground—from the requirement to have a rear guard if the vehicle is constructed and maintained such that the body or other parts of the vehicle provide rear-end protection comparable to rear guards required for other single-unit trucks. A trucking industry representative we spoke with said that his association was not aware of any manufacturers currently designing or planning to design crashworthy rear, side, or front underride guards for single-unit trucks due to the variability of single-unit truck design. Some U.S. cities, such as Boston, require that pedestrian/cyclist side guards be installed on municipally owned single-unit trucks, but these guards are not designed to mitigate a passenger vehicle underride crash. Research shows that crashes involving single-unit trucks occur less often and are less likely to cause serious injuries and fatalities than those involving tractor-trailers. For example, a 2013 NTSB study of crash data from 2005 through 2009 found that single-unit truck crashes occurred less often, resulted in fewer fatalities, and were less likely to cause serious injuries than tractor-trailer crashes. NHTSA has also acknowledged that single-unit trucks represent the majority of the registered heavy vehicle fleet, but account for a lower percentage—27 percent—of rear-end fatalities.
To help address underride crash fatalities involving single-unit trucks, as part of its 2013 study, NTSB recommended that NHTSA develop standards for crashworthy rear, side, and front guards for single-unit trucks, as well as devote efforts to crash avoidance technologies and include more variables in FARS to improve data collection. NTSB also noted that, because of the variability in vehicle design and cargo body styles, safety countermeasures for single-unit trucks would need to be adapted for different truck types to address technical challenges to their implementation. NHTSA published an ANPRM in 2015 that considered requiring rear guards with strength and energy absorption criteria for all newly built single-unit trucks. However, NHTSA subsequently found that the costs of this requirement outweighed the benefits. Comments on this ANPRM varied. For example, the American Trucking Associations stated that it believed NHTSA underestimated the costs associated with installing crashworthy rear guards for single-unit trucks. In contrast, IIHS, in its comments on the ANPRM, questioned NHTSA’s assumptions and stated that the agency was undervaluing the benefits and overestimating the costs. Specifically, IIHS noted that NHTSA overestimated the additional weight of the rear guards, thereby overestimating the cost by about 35 to 40 percent. IIHS also stated that due to concerns with the underlying data, NHTSA underestimated the number of crashes into the rear of single-unit trucks with passenger compartment intrusion. NHTSA officials told us that they disagreed with IIHS’s assessment and stated that the data NHTSA used in the ANPRM were valid and appropriate. The ANPRM also considered requiring single-unit trucks to install red and white retroreflective tape meant to increase the visibility of these trucks, especially in the dark.
NHTSA found that this requirement would be cost-effective at preventing or mitigating crashes involving single-unit trucks. However, NHTSA has since withdrawn the ANPRM, stating that—based on the comments received as well as analysis of the petitions—the changes being considered were not justified.

Conclusions

The likely underreporting of underride crashes and fatalities due to variability in the data collection process limits NHTSA’s ability to accurately determine the frequency of such crashes. An underride field in MMUCC and additional information from NHTSA on how to identify and record these crashes would provide greater assurance that state and local police officers are accurately reporting data on underride crashes. Such reporting would, in turn, enable NHTSA to better identify and support measures—such as rulemakings and research efforts—to help address this issue. While the stronger rear guards being voluntarily implemented by the largest trailer manufacturers show promise in mitigating the potentially devastating effects of rear underride crashes, rear guards will only be effective if they are properly maintained and replaced when damaged. The lack of specific requirements that rear guards be inspected annually for defects or damage potentially affects the safety of the traveling public and FMCSA’s ability to achieve its safety mission. Finally, designs of crashworthy side underride guards show promise at mitigating underride crashes, but manufacturers may be reluctant to move forward with further development of these types of guards without information from NHTSA on the effectiveness, cost, and implementation standards for these devices. With additional research on resolving the challenges associated with side underride guards, these guards may be closer to being a feasible solution than automated driver assistance technologies designed to prevent or mitigate side impacts that could lead to an underride crash.
Recommendations for Executive Action

We are making the following four recommendations to DOT:

The Administrator of the National Highway Traffic Safety Administration should recommend to the expert panel of the Model Minimum Uniform Crash Criteria to update the Criteria to provide a standardized definition of underride crashes and to include underride as a recommended data field. (Recommendation 1)

The Administrator of the National Highway Traffic Safety Administration should provide information to state and local police departments on how to identify and record underride crashes. (Recommendation 2)

The Administrator of the Federal Motor Carrier Safety Administration should revise Appendix G of the agency’s regulations to require that rear guards are inspected during commercial vehicle annual inspections. (Recommendation 3)

The Administrator of the National Highway Traffic Safety Administration should conduct additional research on side underride guards to better understand the overall effectiveness and cost associated with these guards and, if warranted, develop standards for their implementation. (Recommendation 4)

Agency Comments

We provided a draft of this report to DOT for comment. In its written comments, reproduced in appendix II, DOT stated that it concurred with our recommendations. DOT also provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Transportation, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or flemings@gao.gov.
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

Appendix I: Objectives, Scope, and Methodology

Our work for this report focused on truck underride crashes, and the U.S. Department of Transportation’s (DOT) efforts related to this issue. In particular, this report examines (1) the data DOT reports on underride crashes, and (2) the development and use of underride guard technologies in the U.S. For both objectives, we conducted a literature review to identify studies regarding truck safety, in general, and underride guards, in particular, published from 1970 through 2018. We conducted a search for relevant peer-reviewed articles, government reports, trade and industry articles, and think tank publications. Key terms included various combinations of “underride,” “crash,” “collision,” and “guard.” We included those studies that were methodologically sound and covered underride crash data, guard technologies, and benefits and costs relevant to our scope. Additionally, we interviewed and analyzed the perspectives of government officials from DOT, the National Highway Traffic Safety Administration (NHTSA), the Federal Motor Carrier Safety Administration (FMCSA), and the National Transportation Safety Board. We interviewed officials from foreign transportation agencies—Canada and the European Union—that were selected based on our review of literature identified above and recommendations from preliminary interviewees. We also interviewed a variety of relevant non-governmental organizations to gain their perspectives on topics related to underride crashes and guards. These organizations represent a variety of key players in their respective fields on underride crash-related topics.
We grouped these entities into the following categories: (1) trailer manufacturers, (2) trucking industry organizations, (3) tractor-trailer fleets and related organizations, (4) traffic safety organizations, and (5) research organizations. We interviewed seven of the top eight trailer manufacturers in the United States, as identified by the Insurance Institute for Highway Safety. We requested an interview with Stoughton Trailers, but the company declined to participate. The organizations we contacted as part of this work are listed at the end of this section. We also interviewed NHTSA officials and conducted semi-structured interviews with officials in five selected states, including officials in five state departments of transportation and five state and two local police departments, to understand and identify limitations, if any, in how underride crash-related data are collected and analyzed. The results of these interviews are not generalizable to all states and localities; however, they offer examples of the types of experiences state DOTs and police have with underride crashes and inspections. We selected states based on several factors to identify states that were similar in highway traffic trends and large truck-related fatality rates, but collected underride crash data differently. Selection factors included highway vehicle miles traveled per state, total underride crash fatalities by state in 2016 as reported by NHTSA, and the presence of an underride crash data field on each state’s crash report form. Based on these factors, we selected and conducted interviews with state DOT and state police officials in California, Illinois, Indiana, Pennsylvania, and Tennessee. We also corresponded with officials from the Ohio DOT for clarification questions. We interviewed local police departments in Chicago, Illinois, and Terre Haute, Indiana.
To identify the data DOT reports on truck underride crashes, we analyzed existing DOT data on underride crashes and fatalities from 2008 through 2017, the 10 most recent years for which these data are available. We reviewed DOT documentation for policies and procedures on data collection and data reliability assessments for underride crash-related data. NHTSA fatality data came from the Fatality Analysis Reporting System (FARS). FARS is a census of all fatal traffic crashes in the United States that provides uniformly coded, national data on police-reported fatalities. We analyzed these data to determine the reported number of fatalities involving underride crashes. To assess the reliability of the FARS data, we reviewed relevant documentation and spoke with agency officials about the data’s quality control procedures. We determined that the data were sufficiently reliable for the purposes of this report, specifically to provide a high-level overview of underride crash fatalities within recent years. However, we did identify potential underreporting of underride crashes and fatalities, as discussed in this report. We also reviewed NHTSA’s annual Traffic Safety Facts reports—which use FARS data—to determine the annual number of traffic and large truck crash fatalities from 2008 to 2017, the 10 most recent years for which these data are available. We reviewed state crash report forms from all 50 states and the District of Columbia to understand the variability of underride crash-related data elements and how such variability could affect DOT’s data collection and analysis efforts. We compared NHTSA’s data collection efforts to federal internal control standards related to use of quality information. To describe the development and use of truck underride guard technologies in the United States, we reviewed research and documentation on underride guards. 
Primarily, we reviewed documents relating to underride guards from NHTSA and FMCSA, as well as information from traffic safety groups, trucking industry organizations, research organizations, and selected foreign transportation agencies. We reviewed NHTSA’s regulations requiring rear guards, FMCSA’s regulations requiring commercial vehicle inspections, DOT’s documentation on underride guard technologies, and DOT data on commercial vehicle inspections. To assess the reliability of DOT’s commercial vehicle inspection data, we reviewed relevant documentation and spoke with agency officials about the data’s quality control procedures. We determined that the data were sufficiently reliable for the purposes of this report, specifically to provide a high-level overview of commercial vehicle inspections within recent years. We compared DOT’s efforts to pertinent agency regulations on commercial vehicle inspections, federal internal control standards related to use of quality information, and a statement of federal principles on regulatory planning and review. We spoke with relevant non-governmental organizations to obtain their perspectives on the perceived benefits and costs of rear, side, and front underride guards, and the potential factors that may influence the benefits and costs. We conducted this performance audit from January 2018 to March 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Organizations Contacted

Appendix II: Comments from the Department of Transportation

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

Susan Fleming, (202) 512-2834 or flemings@gao.gov.
Staff Acknowledgments

In addition to the contact named above, Sara Vermillion (Assistant Director); Daniel Paepke (Analyst in Charge); Carl Barden; Jessica Du; Mary Edgerton; Timothy Guinane; David Hooper; Gina Hoover; Madhav Panwar; Joshua Parr; Malika Rice; Oliver Richard; Matthew Rosenberg; Pamela Snedden; and Michelle Weathers made key contributions to this report.
Why GAO Did This Study

Truck underride crashes are collisions in which a car slides under the body of a truck—such as a tractor-trailer or single-unit truck—due to the height difference between the vehicles. During these crashes, the trailer or truck may intrude into the passenger compartment, leading to severe injuries or fatalities. Current federal regulations require trailers to have rear guards that can withstand the force of a crash, whereas the rear guards required for single-unit trucks do not have to be designed to withstand a crash. There are no federal side or front underride guard requirements. GAO was asked to review data on truck underride crashes and information on underride guards. This report examines (1) the data DOT reports on underride crashes and (2) the development and use of underride guard technologies in the U.S. GAO analyzed DOT's underride crash data for 2008 through 2017; reviewed NHTSA's proposed regulations and research on new guard technologies; and interviewed stakeholders, including DOT officials, industry and safety groups, and state officials selected based on reported underride crash fatalities and other factors.

What GAO Found

According to crash data collected by police and reported by the Department of Transportation's (DOT) National Highway Traffic Safety Administration (NHTSA), fatalities from “underride” crashes, such as those pictured below, represent a small percentage of all traffic fatalities. From 2008 through 2017, an average of about 219 fatalities from underride crashes involving large trucks were reported annually, representing less than 1 percent of total traffic fatalities over that time frame. However, these fatalities are likely underreported due to variability in state and local data collection. For example, police officers responding to a crash do not use a standard definition of an underride crash and states' crash report forms vary, with some not including a field for collecting underride data.
Further, police officers receive limited information on how to identify and record underride crashes. As a result, NHTSA may not have accurate data to support efforts to reduce traffic fatalities. Underride guards are in varying stages of development, and gaps exist in inspection of rear guards in current use and in research efforts for side guards. NHTSA has proposed strengthening rear guard requirements for trailers (the rear unit of a tractor-trailer) and estimates about 95 percent of all newly manufactured trailers already meet the stronger requirements. Although tractor-trailers are inspected, Federal Motor Carrier Safety Administration annual inspection regulations do not require the rear guard to be inspected, so damaged guards that could fail in a crash may be on the roadways. Side underride guards are being developed, but stakeholders GAO interviewed identified challenges to their use, such as the stress on trailer frames due to the additional weight. NHTSA has not determined the effectiveness and cost of these guards, but manufacturers told GAO they are unlikely to move forward with development without such research. Based on a 2009 crash investigation, the National Transportation Safety Board (NTSB) recommended that NHTSA require front guards on tractors. NHTSA officials stated that the agency plans to complete research to respond to this recommendation in 2019. However, stakeholders generally stated that the bumper and lower frame of tractors typically used in the U.S. may mitigate the need for front guards for underride purposes. Regarding single-unit trucks, such as dump trucks, NTSB has recommended that NHTSA develop standards for underride guards for these trucks, but the agency has concluded these standards would not be cost-effective.
What GAO Recommends

GAO recommends that DOT take steps to provide a standardized definition of underride crashes and data fields, share information with police departments on identifying underride crashes, establish annual inspection requirements for rear guards, and conduct additional research on side underride guards. DOT concurred with GAO's recommendations.
Background

BIE Schools and the Federal Government’s Trust Responsibility

BIE’s Indian education programs derive from the federal government’s trust responsibility to Indian tribes, a responsibility established in federal statutes, treaties, court decisions, and executive actions. In 2016, the Indian Trust Asset Reform Act included congressional findings stating “through treaties, statutes, and historical relations with Indian tribes, the United States has undertaken a unique trust responsibility to protect and support Indian tribes and Indians...” In addition, “the fiduciary responsibilities of the United States to Indians also are founded in part on specific commitments made in treaties and agreements securing peace, in exchange for which Indians surrendered claims to vast tracts of land…” It is the federal government’s policy to fulfill its trust relationship with and responsibility to the Indian people for the education of Indian children by working with tribes toward the goal of ensuring that Interior-funded schools are of the highest quality and provide for the basic elementary and secondary educational needs of Indian children, including meeting the unique educational and cultural needs of these children.

Students with Disabilities in the BIE School System

Similar to students in elementary and secondary schools nationwide, some students in BIE schools have documented disabilities that require special educational or supplemental support. More than 6,000 students with disabilities, representing about 15 percent of total enrollment, attend BIE schools. Specific learning disabilities, such as perceptual disabilities, dyslexia, or impairments from brain injury, formed the most prevalent disability category among BIE students with disabilities in school year 2017-2018 (see table 1), affecting more than half of the students with disabilities at BIE schools.
Individualized Education Program

An IEP is a written statement for each child with a disability designed to meet the child’s individual needs under IDEA. IDEA requires that every child who receives special education services have an IEP. Before an IEP is developed, a child with a disability must be identified, located, and evaluated through a process known as Child Find. Generally, an adult familiar with the student’s abilities makes an official referral for a special education services evaluation. With parental consent, the student is then evaluated using a variety of assessment tools and strategies designed to help determine the student’s unique needs. Once a child is evaluated and determined to be eligible for special education and related services under IDEA, an IEP is developed describing the school’s delivery of required services to the child. IDEA regulations require that the services specified in a child’s IEP be provided to the child as soon as possible following the development of the IEP. Moreover, IDEA requires that a student’s IEP include, among other things, a projected date for the beginning of services and the anticipated frequency, location, and duration of those services. However, IDEA does not specifically address the steps that schools must take in cases where services are not provided in accordance with the anticipated service duration and frequency in the student’s IEP, such as cases where services were not provided at all or the duration was less than the amount of time specified in a student’s IEP. Educators are required to track the child’s academic progress over the school year and then annually review and update the IEP as needed at least once a year. IDEA requires schools to reevaluate children with IEPs at least once every 3 years to determine whether their needs have changed and if they still qualify for special education and related services under IDEA (see fig. 1).
Under IDEA, Interior receives funding to assist in the education of children with disabilities in BIE schools. BIE is responsible for meeting all IDEA requirements for these children, including that an IEP is developed and implemented for each eligible student and that the requirements of any identified education and related services are defined in the IEP. BIE policy requires that IEPs identify services for eligible students under two main categories: education services and related services. Education services include math, reading, and written expression, among others, while related services include occupational therapy, physical therapy, and speech-language pathology, among others, according to BIE’s policy. BIE also requires that IEPs include the type of provider for these services, such as a special education teacher for an education service, or a physical therapist for a related service, as well as information about the duration and frequency of the services to be provided (see fig. 2). BIE schools are required to develop and update students’ IEPs in the Native American Student Information System (NASIS), an online data management system the agency created in 2006 for all BIE schools to record and store a variety of student-related information, including special education data. BIE requires that schools document the special education and related services that their teachers or contracted providers deliver to students with IEPs, and Interior regulations require that schools maintain these and all other special education records for at least 4 years.

BIE Offices Responsible for Overseeing and Supporting Special Education at Schools

Multiple offices under the BIE Director are responsible for overseeing and supporting schools’ special education programs to help ensure that they comply with IDEA and other federal requirements for special education (see fig. 3).
The School Operations Division was established under the bureau’s recent reorganization to provide direction and assistance to BIE schools in education technology; human resources; communications; educational facilities; safety and facilities; and acquisition and grants. The division is also responsible for providing oversight over BIE school spending, including spending on special education. Sixteen agency field offices called Education Resource Centers are located across the BIE school system and are administered by three separate BIE divisions under the Chief Academic Officer: the Associate Deputy Director-Tribally Controlled Schools, the Associate Deputy Director-Bureau Operated Schools, and the Associate Deputy Director-Navajo Schools. The Centers are primarily responsible for providing oversight and technical assistance to schools in a variety of areas, including their academic programs, fiscal management, and compliance with IDEA. In particular, Interior regulations and BIE procedures require that BIE annually verify that all students with an IEP in the BIE system are provided with special education services in accordance with their IEPs. BIE’s Division of Performance and Accountability (DPA) is primarily responsible for overseeing Education-funded programs at BIE schools, including IDEA and Title I, Part A of the Elementary and Secondary Education Act of 1965, as amended. DPA’s primary oversight responsibilities involve monitoring schools’ implementation of these federal education programs. DPA also provides schools and other BIE offices with technical assistance and training on IDEA requirements, among other program areas. Since 2018, DPA and other BIE divisions have been responsible for working together in monitoring schools the agency considers high risk in administering federal education programs. 
Specifically, in May 2018, BIE established a new policy and guidance for conducting annual targeted, risk-based monitoring of BIE school programs, which is separate from the requirements for the agency to verify the provision of special education and related services annually. According to this policy, BIE is required to select a sample of 15 schools for this monitoring based on a variety of special education and other risk factors, including special education enrollment and unobligated IDEA funds. BIE’s policy requires that staff from five of its divisions—DPA, School Operations, and the three school divisions responsible for directly supporting BIE schools—coordinate and conduct joint monitoring activities as teams, including a review of schools’ special education programs. These teams are required to issue in-depth monitoring reports and technical assistance plans to schools within 30 days of an on-site monitoring visit.

Role of Education’s Office of Special Education Programs

Education’s Office of Special Education Programs (OSEP) awards funds to states and BIE, and provides assistance and oversight in their implementation of IDEA. BIE, as with states, is required to report certain compliance information to OSEP. OSEP, in turn, determines BIE’s performance and level of compliance with IDEA and provides assistance to BIE to improve in specific areas. Over the past 8 years, OSEP has found significant problems with BIE’s implementation of IDEA, which in 2019 prompted OSEP to withhold a portion of BIE’s IDEA funds. OSEP issued a determination letter in July 2019 that stated BIE needed intervention in implementing the requirements of IDEA because of its long-standing noncompliance and repeated failure to follow through on OSEP’s required corrective actions, among other issues. BIE had been in “needs intervention” status for each of the last 8 years.
As a result of BIE’s continued noncompliance, OSEP in July 2019 withheld 20 percent, or about $780,000, of BIE’s fiscal year 2019 IDEA Part B funds reserved for administrative costs, an action OSEP has taken very infrequently. OSEP provided BIE notice and an opportunity for a hearing, but BIE did not appeal the withheld funds. OSEP’s activities in overseeing BIE’s implementation of IDEA included investigating special education services at one BIE school in 2018. As a result of the investigation, in early August 2018, OSEP sent a letter to the BIE Director about its findings, including that some students at one BIE-operated school had not received services required in their IEPs, including speech-language therapy and physical therapy, for almost a year because service contracts with providers had expired. The letter notified BIE that failure to provide services in a student’s IEP violated the IDEA requirement that a free appropriate public education be made available to all eligible students with disabilities. OSEP’s investigation also determined that six other BIE-operated schools were under the same contracts and may not have delivered IEP-required services to students. OSEP’s August 2018 letter required BIE to take several corrective actions within 30 days, including determining whether other schools had IEP service disruptions. In addition, the letter required that BIE develop a plan by the end of October 2018 to prevent contractual problems that could result in a similar disruption of services in the future. As of February 2020, BIE had not notified OSEP that it had completed those corrective actions. OSEP’s oversight of BIE also included visiting BIE schools and agency offices in spring 2019 to examine BIE’s accountability system for IDEA. OSEP presented its findings and corrective actions to BIE in a letter and monitoring report in October 2019.
OSEP found that BIE did not have policies and procedures for implementation of IDEA Part B at its schools, and that school officials wanted guidance on IDEA requirements from BIE. OSEP also found evidence of a systemic problem with service providers. For example, officials OSEP interviewed at one school said they had not had a physical therapist during the entire 2018-2019 school year and did not have a school counselor the previous year. Such staff were required in order to provide services in accordance with students’ IEPs. The corrective actions detailed by OSEP in its October 2019 letter to BIE were to be completed within 90 days, including that BIE develop a plan and timeline for adopting policies and procedures for implementing IDEA. The bureau, however, requested a 60-day extension, which OSEP granted, moving the required date of completion for BIE’s actions to early spring 2020.

Prior GAO Work on Indian Education

Our prior work on Indian education found numerous weaknesses in BIE’s management and oversight of BIE schools, including problems with monitoring school spending and conducting annual safety and health inspections of school facilities. As a result of these and other systemic problems with BIE’s administration of Indian education programs, we added Indian education to our High Risk List in February 2017. In our 2019 High Risk update, we found that BIE had made progress in addressing some of these key weaknesses in Indian education, such as demonstrating leadership commitment to change. We reported, however, that the agency needed to show progress in other key areas, such as increasing its capacity to support functions related to administering and overseeing BIE schools.
BIE Schools Did Not Provide or Did Not Account for Almost 40 Percent of Students’ Special Education Service Time, According to School Documentation

BIE schools did not provide an estimated 20 percent of special education service time to their students during a 4-month period between October 2017 and February 2018, and they did not provide documentation for another 18 percent of service time. Schools frequently did not include reasons for missing services in their service logs, and their practices for whether to make up these services varied. Further, some schools provided no documentation for one or more services, while many schools provided documentation that lacked key information. Difficulties in identifying special education and related service providers, especially in remote areas, limited some schools’ ability to provide services to students.

Schools Did Not Provide Students with an Estimated 20 Percent of Special Education Service Time and Did Not Account for Another 18 Percent

We estimate that BIE schools either did not provide or did not account for 38 percent of the time for the special education and related services required by students’ IEPs, according to our analysis of school documentation. Specifically, we found that schools provided an estimated 62 percent of the service time specified in their students’ IEPs (see fig. 4). Of the service time remaining, we found that schools did not provide an estimated 20 percent of service time to students, and they did not provide any documentation for an additional 18 percent of such service time. When schools did not provide documentation, we were unable to determine whether services were delivered to students. Our analysis was based on a review of service logs at 30 BIE schools during a 4-month period between October 2017 and February 2018 for a nationally representative sample of students with IEPs.
Of the students who clearly did not receive service time, according to school service logs, three students at one school received no service time at all during the period of our 4-month review. Officials at the school told us that the special education teacher responsible for providing these services did not fulfill her responsibility to provide services to these students and eventually left the school. They added that the school did not have other qualified staff to provide the services during the period of our review.

School Documentation Frequently Did Not Include Reasons for Missing Services, and Schools’ Practices for Whether to Make Up Services Varied

Our analysis of school service logs found that an estimated one-quarter of the services that were missed did not have a reason listed in the logs, and as a result, we could not determine why the service was not delivered. Of the remaining estimated three-quarters of services that were missed, the top three reasons for missed services were student absences, school-sponsored activities (such as field trips), and provider absences (see fig. 5). BIE requirements do not specify that school service logs must include reasons for missed services. We also found that the schools in our sample did not follow consistent practices for whether to make up regularly scheduled services that are missed. Based on our outreach to officials at the schools in our sample, 23 of the 30 schools that responded varied in their practices for whether to make up services that were missed for reasons including school-sponsored activities or unplanned school closures, such as snow days (see fig. 6). In addition to information about their practices for whether missed special education services are expected to be made up, school officials also provided us with written responses about other factors that may influence this decision.
For example, an official at one school responded that while providers of related services are expected to make up missed services when providers are absent, education service providers are not. Alternatively, an official at another school responded that all of the school’s service providers are responsible for finding a way to provide the IEP-required services regardless of the reason for missed service. Additionally, we found that for schools that expect providers to make up missed services, timeframes for doing so varied considerably, based on written responses we received from schools. Specifically, while some school providers typically make up services within a week of the missed service, others aimed to provide them within a month or longer. One school official responded that related services—such as occupational therapy, physical therapy, and speech and language—may not be made up until the following summer, which could potentially result in a delay of up to 9 months if services are missed at the beginning of the school year. BIE does not have official requirements on whether and when schools should make up missed services, and BIE officials provided schools with inconsistent information on this issue. For example, information provided to us by BIE’s Division of Performance and Accountability (DPA) shows that officials advised schools on one occasion that making up missed services is required only when they occur because the provider is not available, but on another occasion advised schools that all missed services should be made up. Further, one official in another BIE office that oversees and supports tribally controlled schools advised schools that making up services is not expected when they are missed due to school-sponsored activities or testing. In contrast, another official with the same division advised schools that services should always be made up regardless of the reason they were missed. 
While IDEA does not specifically address the steps that schools must take when services are not provided in accordance with the service duration and frequency in the student's IEP, Education officials said that IDEA does not preclude state educational agencies, including BIE, from establishing their own requirements in this area, as long as they are consistent with IDEA requirements. We found that at least four state educational agencies, including Maryland, New York, North Dakota, and the District of Columbia, have done so. IDEA requires that schools provide special education and related services to eligible students as outlined in their IEPs. However, because BIE schools follow inconsistent practices for whether to make up services when missed, and BIE has not established consistent requirements in this area, there is a risk that some schools may not be providing services in accordance with students' IEPs. As noted previously, we found that schools did not provide or did not document almost an estimated 40 percent of students' service time, based on our review of service logs. Missed services may delay students' progress and increase the risk that they are not receiving a free appropriate public education as required under IDEA.

Some Schools Provided No Documentation for One or More Services, While Many Schools Provided Documentation That Lacked Key Information

In our generalizable analysis of service logs, we found that for an estimated 18 percent of service time, schools were not able to show whether education and related services were provided to students with IEPs because school service logs were either missing or incomplete. No service logs were provided by schools for 12 of the 138 students in our sample, and incomplete logs were provided for another 51 of the students. By school, 6 of the 30 schools in our sample did not provide any logs for at least one student, and 18 of the remaining 24 schools were missing a portion of the logs.
The lack of service logs prevented us from determining whether some students were provided their required IEP services. In addition, we found that many schools' service logs lacked key information. In particular, service logs frequently omitted or did not clearly indicate service duration and frequency. This information is important for determining whether a school has provided services in accordance with a student's IEP. Key areas in which service logs varied included:

Service duration and frequency: IEPs are required by BIE to specify the weekly frequency and duration of services throughout the year. However, the service logs we reviewed often did not include both types of information. About one-quarter of the service log entries did not indicate the number of minutes provided, according to our statistical analysis. We estimate that about one-fifth included total minutes but did not clarify how many times the service was provided. Just over half of the service log entries included both the duration and frequency of each service.

Individual vs. combined service entries: Eleven of the 30 schools in our sample provided us with service logs that grouped multiple services together without indicating the specific amount of time or the number of sessions for each service per week. As a result, when these schools recorded that less time was provided, we were unable to identify which of the services were missed. For example, one student was to receive five 60-minute sessions each of reading, written expression, and math per week, according to the student's IEP. The student's service log recorded the total number of minutes provided in a day but did not specify which services were provided (e.g., 540 minutes were provided, of a total 900 minutes per week).
Based on this information, we could infer from the shortage of total minutes provided that some services were missed, but we were unable to determine whether the student missed reading, written expression, math, or a combination of all three services.

School officials responsible for completing service logs: Service logs were completed by different types of staff across schools, including paraprofessionals, service providers, and special education coordinators.

School practices in documenting special education services varied widely because BIE has not established a standardized process for doing so. BIE officials told us the agency is currently developing a system to standardize how schools document services using a new online module within the Native American Student Information System. Officials provided documentation showing that they were developing this system to allow schools to consistently document both education and related services. BIE's system, once fully implemented, may allow the agency to monitor and verify service provision more effectively and improve the consistency of schools' documentation of services. BIE plans to fully implement the system and provide schools with the requisite training to use it by late 2020, according to agency documentation.

Difficulties in Identifying Special Education Providers, Especially in Remote Areas, Limited Some Schools' Ability to Provide Services to Students

Officials at 22 of the 30 schools in our sample provided us with information in addition to their service logs, and all 22 reported difficulties in recruiting, hiring, or retaining staff or contractors with the required qualifications to provide special education and related services. Some said these difficulties limited their ability to provide students with high-quality services as required.
In written responses and interviews we conducted, school officials cited school size and remote location as constraints to recruiting, hiring, or retaining qualified service providers. In particular, while schools often rely on contractors to provide related services, such as occupational and physical therapy, officials at 10 of the 30 schools in our sample reported that the availability of qualified contractors was limited. Education services, which are typically provided by school special education staff, were required for nearly all students with IEPs in our sample. Some school officials said in interviews and written responses that in some cases students did not receive education services because their schools did not have any qualified staff to provide them, or did not have enough. For example, according to a BIE official, one BIE school reassigned its only special education teacher to fill a vacant science teacher position and did not provide required services to 18 students during the 2018-19 school year. In another example, one school reported that it did not have qualified staff to provide services to two students with IEPs for 12 consecutive weeks during the 2017-2018 school year. Officials said the school was unable to find a substitute special education teacher, and as a result, each student missed about 5 hours of service time per week during this period. An official at another school said that the school had advertised a special education teacher position for three years and that the position remained vacant. These examples illustrate challenges with hiring and retaining special education staff that may exist more broadly across the country. For example, according to recent Education data, 43 states reported shortages of special education providers in the 2018-2019 school year. However, promising practices may be found within the BIE system, as well as across the states, that could provide BIE schools direction in addressing shortages of special education providers.
For example, two BIE schools recruited and hired special education staff through international work exchange programs meant to facilitate the employment of qualified teachers from other countries. Some schools also reported using outreach to other local BIE or public schools to find and share contractors. Further, OSEP has developed resources for addressing special education teacher shortages that it has made available to states and school districts. In particular, in 2019 OSEP hosted a series of online symposia on general strategies and best practices for schools to attract and retain effective special education personnel. These sessions featured experts and practitioners who discussed strategies for attracting and retaining personnel. Such strategies and other relevant state and tribal resources for addressing special education teacher shortages could provide BIE with additional support to address its own challenges in this area. BIE has not taken steps, however, to establish a mechanism, such as a community of practice, to identify and communicate promising practices for schools, especially those in remote locations, to address their special education staffing and contracting challenges. BIE’s advisory committee on special education stated in its 2018 annual report that BIE needed to better support the recruitment of special education and related service providers at BIE schools. Further, BIE’s 2018-2023 strategic plan has a goal of supporting schools by identifying and sharing best practices and collaborating with schools to recruit, hire, and retain highly effective staff. In addition, federal standards for internal control state that agencies should select an appropriate mechanism for communicating externally. Without greater support from BIE, some schools will continue to struggle to find the special education staff and contractors they need, and students at these schools may not receive the special education services they need to thrive academically. 
Limited Monitoring and Technical Assistance Hampered BIE's Oversight and Support for Special Education at Schools

Limited monitoring and technical assistance have hampered BIE's oversight and support for special education at BIE schools. BIE did not verify the provision of special education and related services at about 30 percent of its schools in school year 2018-2019 due to limited oversight by its largest division. Additionally, BIE has not provided high-risk schools with timely reports after monitoring visits so schools can address their noncompliance with IDEA requirements. Further, staff in BIE's Education Resource Centers often lack expertise in special education, and school personnel did not always know which agency staff to contact for special education support.

BIE Did Not Verify the Provision of Services at About 30 Percent of Its Schools in School Year 2018-2019 Due to Limited Oversight by Its Largest Division

BIE did not verify the provision of special education and related services at about 30 percent of its schools in school year 2018-2019, according to available agency documentation. Interior regulations, however, require that BIE annually review all schools' documentation to verify the provision of special education and related services for every eligible student, among other things. BIE's guidance for conducting these reviews specifically directs reviewing personnel to verify that students with active IEPs are receiving timely services as indicated on their IEPs. However, the BIE division that oversees about half of all BIE schools, which is led by the Associate Deputy Director-Tribally Controlled Schools, established a policy for its staff to verify provision of services at only a third of its assigned schools each year. The two other divisions, which oversee BIE-operated and Navajo schools, respectively, reported that they conducted reviews at 100 percent of their schools in school year 2018-2019.
The Associate Deputy Director-Tribally Controlled Schools, who authorized this policy, told us that she believed the policy complied with Interior regulations. However, Interior's Office of the Solicitor told us that this policy does not comply with Interior's regulations. BIE officials said the Office of the Associate Deputy Director-Tribally Controlled Schools established this policy to reduce the number of schools the division annually verifies because of the division's limited staff capacity. Six of 13 staff positions in this division with roles in overseeing or supporting special education were vacant as of February 2020, according to BIE documentation and a senior official. Although BIE developed a strategic workforce plan in 2019 with a goal of addressing staffing shortages across the bureau, the plan does not include a strategy or timeframe for filling vacancies in positions with responsibilities to oversee and support special education at its schools. BIE's verification of special education and related services at schools has identified noncompliance with federal requirements. For example, according to BIE, a recent verification visit at one school identified numerous irregularities in its special education documentation, which prompted the school's superintendent to request that BIE conduct a formal investigation. BIE investigators reported that school staff had falsified service records showing that services were provided when a teacher was not present, and that services were recorded in multiple and inappropriate settings (e.g., math services recorded at the same time and date as reading, physical education, and science periods), among other things. As a result, BIE required several corrective actions from the school. As this example illustrates, the verification process provides BIE with an important oversight mechanism. This mechanism, however, is not being fully utilized by BIE's largest school division.
Without BIE annually reviewing documentation to verify the provision of special education for every student at all schools, the agency cannot ensure that students are receiving the services required by their IEPs.

BIE Has Not Provided High-Risk Schools with Timely Reports to Address Their Noncompliance with IDEA

BIE conducted high-risk monitoring of 14 schools in school year 2018-2019, but did not provide the schools with timely monitoring reports and technical assistance plans for their compliance with IDEA and other federal education program requirements. In addition to its annual process of verifying that students with IEPs are receiving required special education and related services, BIE also conducts targeted oversight of schools it deems high risk. BIE's high-risk monitoring policy, established in May 2018, requires that it select a sample of schools based on risk indicators related to IDEA and other federal education programs, and provide schools with in-depth monitoring of their special education and other education programs. Nine of the 15 schools selected for BIE's 2018-2019 high-risk monitoring were selected because BIE considered them to be at a higher risk in administering special education. The factors that BIE considered included a large enrollment of students with IEPs and a significant amount of unobligated IDEA funds, among others. One school, for example, had not obligated about 50 percent of its IDEA funds within the timeframe required by IDEA. BIE's monitoring policy requires that it provide both monitoring reports and technical assistance plans to schools within 30 days of a visit. However, BIE sent schools visited in the 2018-2019 school year their monitoring reports in late August 2019, well after its required 30-day timeframe and several weeks after we requested the reports as part of this review.
For example, BIE sent two school reports more than 8 months after its monitoring visits, and another two school reports more than 6 months after visits (see fig. 7). Further, the reports sent to schools were not accompanied by the technical assistance plans required by BIE policy, which are to outline how BIE will assist schools in addressing findings of noncompliance. BIE officials said that a timeframe for when the plans would be developed and issued to schools had not been established. BIE officials told us the late monitoring reports and the lack of technical assistance plans for schools resulted from BIE not fully implementing its 2018 high-risk monitoring policy. Officials said the monitoring policy requires monitoring teams to be composed of staff from five BIE divisions: DPA, School Operations, and the three divisions responsible for directly supporting BIE schools. These staff work together to monitor special education and other school programs and develop reports and technical assistance plans for schools. However, BIE officials said that four of these divisions did not contribute staff to lead the monitoring teams, leaving the task of developing monitoring reports to a single division, DPA. DPA officials told us that developing such plans requires the knowledge, expertise, and coordination of staff across all five BIE divisions. They said that without participation from the other divisions, it is unlikely the plans will be developed and sent to schools because DPA itself does not have the staff capacity to do so. BIE officials told us they were aware of problems with coordination on high-risk monitoring across the five divisions and were considering how to make improvements, but did not provide a timeframe for doing so.
BIE’s monitoring reports and technical assistance plans are intended to provide high-risk schools with important information about their compliance with IDEA and other federal education funding programs, according to agency documentation. Each of BIE’s monitoring reports for the 14 schools in 2018-2019 included multiple findings of school noncompliance with special education requirements under IDEA or Interior regulations. Specifically, monitoring reports for several schools included findings related to their provision of special education services. For example, one report found that a school maintained no service logs and was not able to demonstrate it had provided any services to students. Without timely monitoring reports, schools lack vital information to address areas of noncompliance, including ensuring that staff and contractors provide and document special education services as required. Further, without the technical assistance plans that BIE policy states are to accompany monitoring reports, schools may not know what BIE resources are available to them for addressing specific special education compliance issues. Staff in BIE’s Education Resource Centers Often Lack Expertise to Oversee and Support Schools’ Special Education Programs Staff in BIE’s Education Resource Centers often do not have sufficient expertise on special education to provide appropriate oversight and technical assistance to schools, according to BIE officials. Staff in Education Resource Centers have special education-related responsibilities that include annually verifying that schools are providing special education services and assisting schools when compliance issues with federal special education requirements are identified or when schools request help. Several BIE officials, however, told us these staff often do not have the requisite knowledge about special education to effectively carry out these responsibilities. 
For example, two senior BIE officials said these staff do not consistently have the expertise required to review documentation on service provision. A staff member at one Education Resource Center said she and her colleagues often do not know what questions to ask school officials during site visits to verify their provision of special education services. Additionally, several officials told us that these staff often do not have the expertise to provide technical assistance to schools on special education. One official said these staff often provide incorrect information to schools because of their lack of expertise. Officials from two schools also told us that some Education Resource Center staff with special education responsibilities do not have sufficient expertise to oversee and assist them with their special education programs. Several BIE officials said Education Resource Center staff need additional training in special education to more effectively carry out their responsibilities. Federal standards for internal control state that agencies should develop staff competencies, including knowledge, skills, and abilities, to achieve agency objectives. However, BIE has not ensured that Education Resource Center staff have the requisite competencies to oversee and support schools' special education programs because it has not established special education training requirements. Without establishing such requirements and ensuring they are met, staff may not be effective in overseeing and assisting schools with their special education programs, including ensuring that students with IEPs receive required services.

School Personnel Did Not Always Know Which Agency Staff to Contact for Support with Their Special Education Programs

School officials said they did not always know which BIE staff to contact for support with their special education programs.
Staff in BIE’s Education Resource Centers are responsible for regular outreach to schools about these programs, according to two senior BIE officials. However, officials we interviewed from some schools expressed confusion about the roles and responsibilities of various BIE offices and staff responsible for special education or said there has been a lack of outreach from Education Resource Center staff. For example, the special education coordinator at one tribally controlled school said she had received no information about which Education Resource Center was responsible for supporting her school. Several BIE officials acknowledged that schools do not always know which Education Resource Centers are responsible for supporting them. One senior BIE official also said that some schools are not aware that they can reach out to BIE for assistance with their special education programs. BIE’s 2015 Communications Plan prioritizes regular communication with schools to provide key information and important developments affecting their schools. However, BIE officials said Education Resource Center staff do not consistently reach out to inform schools about how they can support schools’ special education programs. Additionally, as part of its recent reorganization, BIE shifted the roles and responsibilities of many offices and staff, including those responsible for supporting special education at schools. Without BIE taking steps to ensure its Education Resource Center staff communicate with all schools regarding their roles and responsibilities on special education, these staff may not consistently do so. As a result, schools may not know whom to contact for answers to questions, which could hinder their ability to provide effective special education services to students. Conclusions The purpose of IDEA is to fulfill the promise that all children with disabilities have available to them special education and related services designed to meet their unique educational needs. 
In exchange for the funds it receives from Education to implement IDEA, BIE must ensure that such an education is available to all of its students with disabilities. The potential for students with disabilities at BIE schools to advance academically depends, in part, on the ability of BIE to oversee and support schools in providing these students with the special education and related services required by their IEPs under IDEA. It is unclear, however, whether all BIE schools are meeting these students’ needs and ensuring that required services are consistently delivered because schools follow different practices for determining whether to make up services for students when they are missed. Further, the challenges that schools face in obtaining qualified special education staff and specialists to provide services—which may also exist for public schools nationwide— also present BIE with an important opportunity to partner with knowledgeable stakeholders and provide direction in this area. BIE also needs to address persistent administrative capacity issues in special education—such as vacancies and a need for training in key agency offices. In addition, BIE should ensure that relevant offices are reaching out to schools to inform them of their roles in overseeing and supporting schools’ special education programs. Finally, BIE must take steps to make sure its offices annually review school documentation to verify that students are receiving special education and related services and provide high-risk schools selected for targeted monitoring with timely reports and technical assistance plans. In addition to IDEA’s requirement that special education services be provided to all eligible students with disabilities, BIE also has a responsibility to work towards the goal of ensuring that BIE schools are of the highest quality and provide for their students’ unique educational needs. 
Without taking steps to address weaknesses in key areas of special education, BIE cannot ensure that the schools it funds are meeting their responsibilities under IDEA or addressing the unique needs of more than 6,000 BIE students with disabilities.

Recommendations for Executive Action

We are making the following seven recommendations to BIE:

The Director of BIE should establish consistent requirements for schools on making up missed special education and related services and monitor schools to ensure that they follow these requirements. (Recommendation 1)

The Director of BIE should work with knowledgeable stakeholders in Indian education to establish a community of practice or other formal mechanism to identify and disseminate promising practices for schools, especially those in remote locations, on recruiting, hiring, and retaining special education teachers and contracting with providers. The Director of BIE could consider conferring with BIE's special education advisory committee, OSEP, and relevant tribal and state education officials in addressing this recommendation. (Recommendation 2)

The Director of BIE should rescind the policy of its division overseeing tribally controlled schools that does not meet Interior's requirement to annually review all schools' documentation to verify the provision of services for every special education student, and ensure that all divisions comply with this requirement. (Recommendation 3)

The Director of BIE should update the agency's workforce plan to include a strategy and timeframe for filling vacant staff positions responsible for overseeing and supporting schools' special education programs. (Recommendation 4)

The Director of BIE should fully implement the agency's high-risk monitoring policy for IDEA and other federal education programs, including requirements for agency-wide coordination, and ensure that schools selected for such monitoring receive reports and technical assistance plans within 30 days of agency on-site visits, as required by BIE policy. (Recommendation 5)

The Director of BIE should establish special education training requirements for staff in the agency's Education Resource Centers who are responsible for supporting and overseeing schools' special education programs, and ensure that staff complete those training requirements. (Recommendation 6)

The Director of BIE should take steps to ensure that all of the agency's Education Resource Centers conduct outreach with schools to inform them of their new roles in overseeing and supporting schools' special education programs under BIE's reorganization. (Recommendation 7)

Agency Comments

We provided a draft of this report to the Departments of the Interior (Interior) and Education (Education) for review and comment. Interior provided formal comments, which are reproduced in appendix II, agreeing with all seven recommendations and describing actions BIE plans to take to address them. Education provided technical comments, which we incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretaries of the Interior and Education and interested congressional committees. The report will also be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (617) 788-0534 or emreyarrasm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report.
GAO staff who made key contributions to this report are listed in appendix III.

Appendix I: Objectives, Scope, and Methodology

Our report examines (1) the extent to which eligible Bureau of Indian Education (BIE) students with disabilities are provided the special education and related services required by their individualized education programs (IEP); and (2) the extent to which BIE oversees and supports the provision of these services at its schools.

Analysis of Special Education and Related Service Provision at BIE Schools Based on Generalizable Sample

Sample Design

To obtain a generalizable sample of students, we defined our target population as all students at BIE schools with an active IEP covering a full 5-month period between September 2017 and February 2018, and we obtained an electronic listing of IEPs for the 2017-2018 school year (the most recent complete school year at the time of our analysis) extracted from the Native American Student Information System (NASIS). We used these data as a basis to define a sample frame and identified 2,904 unique students with an active IEP for the full period from 169 BIE schools. We assessed the reliability of these data by interviewing knowledgeable agency officials and reviewing technical documentation describing the methodology, assumptions, and inputs used to produce the IEP-related data we received from BIE, upon which we created our generalizable sample. We determined these data to be sufficiently reliable for the purposes of our report. We selected a random two-stage cluster sample of 30 BIE schools and 150 students (about 5 per school) who had at least one active IEP covering the full period from the sample frame of 169 schools and 2,904 students. We chose a two-stage sampling approach to limit the number of schools we would need to coordinate with to collect the required school-level data.
Because the number of unique in-scope students ranged between 2 and 88 per school, we chose to select schools with probability proportional to size. We computed the target sample sizes of 30 schools and 150 students (about 5 per school) using estimated standard errors of student age that accounted for the additional variance resulting from the complex sampling approach (two-stage cluster sample) for various sample sizes. We then compared the change in standard errors for various sample sizes of schools and students to those from a simple random sample of size 150. Based on these results, we observed that the decrease in standard errors began to level out at a sample size of 30 schools (n=30) and that selecting more than 5 students (m=5) per school would not significantly decrease the standard errors. To estimate the likely margin of error we expected to achieve from this sample, we conducted a simulation of 10,000 samples of 30 schools and 150 students and examined the distribution of outcomes from these results for 3 proportion estimates. The proportion estimates were designed to provide a range of variance outcomes. Based on this simulation of possible results, we expected this sample design to generate percentage estimates to the sample frame (full population) of students with an overall precision of about plus or minus 12 percentage points or fewer. During our review, we learned that one school selected in our sample was under a BIE internal investigation into irregularities in the school’s special education documentation. As a result, we removed the five students at this school from the sample and added an additional randomly selected school as a replacement. In all, we completed our analysis for 30 of the 31 schools that we sampled. Additionally, we found that a number of students selected within schools were out of the scope of our defined target population, such as when a student transferred to another school during our review period.
When possible, we selected additional cases to account for the out-of-scope students. The final sample included 138 students at 30 schools. Based on the final sample of students, we completed our analysis for 96.5 percent of the students that we sampled that were within the scope of our defined target population. We defined the primary unit of analysis as the student and generated estimates at the student level summarized across 17 of the 18 weeks in the time period of our analysis (between October 2, 2017, and February 2, 2018). We chose not to include data collected for the school week from December 25, 2017, through December 29, 2017, because most schools either did not provide services during this week or were closed. We collected and analyzed the data for students’ scheduled services on a weekly basis. The data collection at this level resulted in multiple, repeated observations for each student. For the purposes of generating weighted, generalizable estimates, these data were summarized at the student level for each service type. The sampling weights were computed at the student level so that estimates from this sample will be made to the population of students. The student weight, which is the inverse of the probability of selection, was computed by combining a stage 1 (school) weight and stage 2 (student within selected schools) weight that each accounted for the probability of selection at each stage. The final student weights varied slightly from school to school based on the number of students selected within each school. The final student weights ranged from 16.13 to 24.20, and most were 19.36.

Document Collection

We conducted a test run of our document collection and analysis methodology at one BIE-funded school to determine the feasibility of collecting and analyzing school service logs in electronic form.
Based on the successful results of the test run, we concluded that this methodology would allow for the collection and analysis of service logs from our sample of schools. We then requested electronic copies of IEPs and any applicable IEP amendments from BIE for the students in our sample. We followed up with BIE on any issues of unclear or missing IEP documentation. After compiling IEPs for the students in our sample, we requested service logs from our sample schools and requested confirmation of key information in students’ IEPs (e.g., the type, duration, and frequency of services for our review period).

School File Review and Coding

To generate a data set based on schools’ service logs, we coded, by week, information contained in all service logs using a coding scheme that specified type of service (i.e., education vs. related), frequency of services received, duration of services received, and reasons for missed services. To determine the baseline of minutes and frequency for each service, we calculated the duration and frequency of services required in student IEPs and removed service duration and frequency on days that schools were not in session according to school calendars. In cases in which schools did not provide us with service logs for part or all of our review period, we were not able to determine whether the services were received. In such cases, we recorded these minutes in a separate category, labeled “service time not accounted for.” In a small number of instances, schools recorded service log entries, but unclear notation prevented us from being able to determine whether the service was provided. This accounted for less than half of a percent of service time.
Because the information contained in school service logs is self-reported by school personnel or service contractors, we were not able to assess the overall accuracy of this information, such as whether services were actually provided—a limitation that generally applies to research relying on self-reported information. We conducted extensive follow-up with schools, however, to ensure the most complete data collection possible and contacted them when further information or clarification was needed to understand service log entries. Additionally, we obtained student attendance data from BIE to compare with entries in service logs from four schools. As the result of this comparison, we removed one student from our sample whose attendance data showed significantly higher absences than were reflected in school service logs. In many cases, we received service logs that did not convey complete information about some aspects of service provision. For example, some logs used non-numerical notation to show that services were provided, such as checkmarks. In these cases, we assumed that a checkmark indicated that one full service was provided and recorded the number of minutes in a typical service. Additionally, some service logs combined multiple services (e.g., 60 minutes of math, 30 minutes of reading, and 30 minutes of writing) into one log and recorded the total number of minutes that services were provided within a week. As we could not determine which services were expected on which days within a week, we adjusted minutes and frequency for combined services when schools were not in session by prorating the weekly totals accordingly. To collect information on reasons for missed services, we categorized recorded reasons into the following groups: (1) student absence; (2) student disciplinary action; (3) provider absence; (4) provider administrative duties; (5) unplanned school closure; (6) school-sponsored activities; (7) testing; and (8) reason not provided. 
We recorded missing service logs as a separate category (“service time not accounted for”) and did not include them in our analysis of reasons for missed services.

Generalizable Results Based on the Sample

Estimates from this sample are generalizable to the estimated in-scope population of about 2,600 (+/- 130) students with at least one active IEP covering the period from September 1, 2017, through February 1, 2018. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval. This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. All estimates in this report have a confidence interval with a margin of error of plus or minus 12 percentage points or fewer, unless otherwise noted.

Non-Generalizable Information Collected from Sample Schools

In addition to the generalizable data we collected on schools’ special education service provision, we asked school officials to respond to an optional set of questions on the challenges schools face, if any, in providing services. Eighteen of the 30 schools in our sample provided responses. Of the schools that did not respond, we obtained information on challenges with service provision from four additional schools during our site visits, which are described below. Together, we obtained perspectives about the challenges schools face in special education service provision from a total of 22 of the schools in our sample. We also requested information from schools about the circumstances under which providers are expected to make up missed special education services, and the timeframe in which these make-up services are expected. Twenty-three of the 30 schools in our sample provided a response.
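As a simplified illustration of the weighting and precision logic described in this appendix, the sketch below computes a two-stage student weight (the inverse of the combined school and student selection probabilities) and an approximate 95 percent margin of error. The per-school student count and the design effect are hypothetical values chosen for illustration; only the frame totals (169 schools, 2,904 students) and the sample sizes come from the methodology above, and this is not GAO's actual computation.

```python
# Illustrative sketch of two-stage sampling weights and a 95 percent
# margin of error; figures marked "hypothetical" are not from the report.
import math

def student_weight(p_school: float, p_student: float) -> float:
    """Weight = inverse of the overall selection probability, combining the
    stage 1 (school) and stage 2 (student within school) probabilities."""
    return 1.0 / (p_school * p_student)

# Hypothetical example: a school holding 40 of the 2,904 in-scope students,
# selected with probability proportional to size across 30 draws, with
# 5 of its 40 students then sampled at stage 2.
p_school = 30 * 40 / 2904      # stage 1: PPS selection probability
p_student = 5 / 40             # stage 2: students within the school
w = student_weight(p_school, p_student)   # about 19.36, the typical weight

def margin_of_error(p_hat: float, n: int, design_effect: float = 1.5) -> float:
    """95 percent margin of error for an estimated proportion, inflated by an
    assumed design effect to reflect the clustered two-stage design."""
    se = math.sqrt(p_hat * (1 - p_hat) / n) * math.sqrt(design_effect)
    return 1.96 * se

moe = margin_of_error(0.38, 138)   # a proportion estimate from n = 138 students
```

With probability-proportional-to-size selection and a fixed number of students per school, the weight simplifies to the frame total over the sample size (2,904 / 150 = 19.36), which is why most final weights take that value.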
Site Visits

To help inform both of our research objectives, gather additional information about schools’ special education programs, and explore issues related to their provision of special education and related services, we conducted site visits to seven schools in our sample located in New Mexico (4 sites) and Arizona (3 sites), selected for their large numbers of BIE-funded schools. Our criteria for selecting schools included special education student enrollment size, whether a school was operated by BIE or a tribe, and tribal affiliation. At each site, we gathered information from participants—including school administrators and teachers—using semi-structured interview questions. We collected information on school staff’s roles and responsibilities in administering and overseeing special education; policies, practices, and any challenges to providing and documenting special education and related services; and perspectives on guidance and support, if any, from relevant BIE offices. Our site visits also included meetings with BIE officials in Albuquerque, New Mexico, and Window Rock, Arizona. Our interviews with officials focused on their roles and responsibilities in overseeing and supporting schools’ special education programs; staff capacity; intra-agency coordination on special education; policies and procedures related to special education monitoring; and their views on factors, if any, that may affect schools’ ability to provide special education and related services to students with IEPs.
Interviews and Reviews of Relevant Documents

To inform both research objectives, we also interviewed officials in several BIE offices with responsibilities for overseeing and supporting schools’ special education programs, including: the Office of the Director; the Division of Performance and Accountability; the Office of the Associate Deputy Director-Tribally Controlled Schools; the Office of the Associate Deputy Director-Bureau Operated Schools; and the Office of the Associate Deputy Director-Navajo Schools. Our interviews with agency officials focused on their roles and responsibilities in overseeing and supporting schools’ special education programs; staff capacity; intra-agency coordination on special education; policies and procedures related to special education monitoring; and their views on factors, if any, that may affect schools’ ability to provide special education and related services to students with IEPs. We compared BIE’s oversight and technical assistance activities against requirements under IDEA and Department of the Interior (Interior) regulations, BIE policies and procedures, and federal standards for internal control to evaluate the sufficiency of their efforts in monitoring and supporting BIE schools’ special education programs. We also conferred with Interior’s Office of the Solicitor regarding their position on whether one BIE division’s policy for reviewing special education documentation at schools conformed to Interior’s regulations. Additionally, we interviewed current and former members of BIE’s advisory committee on special education to obtain their views on the extent to which BIE schools provide required services to students with IEPs and challenges, if any, that schools may face in delivering services.
We also interviewed national groups with expertise on Indian education and BIE schools—including the National Congress of American Indians, the National Indian Education Association, and the Tribal Education Departments National Assembly—to obtain their views on special education and related services at BIE schools. Our review of relevant documentation included BIE’s monitoring and technical assistance policies and procedures as well as relevant federal laws and regulations, including requirements under IDEA Part B. This included BIE’s May 2018 policy and procedures on conducting high-risk monitoring of the implementation of federal education programs at BIE schools. In addition, we reviewed the Department of Education’s determination letters and October 2019 monitoring report to BIE assessing the agency’s compliance with IDEA requirements. We conducted this performance audit from July 2018 to May 2020 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Comments from the Department of the Interior

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

Melissa Emrey-Arras, (617) 788-0534 or emreyarrasm@gao.gov

Staff Acknowledgments

In addition to the contact named above, Elizabeth Sirois (Assistant Director), Edward Bodine (Analyst-in-Charge), Liam O’Laughlin, and Angeline Bickner made key contributions to this report. James Ashley, Susan Aschoff, Serena Lo, John Yee, James Rebbe, Sam Portnow, Aaron Karty, James Bennett, Avani Locke, and Olivia Lopez also contributed to this report.
Related GAO Products

Tribal Programs: Resource Constraints and Management Weaknesses Can Limit Federal Program Delivery to Tribes. GAO-20-270T. Washington, D.C.: November 19, 2019.

High Risk: Progress Made but Continued Attention Needed to Address Management Weaknesses at Federal Agencies Serving Indian Tribes. GAO-19-445T. Washington, D.C.: March 12, 2019.

High Risk: Agencies Need to Continue Efforts to Address Management Weaknesses of Federal Programs Serving Indian Tribes. GAO-18-616T. Washington, D.C.: June 13, 2018.

Indian Affairs: Further Actions Needed to Improve Oversight and Accountability for School Safety Inspections. GAO-17-421. Washington, D.C.: May 24, 2017.

Indian Affairs: Actions Needed to Better Manage Indian School Construction Projects. GAO-17-447. Washington, D.C.: May 24, 2017.

Tribal Transportation: Better Data Could Improve Road Management and Inform Indian Student Attendance Strategies. GAO-17-423. Washington, D.C.: May 22, 2017.

Indian Affairs: Key Actions Needed to Ensure Safety and Health at Indian School Facilities. GAO-16-313. Washington, D.C.: March 10, 2016.

Indian Affairs: Preliminary Results Show Continued Challenges to the Oversight and Support of Education Facilities. GAO-15-389T. Washington, D.C.: February 27, 2015.

Indian Affairs: Bureau of Indian Education Needs to Improve Oversight of School Spending. GAO-15-121. Washington, D.C.: November 13, 2014.

Indian Affairs: Better Management and Accountability Needed to Improve Indian Education. GAO-13-774. Washington, D.C.: September 24, 2013.
Why GAO Did This Study

BIE funds 185 elementary and secondary schools that serve more than 6,000 Native American students with special needs. The Department of Education has raised concerns about BIE's implementation of IDEA in recent years, including its long-standing noncompliance with IDEA requirements. GAO was asked to examine the provision of special education and related services to eligible BIE students. This report examines the extent to which (1) BIE students with disabilities are provided the special education and related services required by their IEPs, and (2) BIE oversees and supports the provision of special education at its schools. GAO analyzed data on special education and related services for a generalizable sample of 138 BIE students with IEPs at 30 schools over a 4-month period in school year 2017-2018 (the most recent complete school year at the time of our analysis); compared BIE special education practices with its policies and Interior and IDEA requirements; visited schools in two states selected for their large numbers of BIE schools; and interviewed school and agency officials.

What GAO Found

Schools funded by the Bureau of Indian Education (BIE) are required under the Individuals with Disabilities Education Act (IDEA) to provide services for eligible students with disabilities, such as learning disabilities or health impairments. Services for these students are listed in individualized education programs (IEP). GAO found that BIE schools did not provide or did not account for 38 percent of special education and related service time for students with disabilities, according to analysis of school documentation for a 4-month review period (see fig.). This included one school that did not provide any services to three students. While BIE has plans to improve documentation of such services, it has not established whether and when missed services should be made up, which has led to inconsistent practices among schools.
Establishing consistent requirements for making up missed services could help students receive the special education and related services they need to make academic progress. BIE's limited monitoring and technical assistance have hindered its oversight and support for special education at schools. For example:

A division of BIE responsible for overseeing about half of all BIE schools decided to verify the provision of special education services at only one-third of its schools per year, although the Department of the Interior (Interior) requires BIE to annually verify the provision of services at all schools.

BIE provided required monitoring reports late and did not provide required technical assistance plans to 14 schools that BIE determined were at high risk of not complying with IDEA and other federal education programs in school year 2018-2019.

BIE officials said that the field office staff responsible for working with schools on special education often do not have the requisite expertise, which has hampered their oversight and support to schools.

Without verifying special education services at every school annually, following high-risk monitoring and technical assistance requirements, and providing training to its staff, BIE cannot ensure that the schools it funds are meeting their responsibilities under IDEA. Strengthening such oversight and support activities can help BIE as it works to address the unique needs of students with disabilities to help prepare them for future education, employment, and independent living.

What GAO Recommends

GAO is making seven recommendations, including that BIE establish consistent requirements for schools on making up missed services, annually verify special education services at all schools, comply with high-risk monitoring and technical assistance requirements, and ensure that BIE staff receive needed training. Interior agreed with the recommendations.
GAO-20-227
Background

CCDF Laws and Regulations

The Child Care and Development Block Grant (CCDBG) Act, as amended, is the main federal law governing state child-care programs for low-income working families. The act was reauthorized in 2014, and the reauthorization included a focus on improving the overall quality of child-care services and development of participating children. In September 2016, OCC published new rules (CCDF regulations) to provide clarity to states on how to implement this law and administer the program in a way that best meets the needs of children, child-care providers, and families. The CCDBG Act and CCDF regulations allow states flexibility in developing CCDF programs and policies that best suit the needs of children and parents within that state. According to OCC, these new rules also align child-care requirements with new Head Start regulations, including certain requirements for background checks, annual monitoring, and prelicensure inspections for some CCDF providers. OCC also added regulatory requirements for state lead agencies to describe in their State Plans effective internal controls that are in place to ensure integrity and accountability, including

1. processes to ensure sound fiscal management,
2. processes to identify areas of risk,
3. processes to train child-care providers and staff of the lead agency and other agencies engaged in the administration of the CCDF about program requirements and integrity, and
4. regular evaluation of internal control activities.

Lead agencies are also required to describe in their State Plans the processes that are in place to identify fraud or other program violations, and to investigate and recover fraudulent payments and to impose sanctions in response to fraud.

CCDF Program Administration

OCC is a program office within ACF that works with the states to administer the CCDF program. OCC and states each have responsibility for overseeing and protecting the integrity of the CCDF program.
Each state must develop, and submit to OCC for approval, a State Plan that identifies the purposes for which CCDF funds will be spent for a 3-year grant period and designates a lead agency responsible for administering child-care programs. To administer CCDF funds, federal law and regulations require that states report their CCDF expenditures and data on the number of children served by CCDF subsidies. The current reporting structure as described by OCC and ACF officials is shown in figure 1.

State Plan Review and Approval Process

To request funding from the CCDF, states submit a State Plan for administering their CCDF programs to OCC. OCC provides states with a Plan Preprint, which serves as a template and includes instructions and guidance on developing the State Plans and providing information required by law and regulations. Further, OCC has used the Plan Preprint to request additional information from the states. The Plan Preprint developed for fiscal years 2019–2021 State Plans consists of eight sections and is the first to include the new CCDF regulatory requirements, added in September 2016 as required by the 2014 reauthorization. One of the new requirements is for state lead agencies to describe in their State Plans effective internal controls that are in place to ensure integrity and accountability. In addition, OCC modified the Plan Preprint for fiscal years 2019–2021 State Plans to add the instruction requesting states to report information about the results of their program-integrity and fraud-fighting activities, in addition to providing descriptions of the activities themselves. The Secretary of Health and Human Services, through OCC, has the responsibility to approve State Plans that satisfy the requirements, and review and monitor state compliance with the approved State Plan.
According to OCC officials, the Program Operations Division within OCC, in partnership with the OCC regional program unit staff (regional offices), reviews the State Plans and approves those that they determine have satisfied the requirements of the CCDBG Act and CCDF regulations.

CCDF Improper-Payment Reporting

The CCDF has been designated as a high-priority program, as defined by OMB, under the Improper Payments Elimination and Recovery Improvement Act of 2012 (IPERIA), meaning that it is a program susceptible to significant improper payments. Federal statutes require federal agencies to evaluate programs for improper-payment risk and, for programs susceptible to significant improper payments, to report on actions taken to reduce improper payments. CCDF regulations implement these requirements by requiring states to calculate and report estimates of their improper payments, including proposed actions to address sources of error. These reports are developed by the states on a 3-year rotational cycle, and HHS reports the aggregate results in its Agency Financial Report. The CCDF gross improper payment estimate for fiscal year 2019 is approximately $325 million, and the estimated improper payment rate is 4.53 percent. OCC oversees states’ compliance with the prescribed procedures for estimating improper-payment error rates by approving the preliminary documents, approving any changes to the case samples, conducting the Joint Case Reviews, and reviewing and approving the final State Improper Payments Report and CAP submissions. If a state reports an error rate at or above 10 percent, it must also submit a CAP, which includes detailed descriptions of specific activities planned to reach a targeted reduction in errors. It must then submit an update on its progress and a new CAP the following year if it has not completed the proposed corrective actions or if the error rate is still at or above 10 percent.
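The arithmetic behind these figures and the 10 percent CAP threshold can be sketched as follows. This is a simplified illustration, not OCC's methodology: the function names are ours, and the implied national total is derived from the reported rate rather than taken from an HHS source.

```python
# Illustrative sketch of improper-payment rate arithmetic and the CAP
# threshold described above (hypothetical helper functions, not OCC's).

CAP_THRESHOLD = 0.10  # an error rate at or above 10 percent triggers a CAP

def error_rate(improper_payments: float, total_payments: float) -> float:
    """Improper-payment rate = estimated improper payments / total payments."""
    return improper_payments / total_payments

def needs_cap(rate: float) -> bool:
    """A state must submit a corrective action plan at or above the threshold."""
    return rate >= CAP_THRESHOLD

# National example using the fiscal year 2019 figures cited above: roughly
# $325 million improper out of an implied total (back-calculated from the
# reported 4.53 percent rate, purely for illustration).
implied_total = 325e6 / 0.0453
rate = error_rate(325e6, implied_total)   # 4.53 percent, below the CAP threshold
```

A state-level rate of, say, 12 percent would return True from needs_cap and require a CAP, plus a progress update and a new CAP the following year if corrective actions remain incomplete or the rate stays at or above 10 percent.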
The improper-payment reporting process is illustrated in figure 2.

OCC Monitoring System

In fiscal year 2019, OCC launched a formal Monitoring System to review a selection of states annually over the course of the 3-year State Plan period. According to OCC officials, the three main purposes of the Monitoring System are to: (1) ensure compliance with the CCDBG Act, CCDF regulations, and the approved State Plans; (2) identify state technical-assistance needs; and (3) identify promising practices to inform continuous quality improvement. The Monitoring System focuses on 11 topic areas, which include program integrity and accountability. In addition, other topic areas include disaster preparedness, consumer education, and health and safety requirements. OCC officials told us that monitoring is completed on a rolling basis, and that they plan to monitor one-third of states each fiscal year, from fiscal years 2019 to 2021. According to OCC officials, they scheduled the monitoring to ensure that a state will not be submitting an improper-payment report in the same year that it participates in the monitoring. Figure 3 provides additional details regarding the OCC Monitoring System process, which includes an on-site visit to monitored states.

Fraud Risk Management

Fraud and “fraud risk” are distinct concepts. Fraud risk exists when individuals have an opportunity to engage in fraudulent activity, have an incentive or are under pressure to commit fraud, or are able to rationalize committing fraud. Although the occurrence of fraud indicates there is a fraud risk, a fraud risk can exist even if fraud has not yet been identified or occurred. For example, suspicious billing patterns or complexities in program design may indicate a risk of fraud even though fraud has not been identified or occurred. When fraud risks can be identified and mitigated, fraud may be less likely to occur.
According to federal standards and guidance, executive-branch agency managers are responsible for managing fraud risks and implementing practices for combating those risks. Specifically, federal internal control standards state that management should consider the potential for fraud when identifying, analyzing, and responding to risks. As part of these standards, management assesses risks the entity faces from both external and internal sources. In addition, in July 2015, GAO issued the Fraud Risk Framework, which provides a comprehensive set of key components and leading practices that serve as a guide for agency managers to use when developing efforts to combat fraud in a strategic, risk-based way. The Fraud Risk Framework describes leading practices in four components, as shown in figure 4. The Fraud Reduction and Data Analytics Act of 2015, enacted in June 2016, required OMB to establish guidelines for federal agencies to create controls to identify and assess fraud risks, and design and implement antifraud control activities. The act further required OMB to incorporate the leading practices from the Fraud Risk Framework in the guidelines. In July 2016, OMB published guidance about enterprise risk management and internal controls in federal executive departments and agencies. Among other things, this guidance affirms that managers should adhere to the leading practices identified in the Fraud Risk Framework.

OCC Provides Oversight by Approving State Plans but Has Not Established Policies for Reviewing State Plans and Has Not Defined Its Informational Needs

As part of its oversight of states’ CCDF programs, OCC reviewed and approved State Plans for the current grant period (fiscal years 2019–2021). However, OCC has not established written policies to guide staff review and approval of these State Plans, a process that occurs every 3 years.
OCC’s lack of established policies limits its ability to ensure that staff follow appropriate protocols for consistency when reviewing and approving State Plans and to retain organizational knowledge in the event of staff turnover, which OCC noted as occurring during each review period. Further, OCC requested that states report information about the results of states’ program-integrity activities. However, most of the State Plans that it approved did not provide the results of states’ program-integrity activities as requested. OCC officials told us that they plan to continue to request that states report on the results of their program-integrity activities, but OCC has not identified what it considers to be “results” of program-integrity activities. Without taking additional steps to define its informational needs and encourage states to report the results of their program-integrity activities, OCC will not have this information to help determine whether states are effectively ensuring the integrity of the CCDF program.

OCC Reviewed and Approved State Plans

To provide oversight of states’ CCDF program-integrity activities, OCC reviewed and approved State Plans for the current grant period, covering fiscal years 2019–2021. To do so, OCC officials described to us a process that began with a high-level review of the draft State Plans submitted through an electronic system. After an initial review for completeness, OCC staff focused on the contents of the State Plans, including states’ responsiveness to each requirement. For example, one requirement is to describe the processes that the state will use to identify risk in its CCDF program. OCC officials also stated that they consider clarity, consistency, and compliance when assessing State Plans. OCC officials also explained that they reviewed the responses to determine whether they were sufficiently detailed, and sought clarification from the states when necessary.
OCC officials stated that, prior to the final approval of the State Plans, staff completed a validation form that consists of a table listing the State Plan subsections with checkboxes next to each subsection. Figure 5 outlines the timeline for review and approval of State Plans.

OCC Does Not Have Finalized Written Policies to Implement the Review Process

OCC has developed a draft procedure for the State Plan review and approval process, but it had neither finalized written policies before beginning its review of the fiscal years 2019–2021 State Plans nor finalized written policies for future review periods that occur every 3 years. Instead, OCC officials told us that for the review and approval process completed in 2018, they provided their staff a variety of training materials and draft documents that encouraged discussion among those involved. These documents contained information and guidance on the process, such as explaining the overall operational processes for reviewing and approving State Plans and general roles and responsibilities. However, none of the documents were finalized as OCC’s written policies for staff to follow when implementing the fiscal years 2019–2021 State Plan review and approval process, or for subsequent review periods. In response to our request for finalized policies pertaining to how OCC reviewed and approved State Plans, OCC provided documents that have substantial limitations for explaining to OCC staff how they should review and approve State Plans. For example, OCC provided what it characterized as a three-page summary protocol, which, in part, contained a historical record of what occurred during the recently completed review period rather than guidance that would help OCC achieve its State Plan review objectives on a continuous basis. Specifically, the protocol describes the regular internal meetings and interactions that OCC staff had from September 2018 to December 2018.
As such, the protocol does not describe the process that OCC staff should follow, or the meetings that should occur, when reviewing and approving State Plans in future years (i.e., on a continuous basis). OCC also developed in August 2018 a more-detailed draft procedure for reviewing and approving State Plans. The draft procedure contains information on the communication process between the central and regional offices, recognizes that there may be variation in internal processes among regional offices and from one review period to the next, and includes guidance on steps for resolving questions about State Plans, among other guidance. Unlike the three-page summary protocol, the draft procedure explicitly states its applicability to future review periods as well as the current State Plan review period, and therefore would have provided guidance for staff on a continuous basis had a finalized version been shared with staff and established as OCC’s written policies. However, because of the volume of work and differences in caseloads among regional offices, OCC officials stated that they did not share a finalized procedure with staff and that staff were neither expected nor required to use the draft procedure when conducting their review of State Plans for the fiscal years 2019–2021 review period. As such, this draft procedure did not represent the formal policies for staff to follow in performing their roles. In explaining why it relies on the three-page summary protocol and draft procedure rather than finalized written policies to guide its State Plan review and approval process, OCC officials stated that OCC needs flexibility in its policies during the review period. Specifically, there are staffing changes in both the central and regional offices for each State Plan review period, and having flexibility within the framework provided by the three-page summary protocol allows them to accommodate those changes. 
OCC officials noted that some of the processes are unique to each of the 10 regional offices because of differences in their structure, staffing, and caseloads. Likewise, OCC officials stated that the regional offices need flexibility to continuously adjust processes and timelines so that they can accommodate varying responsiveness from states, and evaluate the State Plans without undermining the flexibility afforded to states through the block grant. However, it is possible for OCC to establish written policies to guide processes that are common from one review period to the next, and across all regions, while still maintaining the necessary flexibility to accommodate staffing changes and regional differences, as it had already begun to do by developing its August 2018 draft procedure. In this regard, Standards for Internal Control in the Federal Government states that management should implement control activities through policies. In doing so, management communicates the policies to personnel so that personnel can implement the control activities for their assigned responsibilities. Further, Standards for Internal Control in the Federal Government includes minimum documentation requirements, such as that management develop and maintain documentation of its internal control system. An internal control system is a continuous built-in component of operations that provides reasonable assurance that an entity’s objectives will be achieved. Internal control is not one event, but a series of actions that occur throughout an entity’s operations. Further, internal control is recognized as an integral part of the operational processes management uses to guide its operations, and internal control is built into the entity as a part of the organizational structure to help managers achieve the entity’s objectives on an ongoing basis. 
As such, documentation of the internal control system should reflect a continuous, built-in component of operations rather than a historical record of a past event. Documentation also provides a means to retain organizational knowledge and mitigate the risk of having that knowledge limited to a few personnel. OCC’s lack of established written policies limits its ability to ensure that staff follow appropriate protocols on a continuous basis when implementing the State Plan review and approval process, and limits its ability to provide a means to retain organizational knowledge and mitigate the risk of having that knowledge limited to a few personnel. Without finalizing written policies, an effort that could include leveraging its previously developed August 2018 draft procedure, OCC risks losing that knowledge each time there are staffing changes among central and regional offices.

OCC Has Not Defined Information Needed to Analyze States’ Program-Integrity Results

In response to a 2016 HHS OIG report, OCC has attempted to collect information about the results of states’ program-integrity and fraud-fighting activities by adding a new instruction to the fiscal years 2019–2021 Plan Preprint requesting states to report such information in their State Plans. Specifically, the HHS OIG recommended that collecting data on program-integrity and fraud-fighting results would be an important step in monitoring states’ efforts to safeguard the CCDF program. Additionally, OCC officials told us that obtaining information on the results of program-integrity activities is important for understanding national trends and helping to inform OCC’s technical assistance to states and ensure states’ accountability over their program-integrity activities. However, our review of 51 approved State Plans found that 43 State Plans (about 84 percent) did not report the results of program-integrity activities as requested (see fig. 6).
The other eight states (about 16 percent) reported the results of program-integrity activities. State Plans must meet the requirements set forth in the law and the CCDF regulations to be approved. OCC officials told us that the State Plans were approved without the information on the results of program-integrity activities because, although there are instructions in the Plan Preprint for states to report this information, the CCDF regulations do not require it. Further, OCC officials told us that when OCC submitted the Plan Preprint to OMB for approval under the Paperwork Reduction Act, OCC had indicated that the program-integrity results would be collected on an informational basis, and states would not be required to provide this information. According to an OCC official, only portions of the Plan Preprint with instructions for states to report on the results of program-integrity activities were requested on an informational basis, and all other information in that section was required for approval of the State Plans. OCC officials also told us that OCC will continue to request that states report on the results of their program-integrity activities in the State Plans, but OCC has not defined what information it needs regarding the “results” of states’ program-integrity activities and has not communicated the need to states or its staff. OCC officials told us that they will ensure that states submit this information by providing guidance to states on the purpose of collecting this information. However, OCC was not able to provide us with a definition or examples of what it considers to be “results” of program-integrity activities that would be helpful for ensuring states’ accountability over their program-integrity activities. In addition, OCC officials said that OCC did not communicate to states that the information about the results of program-integrity activities was being requested on an informational basis only.
According to OCC officials, OCC did not specifically communicate its intention to states because it wanted states to provide a response, if possible. Similarly, OCC had not developed any specific internal criteria for its staff to use when reviewing State Plans to determine whether certain responses were sufficient for their informational needs, such as to better understand national trends. OCC officials also stated that there was no internal written guidance explaining to OCC staff that such information was not required for State Plan approval. Rather, this standard was communicated to staff during weekly meetings. Standards for Internal Control in the Federal Government states that management should use quality information to achieve the entity’s objectives. In doing so, management identifies the information requirements needed and defines the information requirements at the relevant level and requisite specificity for appropriate personnel. Further, Standards for Internal Control in the Federal Government states that management should internally and externally communicate the necessary quality information to achieve the entity’s objectives. In this context, after defining its informational needs regarding the results of program-integrity activities, OCC’s internal and external communication could include communication to the states, which are requested to include this information in the State Plans, and to its staff who will be responsible for analyzing this information. Until OCC defines what information it needs regarding program-integrity activity results, it will be limited in its ability to obtain quality information. By not communicating informational needs to states and staff, OCC will continue to lack quality information about the results of states’ program-integrity efforts and will not be able to use that information to analyze national trends and help ensure states’ accountability over their program-integrity activities, as described. 
OCC Provides Oversight of States’ Improper Payment Risks but Lacks Documented Guidance for Assessing States’ Corrective Actions

Since 2013, seven states with improper-payment rates at 10 percent or above have submitted 14 corrective action plans (CAPs) to OCC for review. However, OCC does not have any documented criteria to guide the review of the CAPs submitted by states to ensure the proposed actions are aimed at root causes of improper payments and are effectively implemented. OCC also has not documented the procedures it uses to follow up with states subject to CAPs, but said it is planning to.

OCC Lacks Guidance for Ensuring Corrective Actions Are Aimed at Root Causes and Effectively Implemented

Federal improper-payment statutes require federal agencies to review programs susceptible to significant improper-payment risks and develop actions to reduce improper payments. For example, the Improper Payments Elimination and Recovery Act of 2010 (IPERA) specifically requires agencies administering programs that are susceptible to significant improper payments, such as the CCDF, to report on actions the agency is taking to reduce improper payments. Because the CCDF is administered by states, this requirement is implemented in CCDF regulations by requiring states reporting improper-payment error rates at or above 10 percent to develop and implement CAPs. The OMB guidance implementing IPERA states that agencies should ensure that each corrective action is specifically aimed at a root cause of improper payments and that the actions are effectively implemented to prevent and reduce improper payments. According to this guidance, a root cause is something that would directly lead to an improper payment and, if corrected, would prevent the improper payment.
In the proposed rulemaking in which OCC introduced the CAPs, OCC stated that the CAPs are intended to be comprehensive and detailed, so as to improve upon the descriptions of corrective actions already reported on a 3-year cycle, which sometimes lack detail or specificity. OCC officials told us that OCC reviewers use their CAP Review Tool to evaluate the CAPs for approval, which also lays out the protocol for conducting reviews. However, the CAP Review Tool does not require reviewers to document whether the corrective actions proposed by states are aimed at root causes of improper payments, or effectively implemented. Further, the written review procedure that accompanies the CAP Review Tool does not contain guidance for reviewers on evaluating whether corrective actions are aimed at root causes and are effectively implemented. OCC officials explained to us that, in their view, states are in the best position to identify the most-feasible approach to corrective actions based on their individual circumstances. We acknowledge that states should have flexibility to identify corrective actions based on their individual circumstances. However, according to OMB guidance, it is federal agencies that are to ensure that corrective actions are aimed at root causes of improper payments and effectively implemented. Further, in the proposed rulemaking in which OCC introduced the CAPs, OCC stated that it intended the CAPs to be used for OCC to hold states accountable as part of its compliance with IPERA. Accordingly, without providing additional guidance to its reviewers, OCC will lack assurance that states’ proposed corrective actions are aimed at root causes and effectively implemented. OCC officials also stated that the majority of the seven states subject to CAPs reduced their error rates over time, specifically to below 10 percent. 
OCC officials explained that this determination is based on the submission of the State Improper Payment Report for the next required reporting cycle or on states’ voluntarily conducting a review of a sample of cases and submitting the results to OCC to demonstrate they had reduced their error rate to below 10 percent. We did not independently corroborate OCC’s determination because assessing the reliability of the self-attested internal error-rate reviews conducted by certain states and reviewing this information was outside the scope of our work. However, as part of our review of the 14 CAPs that have been submitted to OCC in response to OCC’s improper-payment reviews since 2013, we found that one state was required to submit CAPs for 3 consecutive years and consistently proposed the same error-rate reduction targets, with different dates. This observation underscores the need to ensure the corrective actions a state proposes are specifically aimed at root causes of improper payments and are effectively implemented. OCC does not have guidance in place for its reviewers to determine whether the ongoing corrective actions a state proposes to reduce improper payments will be specifically aimed at root causes of improper payments and effectively implemented. This could leave the CCDF program at continued risk of improper payments.

OCC Plans to Document Its CAP Follow-up Process

OCC does not have written policies for its CAP follow-up process or documentation that follow-up has been completed for past CAPs. OCC officials told us that they plan to develop such written policies, but officials did not specify a timeline for completion. OCC officials described their process used to monitor states while they are subject to a CAP, which includes additional contact when the same state has been subject to CAPs for consecutive years. This CAP follow-up process is illustrated in figure 7.
According to OCC officials, OCC intends to develop written policies for the CAP follow-up process but did not provide a time frame for completion. This will include, at a minimum, a written protocol for the activities illustrated above, which will be included in the next revision of the instructions given to states for improper-payment reporting. According to OCC officials, each region currently has its own process for documenting discussions with CAP states. Having established written policies for the CAP follow-up process will help ensure that OCC’s oversight and monitoring of CAPs is carried out consistently.

OCC Has Taken Some Steps to Monitor States’ Program-Integrity Activities but Does Not Evaluate Their Effectiveness

OCC Has Initiated a Monitoring System, but the System Does Not Assess Effectiveness of States’ Program-Integrity Control Activities

OCC officials told us that their Monitoring System, initiated in fiscal year 2019, plays a part in OCC’s role to ensure that states’ program-integrity activities are effective. According to OCC officials, OCC uses two tools as part of its Monitoring System—a Compliance Demonstration Packet and Data Collection Tool. States complete the Compliance Demonstration Packet to outline how they propose to demonstrate compliance with regulatory requirements and implementation of the approved State Plans throughout the Monitoring System’s phases. For example, to show effective internal controls are in place to ensure integrity and accountability, states may provide OCC with state or local policies and manuals (previsit phase), and may submit to interviews or provide system demonstrations (on-site visit phase). OCC staff use the Data Collection Tool to record comments about the evidence observed, and to note whether additional follow-up is needed. Both of these tools contain language indicating that the effectiveness of states’ program-integrity and fraud-fighting activities is evaluated by OCC staff.
For purposes of the Monitoring System, OCC officials said that states have broad flexibility to propose, in the Compliance Demonstration Packet, what documents and evidence to provide. In addition, states have the flexibility to propose how the state will demonstrate compliance with regulatory requirements. This includes the requirement to describe in its State Plan effective program-integrity control activities, which includes fraud-fighting activities. OCC officials further told us that OCC does not collect the same set of information or evidence across the country. Rather, OCC collects state-specific information based on what each individual state proposes. For example, the Compliance Demonstration Packet allows states to propose an approach for demonstrating their compliance with the requirement to describe in their State Plans effective internal controls that are in place to ensure integrity and accountability. OCC officials said the primary purpose of the Monitoring System is to ensure that states are in compliance with CCDF regulations and implementing the State Plans as approved, rather than to make an assessment of the efficacy of the State Plans. When we asked OCC officials how they determine whether a state has provided appropriate and adequate documentation for the purposes of the Monitoring System, these officials told us that staff develop specific questions for each state and look for evidence showing that states are implementing the State Plans as approved. For example, OCC officials might look for evidence of a state’s implementation of certain program-integrity activities described in its approved State Plan to verify that the activities described are in place. OCC officials also stated that staff decide what is acceptable through consensus and attempt to build consistency through internal discussions regarding the appropriateness of the material that states provide. 
However, there are no specific criteria to guide OCC staff’s assessment of the effectiveness of states’ program-integrity activities during these discussions. For example, there are no specific criteria to help OCC staff assess whether states’ implemented control activities are effective at identifying areas of risk. OCC officials stated that the CCDF regulations and the approved State Plans are the most-detailed criteria that they use to assess data collected for the Monitoring System. However, neither the CCDF regulations nor the State Plans include specific criteria for assessing whether the control activities are effective. OCC is responsible for monitoring states’ compliance with the CCDF regulations, and these regulations explicitly require that states describe in their State Plans “effective internal controls that are in place to ensure integrity and accountability.” According to Standards for Internal Control in the Federal Government, an effective internal control system has a monitoring component that is effectively designed, implemented, and operating. Additionally, a leading practice of the Fraud Risk Framework is to examine the suitability of existing fraud controls. Managers who effectively implement an antifraud strategy monitor and evaluate the effectiveness of preventive activities in this strategy and take steps to help ensure external parties with responsibility over fraud control activities effectively implement those activities. Without developing and using criteria to assess whether states’ program-integrity control activities are effective, OCC cannot ensure that states’ internal controls for program integrity are effective. Likewise, without examining the suitability of, and monitoring the effectiveness of, the states’ fraud control activities, OCC will be challenged in effectively implementing an antifraud strategy to minimize the risk of fraud in the CCDF program. 
OCC Has Developed Technical Assistance to Improve Program Integrity and Has Further Opportunities to Use These Tools to Monitor States’ Program-Integrity Activities

OCC developed the Grantee Internal Controls Self-Assessment Instrument (Self-Assessment Instrument) in 2010 and makes the technical-assistance tool available to the states through its website. In response to a 2016 HHS OIG report, ACF officials said that OCC would use the Self-Assessment Instrument to address the report’s recommendations to request that states examine the effectiveness of their program-integrity and fraud-fighting activities, and examine with states the benefits of expanding such activities. The Self-Assessment Instrument contains five sections: (1) Eligibility Determination and Review; (2) Improper Payment Case Review Process; (3) Fraud and Overpayment Prevention, Detection, and Recovery; (4) Federal Reporting; and (5) Audits and Monitoring. According to OCC officials, as of August 2019, 19 states have completed the Self-Assessment Instrument since its inception. OCC officials stated that use of the Self-Assessment Instrument is based entirely on states’ self-identified risks, and states are free to choose which, if any, of the sections to complete. OCC officials have noted benefits as a result of states completing the Self-Assessment Instrument. Specifically, OCC officials said that states have improved their implementation processes and policies, and improper-payment error rates have decreased. In addition to making the tool available to states, OCC officials told us that OCC also provides technical assistance in completing the Self-Assessment Instrument, which may include an on-site facilitated discussion. The facilitated discussion may cover areas including control activities to identify and prevent fraud, and strategies to investigate and collect improper payments.
Following the on-site facilitated discussion, an OCC contractor compiles a report summarizing state-identified issues to address in states’ policies and procedures, according to one OCC official. However, OCC officials told us that states are not required to act on this report. In addition to the Self-Assessment Instrument, OCC has recently coordinated on the development of the Fraud Toolkit, which is a series of electronic spreadsheets that states can use to respond to questions about their fraud risk management activities—such as staff training, procedures for addressing suspected fraud, and program administration. The tools assign risk levels to these areas based on the state’s responses, and will also include recommended next steps for each of those areas and generate a report to summarize overall risk. For example, data from these tools would indicate whether states’ CCDF program staff are trained to identify forms, such as wage stubs or employer letters, that may have been forged or altered. The data would also indicate whether the state has a fraud referral process in place to expedite investigations. OCC makes the Fraud Toolkit available for states to use upon request. However, other than making the tool available, OCC officials said that OCC does not usually have any further involvement in states’ use of the tool. OCC officials told us that they do not plan to use either the Self-Assessment Instrument or the Fraud Toolkit to collect data about states’ CCDF programs because both the Self-Assessment Instrument and the Fraud Toolkit are intended primarily as technical-assistance tools rather than monitoring tools or data-collection instruments. OCC officials also told us that, to formally collect information from states’ use of such tools, they would need to seek approval from OMB. OCC officials stated that OCC’s goal is to develop technical assistance that best meets the needs of the states, and not to impose additional reporting requirements on the states.
Officials also noted a concern that states could cease to participate in or accept technical assistance if such assistance is seen as increasing reporting requirements. However, according to OCC officials, OCC has not conducted a cost-benefit analysis of collecting such information. Leading practices in the Fraud Risk Framework are to monitor and evaluate the effectiveness of preventive activities; collect and analyze data; and adapt activities to improve fraud risk management. Further, although external parties—in this case, the state lead agencies—may be responsible for specific fraud control activities, Standards for Internal Control in the Federal Government states that management should establish and operate monitoring activities to monitor the internal control system and evaluate the results. As part of these standards, management retains responsibility for monitoring the effectiveness of internal control over the assigned processes performed by external parties. Management is responsible for meeting internal control objectives, and may decide how the entity evaluates the costs versus benefits of various approaches to implementing an effective internal control system. However, cost alone is not an acceptable reason to avoid implementing internal controls, and cost-benefit considerations support management’s ability to effectively design, implement, and operate an internal control system that balances the allocation of resources and other factors relevant to achieving the entity’s objectives. By not evaluating the feasibility of collecting information from the Self-Assessment Instrument or the Fraud Toolkit—such as evaluating the feasibility of doing so during its Monitoring System process—OCC may be missing an opportunity to monitor the effectiveness of the internal control system to help states adapt control activities to improve fraud risk management.
OCC’s Program-Integrity and State-Oversight Activities Are Not Informed by a Fraud Risk Assessment

As described above, OCC has developed several program-integrity activities that could help assess and manage fraud risk if they were part of an antifraud strategy. For example, the improper-payment reporting process and Monitoring System are not specific to fraud but may generate information relevant to fraud risks. However, according to OCC officials, ACF has not completed a fraud risk assessment for the CCDF, which would provide a basis for the development of an antifraud strategy that describes the program’s approach for addressing prioritized fraud risks identified, as described in the Fraud Risk Framework. The Assess component of the Fraud Risk Framework calls for federal managers to plan regular fraud risk assessments and to assess risks to determine a fraud risk profile. Furthermore, Standards for Internal Control in the Federal Government states that management should consider the potential for fraud when identifying, analyzing, and responding to risks. Leading practices for planning fraud risk assessments include tailoring the fraud risk assessment to the program and planning to conduct the assessment at regular intervals and when there are changes to the program or operating environment. The leading practices also include identifying the tools, methods, and sources for gathering information about fraud risks and involving relevant stakeholders in the assessment process. The Fraud Risk Framework also identifies leading practices for conducting fraud risk assessments and documenting the program’s fraud risk profile, as illustrated in figure 8. As discussed in the Fraud Risk Framework, the fraud risk profile provides a basis for managers to develop and document an antifraud strategy that describes the program’s approach for addressing prioritized fraud risks identified.
According to ACF, there is currently a process in place at the ACF level that will lead to the development of a Fraud Risk Assessment. Specifically, ACF is in the process of developing a Fraud Risk Assessment template, which will include a program fraud risk profile. The CCDF will be part of the pilot program for this effort. The Fraud Risk Assessment template will consider the Fraud Risk Framework as well as guidance contained in OMB Circular A-123, Management’s Responsibility for Enterprise Risk Management and Internal Control, according to OCC officials. These officials also stated that ACF will leverage its previously developed and implemented risk assessments, including the Program Risk Assessment that was completed for the CCDF between fiscal years 2011 and 2016 as part of the HHS Program Integrity Initiative. However, according to ACF, the development of a Fraud Risk Assessment template is currently on hold due to competing priorities. ACF stated that it expects to resume the process by December 2019, and OCC expects that the draft template will be completed by the end of the first quarter of fiscal year 2020. Because the CCDF is serving as the pilot for the new template, OCC expects that the initial assessment of the program will be complete by the end of the third quarter of fiscal year 2020. Until ACF finalizes its template and conducts a risk assessment for the CCDF, ACF will not be able to develop a fraud risk profile for the CCDF. The fraud risk profile is an essential piece of the antifraud strategy and informs the specific control activities managers design and implement. Although there is currently a process in place for ACF to develop a fraud risk assessment template, until ACF carries out the assessment of the CCDF and develops an associated fraud risk strategy, it will lack assurance that OCC’s program-integrity activities are suitable and targeted at prioritized fraud risks.
Conclusions

Both state lead agencies and OCC play an important role in overseeing and protecting the integrity of the CCDF program. However, OCC has not finalized written policies that describe how staff should implement or document the State Plan review and approval process, which is an important part of OCC’s oversight of the CCDF program. OCC’s lack of established written policies limits its ability to ensure that staff follow appropriate protocols when implementing the State Plan review and approval process, and limits its ability to retain organizational knowledge in the event of staff turnover, which OCC noted as occurring during each review period. In addition, most of the State Plans submitted to OCC for the fiscal years 2019–2021 grant period did not contain information on the results of their states’ program-integrity activities. OCC also has not defined or communicated what it considers to be the “results” of program-integrity activities for states, which are requested to include this information in State Plans, or for its staff who will be responsible for analyzing this information. Until OCC defines its informational needs regarding program-integrity activity results and communicates this information to the states and its own staff, OCC may continue to lack quality information to help ensure states’ accountability over their program-integrity activities. Further, OCC does not have documented criteria to guide the review of the CAPs to ensure the proposed corrective actions are aimed at root causes of improper payments and are effectively implemented to prevent and reduce improper payments. Without criteria for its staff to use in reviewing the CAPs, OCC does not have assurance that the corrective actions a state proposes to reduce improper payments will be specifically aimed at root causes of improper payments and effectively implemented, leaving the CCDF program at continued risk of improper payments.
OCC also does not have written policies for its CAP follow-up process or documentation that follow-up has been completed for past CAPs. In addition, OCC officials told us that they plan to develop a written protocol for this process, but did not specify a timeline for completion. Established written policies for the CAP follow-up process would help ensure that OCC’s oversight and monitoring of CAPs is carried out consistently. OCC’s Monitoring System process does not currently contain criteria to assess the effectiveness of states’ program-integrity control activities, including fraud-fighting activities. Without developing and documenting such criteria, OCC cannot ensure that states’ program-integrity control activities are achieving their intended results. In addition, OCC does not plan to collect any data from its technical-assistance tools that could potentially help it to monitor and evaluate the effectiveness of states’ program-integrity activities. However, OCC has not evaluated the benefits of using these tools to collect information on program-integrity activities against the costs of doing so, such as the cost of seeking OMB approval. By not evaluating the feasibility of collecting information from technical-assistance tools to monitor the effectiveness of states’ program-integrity control activities, OCC may be missing an opportunity to help states adapt control activities to improve their fraud risk management. All of the foregoing program-integrity oversight and monitoring activities could contribute to a strategy for managing fraud risks in the CCDF. However, OCC has not completed a fraud risk assessment or risk profile for the program. 
Although there is currently a process in place for ACF to develop a fraud risk assessment template, until ACF completes this template and carries out the assessment of the CCDF, it will lack a robust antifraud strategy and assurance that OCC’s current program-integrity activities are suitable and targeted at prioritized risk.
Recommendations for Executive Action
We are making the following nine recommendations, eight to the Director of OCC and one to the Assistant Secretary for ACF:
The Director of OCC should establish internal written policies to effectively implement and document the State Plan review and approval process for future review and approval periods. (Recommendation 1)
The Director of OCC should define the informational needs related to the results of program-integrity activities. (Recommendation 2)
The Director of OCC should communicate externally to the states its informational needs related to the results of states’ program-integrity activities. (Recommendation 3)
The Director of OCC should communicate internally to staff its informational needs related to the results of states’ program-integrity activities. (Recommendation 4)
The Director of OCC should develop documented criteria to guide the review of CAPs submitted by states to ensure that proposed corrective actions are aimed at root causes of improper payments and are effectively implemented. (Recommendation 5)
The Director of OCC should complete, in a timely manner, its effort to develop written policies for the CAP follow-up process to ensure that OCC’s oversight and monitoring of CAPs is carried out consistently. (Recommendation 6)
The Director of OCC should develop and document criteria to assess the effectiveness of states’ program-integrity control activities. (Recommendation 7)
The Director of OCC should evaluate the feasibility of collecting information from the Grantee Internal Controls Self-Assessment Instrument (Self-Assessment Instrument) and Fraud Toolkit, such as during its Monitoring System process, to monitor the effectiveness of states’ program-integrity control activities. (Recommendation 8)
The Assistant Secretary for ACF should ensure that ACF conducts a fraud risk assessment to provide a basis for the development and documentation of an antifraud strategy that describes the CCDF program’s approach to addressing the prioritized fraud risks identified. (Recommendation 9)
Agency Comments
We provided a draft of this report to HHS for review and comment. In its comments, reproduced in appendix I, HHS concurred with our recommendations. HHS also provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Health and Human Services and appropriate congressional committees. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-6722 or bagdoyans@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II.
Appendix I: Comments from the Department of Health and Human Services
Appendix II: GAO Contact and Staff Acknowledgments
In addition to the contact named above, Jonathon Oldmixon (Assistant Director), Erica Varner (Analyst in Charge), Yue Pui Chin, and Daniel Dye made key contributions to this report. 
Other contributors include James Ashley, Maria McMullen, George Ogilvie, and Sabrina Streagle.
Why GAO Did This Study The CCDF is administered as a block grant to the states by OCC, an agency within the Department of Health and Human Services (HHS). Recent reports by the HHS Office of the Inspector General show that OCC's monitoring of CCDF state program-integrity efforts remains a challenge. CCDF has also been designated as a program susceptible to significant improper payments, as defined by the Office of Management and Budget. GAO was asked to review CCDF program-integrity efforts. This report discusses, among other things, the extent to which OCC provides oversight of (1) states' CCDF program-integrity activities, including encouraging that all requested information is included within State Plans; and (2) improper-payment risks and relevant corrective actions in states' CCDF programs. GAO analyzed 51 approved CCDF State Plans, including from the District of Columbia, for the fiscal years 2019–2021 grant period. GAO also reviewed OCC policies and procedures and compared them to relevant laws, regulations, and Standards for Internal Control in the Federal Government, and interviewed relevant federal officials. What GAO Found The Child Care and Development Fund (CCDF) provided states more than $8 billion in federal funds in fiscal year 2019. The Office of Child Care (OCC) oversees the integrity of the CCDF, which subsidizes child care for low-income families. A key part of OCC's oversight includes reviewing and approving State Plans. OCC requested but did not require states to describe in their State Plans the results of their program-integrity activities, which describe the processes that states use to identify fraud risk. Further, OCC has not defined or communicated what information it considers to be the “results” of program-integrity activities to the states and its own staff. 
Without defining and communicating its informational needs, OCC may continue to lack quality information that could help ensure states' accountability over their program-integrity activities. OCC oversees states' improper payment risks through a process that includes a requirement for states to submit corrective action plans (CAP) when they estimate their annual payment error rates are at or above 10 percent. Since 2013, seven states have submitted 14 CAPs. These CAPs describe states' proposed actions for reducing improper payments. However, OCC does not have documented criteria to guide its review and approval of the CAPs to ensure the proposed corrective actions are aimed at root causes of improper payments and are effectively implemented. Without developing this guidance, OCC does not have assurance that proposed corrective actions are specifically aimed at root causes of improper payments, leaving the CCDF program at continued risk of improper payments. What GAO Recommends GAO is making nine recommendations, including that OCC define and communicate its informational needs on the results of states' program-integrity activities, and that OCC develop criteria to guide the review of CAPs to ensure that proposed corrective actions are aimed at root causes of improper payments and are effectively implemented. HHS concurred with our recommendations and provided technical comments, which GAO incorporated as appropriate.
Background The Freedom of Information Act establishes a legal right of access to government information on the basis of the principles of openness and accountability in government. Before FOIA’s enactment in 1966, an individual seeking access to federal records faced the burden of establishing a “need to know” before being granted the right to examine a federal record. FOIA established a “right to know” standard, under which an organization or person could receive access to information held by a federal agency without demonstrating a need or reason. The “right to know” standard shifted the burden of proof from the individual to a government agency and required the agency to provide proper justification when denying a request for access to a record. Any person, defined broadly to include attorneys filing on behalf of an individual, corporations, or organizations, can file a FOIA request. For example, an attorney can request labor-related workers’ compensation files on behalf of his or her client, and a commercial requester, such as a data broker who files a request on behalf of another person, may request a copy of a government contract. In response, an agency is required to provide the relevant record(s) in any readily producible form or format specified by the requester, unless the record falls within a permitted exemption that provides limitations on the disclosure of information. FOIA Amendments and Guidance Call for Improvements in How Agencies Process Requests Various amendments have been enacted and guidance issued to help improve agencies’ processing of FOIA requests. For example: The Electronic Freedom of Information Act Amendments of 1996 (1996 FOIA amendment) strengthened the requirement that federal agencies respond to a request in a timely manner and reduce their backlogged requests. Executive Order 13392, issued by the President in 2005, directed each agency to designate a senior official as its chief FOIA officer. 
This official was to be responsible for ensuring agency-wide compliance with the act. The chief FOIA officer was directed to review and report on the agency’s performance in chief FOIA officer reports. The OPEN Government Act, which was enacted in 2007 (2007 FOIA amendment), made the 2005 executive order’s requirement for agencies to have a chief FOIA officer a statutory requirement. It also required agencies to include additional statistics, such as more details on processing times, in their annual FOIA reports. The FOIA Improvement Act of 2016 (2016 FOIA amendment) addressed procedural issues, including requiring that agencies (1) make records available in an electronic format if they have been requested three or more times; (2) notify requesters that they have not less than 90 days to file an administrative appeal; and (3) provide dispute resolution services at various times throughout the FOIA process. Further, the act required OMB, in consultation with the Department of Justice, to create a consolidated online FOIA request portal that allows the public to submit a request to any agency through a single website. FOIA Request Process The 1996 FOIA amendment required agencies, including DHS, to generally respond to a FOIA request within 20 working days. Once received, the request is to be processed through multiple phases, which include assigning a tracking number, searching for responsive records, and releasing the records to the requester. In responding to requests, FOIA authorizes agencies to use nine exemptions to withhold portions of records, or the entire record. These nine exemptions can be applied by agencies to withhold various types of information, such as information concerning foreign relations, trade secrets, and matters of personal privacy. FOIA allows a requester to challenge an agency’s final decision on a request through an administrative appeal or a lawsuit. Agencies generally have 20 working days to respond to an administrative appeal. 
DHS Covers Many Areas of Government Information Created in 2003, DHS assumed control of about 209,000 civilian and military positions from 22 agencies and offices that specialize in one or more aspects of homeland security. By the nature of its mission and operations, the department creates and has responsibility for vast and varied amounts of information covering, for example, immigration, border crossings, law enforcement, natural disasters, maritime accidents, and agency management. According to its 2018 Chief FOIA Officer Report, DHS’s organizational structure consists of 24 offices, directorates, and components. FOIA requests are split between the department’s Privacy Office, which acts as its central FOIA office, and FOIA offices in the department’s component agencies. Three of the major operational components of DHS are: U.S. Citizenship and Immigration Services (USCIS) promotes an awareness and understanding of citizenship, and ensures the integrity of the nation’s immigration system. Its records include asylum application files and other immigration-related documents. Customs and Border Protection (CBP) secures the border against transnational threats and facilitates trade and travel through the enforcement of federal laws and regulations relating to immigration, drug enforcement, and other matters. The agency maintains records related to agency operations, activities, and interactions. Immigration and Customs Enforcement (ICE) promotes homeland security and public safety through the criminal and civil enforcement of federal laws governing border control, customs, trade, and immigration. It maintains information related to the law enforcement records of immigrants and detainees, as well as information pertaining to human trafficking/smuggling, gangs, and arrest reports. 
According to its 2018 Chief FOIA Officer Report, DHS and its component agencies reported that they processed 374,945 FOIA requests in fiscal year 2018—the most of any federal government agency. As of its 2018 report, the department had a backlog of 53,971 unprocessed requests—the largest backlog of any federal agency. DHS Implemented Six Key FOIA Requirements to Help Improve its FOIA Operations Amendments and guidance relating to FOIA call for agencies, including DHS, to implement key requirements aimed at improving the processing of requests. Among others, these requirements call for agencies to (1) update response letters, (2) implement tracking systems, (3) provide FOIA training, (4) provide records online, (5) designate chief FOIA officers, and (6) update and publish timely and comprehensive regulations. As we noted in our June 2018 report, DHS had implemented these six FOIA requirements. Update response letters: The FOIA amendments require that certain information be included in agency response letters. For example, if part of a FOIA request is denied, agencies are required to inform requesters that they may seek assistance from the FOIA public liaison of the agency or the National Archives and Records Administration’s Office of Government Information Services (OGIS); file an appeal to an adverse determination within a period of time that is not less than 90 days after the date of such adverse determination; and seek dispute resolution services from the FOIA public liaison of the agency or OGIS. DHS had updated its FOIA response letters to include this specific information, as required per the amendments. Implement tracking systems: DHS used commercial automated systems, as called for by various FOIA amendments and guidance, and had established telephone or internet services to assist requesters in tracking the status of a request. The department used modern technology (e.g., mobile applications) to inform citizens about FOIA. 
The commercial systems allowed requesters to submit a request and track the status of that request online. In addition, DHS developed a mobile application that allowed FOIA requesters to submit a request and check its status. The department’s FOIA tracking systems were compliant with requirements of Section 508 of the Rehabilitation Act of 1973 (as amended), which required federal agencies to make their electronic information accessible to people with disabilities. Provide FOIA training: DHS’s chief FOIA officer offered FOIA training opportunities to staff in fiscal years 2016 and 2017, as required by the 2016 FOIA amendments. Specifically, the department provided training in responding to, handling, and processing FOIA requests. Provide records online: DHS posted records online for three categories of information: agency final opinions and orders, statements of policy, and frequently requested records, as required by 2009 memorandums from both the President and the Attorney General. Designate chief FOIA officers: DHS designated its Chief Privacy Officer as its Chief FOIA Officer. This position was a senior official at the assistant secretary or equivalent level, as required by a 2005 executive order and the 2007 FOIA amendments. Update and publish timely and comprehensive regulations: Guidance from the Department of Justice Office of Information Policy (OIP) encourages agencies to, among other things, describe their dispute resolution process; describe their administrative appeals process; notify requesters that they have a minimum of 90 days to file an administrative appeal; include a description of unusual circumstances and restrictions on an agency’s ability to charge certain fees when FOIA’s time limits are not met; and update agency regulations in a timely manner (i.e., update regulations by 180 days after the enactment of the 2016 FOIA amendment). 
DHS had addressed these five requirements in updating its regulations, as called for in the 2016 FOIA amendment and in related OIP guidance. DHS Identified Methods for Backlog Reduction, but Still Had Fluctuations The Attorney General’s March 2009 memorandum called on agency chief FOIA officers to review all aspects of their agencies’ FOIA administration and report to Justice on steps that have been taken to improve FOIA operations and disclosure. Subsequent Justice guidance directed agencies that had more than 1,000 backlogged requests in a given year to describe their plans to reduce their backlogs. Beginning in calendar year 2015, these agencies were to describe how they had implemented their plans from the previous year and whether that had resulted in a backlog reduction. In June 2018, we reported that DHS received about 191,000 to about 326,000 requests per year—the most requests of any agency—for a total of 1,320,283 FOIA requests in fiscal years 2012 through 2016. Further, the department had a backlog ranging from 28,553 in fiscal year 2012 to 53,971 in fiscal year 2018. The total numbers of these requests and backlogs are shown in table 1. We also reported that DHS, in its chief FOIA officer reports from fiscal years 2012 to 2016, stated that it had implemented several methods to reduce backlogs. According to the reports, the DHS Privacy Office, which is responsible for oversight of the department’s FOIA program, worked with components to help address the backlogs. The reports noted that the Privacy Office sent monthly emails to component FOIA officers on FOIA backlog statistics, convened management meetings, conducted oversight, and reviewed workloads. Leadership met weekly to discuss the oldest pending requests, appeals, and consultations, and determined steps needed to process those requests. In addition, in 2018, we noted that several other DHS components reported implementing actions to reduce backlogs. 
CBP hired and trained additional staff, encouraged requesters to file requests online, established productivity goals, updated guidance, and used better technology. USCIS, the National Protection and Programs Directorate, and ICE increased staffing or developed methods to better forecast future workloads to ensure adequate staffing. ICE also implemented a commercial off-the-shelf web application, awarded a multimillion-dollar contract for backlog reduction, and detailed employees from various other offices to assist in the backlog reduction effort. Due to these efforts by the Privacy Office and other components, the backlog dropped 66 percent in fiscal year 2015, decreasing to 35,374 requests. Yet, despite the continued efforts, the backlog numbers increased again. According to the 2018 Chief FOIA Officer’s report, the department ended 2018 with a backlog of 53,971 requests. DHS attributed these increases to several factors, including an increase in the number of requests received, the increased complexity and volume of responsive records for those requests, and the loss of staff needed to process the requests. In June 2018, we reported that one reason DHS was struggling to consistently reduce its backlogs is that it lacked documented, comprehensive plans that would provide a more reliable, sustainable approach to addressing backlogs. In particular, it did not have documented plans that described how it intended to implement best practices for reducing backlogs over time. These best practices, as identified by Justice’s OIP, included specifying how DHS would use metrics to assess the effectiveness of backlog reduction efforts and ensuring that senior leadership supports backlog reduction efforts. In our June 2018 report, we recommended that the department take steps to develop and document a plan that fully addresses best practices with regard to the reduction of backlogged FOIA requests. 
In response, DHS reported that it had initiated a department-wide compliance assessment and stated that it planned to use the results of the assessment to help guide it in identifying best practices and areas of improvement. As of this month (October 2019), the department stated that the draft plan is currently with the components for review and is pending clearance. Until it has a final plan that fully addresses best practices, DHS will likely continue to struggle to reduce its backlogs to a manageable level. This is particularly important, as the number and complexity of requests will likely increase over time. Duplication Exists in Certain Components’ Processing of Immigration Files Among the most frequent FOIA requests made to DHS are those for immigration files. These files usually contain various types of information pertaining to immigrants, including asylum applications, law enforcement records, and border crossing documents. As such, they may contain information and records that are generated by various DHS components or other agencies. In 2014, we reported that within DHS, three components—USCIS, CBP, and ICE—created most of the documents included in immigration files. USCIS was the custodian of the files, and all FOIA requests for such files were either initiated with, or referred to, USCIS for processing. Specifically, to process a FOIA request for an immigration file, the USCIS staff to whom the request was assigned first manually entered the requester’s data, such as a name and address, into USCIS’s FOIA system to establish a record of the request. Next, the staff retrieved and scanned the documents in the requested file and reviewed the documents. If all of the documents were generated by USCIS, the staff made redactions as needed, sent the documents to the requester, and closed out the request. 
Further, if the FOIA request covered files containing documents generated by CBP, then USCIS was able to process the request on the basis of an agreement to that effect with CBP. By having USCIS process such requests for CBP documents, the two components avoided duplication in their response to a FOIA request. In November 2014, however, we reported that USCIS and ICE did not have such an agreement for documents generated by ICE. Thus, the USCIS staff was to identify any such documents and make them available to ICE’s FOIA staff for their separate processing. In doing so, we noted that USCIS and ICE engaged in duplicative processing of FOIA requests for those immigration files containing documents related to law enforcement activities that were generated by ICE. Specifically, to facilitate ICE’s review of such files, USCIS staff transferred copies of the ICE-generated documents to a temporary electronic storage drive maintained by USCIS. ICE retrieved the documents, and the ICE staff then re-entered the data to create a new FOIA request in ICE’s FOIA processing system. The staff then proceeded with processing the requested documents, and released them to the requester—in essence, undertaking a new, and duplicate, effort to respond to the FOIA request. Figure 1 depicts the duplication that occurred in USCIS’s and ICE’s downloading and re-entering of data to respond to FOIA requests for immigration files. We noted that, up until April 2012, USCIS and ICE had an agreement whereby USCIS processed ICE’s documents contained in an immigration file. However, the components’ officials stated that, since that agreement ended, the components had not made plans to enter into another such agreement. According to ICE’s FOIA Officer, USCIS’s processing of ICE’s documents in immigration files was viewed as being too costly. 
Nonetheless, while there would be costs associated with USCIS processing ICE’s documents in immigration files, the potential existed for additional costs to be incurred in the continued duplicate processing of such files. Our work has noted that duplication exists when two or more agencies or programs are engaged in the same activities or provide the same services to the same beneficiaries. We concluded that the duplicate processing of a single FOIA request by USCIS and ICE staff contributed to an increase in the time needed to respond to a FOIA request for immigration files. Because USCIS did not send the immigration file to ICE until it had completed its own processing of the relevant documents—which, according to USCIS, took on average 20 working days—ICE usually did not receive the file to begin its own processing until the 20-day time frame for responding to a request had passed. We pointed out that re-establishing an agreement that allows USCIS to process ICE-generated documents included in requests for immigration files, to the extent that the benefits of doing so would exceed the cost, could enable the two components to eliminate duplication in their processes for responding to such a request. Further, it could help reduce the time needed by these components in responding to a request. Therefore, in November 2014, we recommended that DHS direct the Chief FOIA Officer to determine the viability of re-establishing the service-level agreement between USCIS and ICE to eliminate duplication in the processing of immigration files. We stressed that, if the benefits of doing so would exceed the costs, DHS should re-establish the agreement. We also reported on our finding and recommendation regarding duplicate processing in our reports and updates on fragmentation, overlap, and duplication, issued in 2015 through 2019. In response, DHS indicated that it was working on a system intended to address the duplication. 
Specifically, in August 2018, DHS’s Privacy Office Director of Correspondence/Executive Secretary stated that the Privacy Office was leading a working group in collaboration with the Office of the Chief Information Officer to develop requirements for a single information technology solution for processing incoming FOIA requests. The director added that DHS used three disparate systems to track, manage, and process FOIA requests and that moving USCIS and ICE to one processing solution should result in processing benefits and lower overall administrative costs. We continue to track DHS’s progress in implementing this recommendation. However, as of October 2019, DHS’s Privacy Office stated that these actions were still in progress. In conclusion, DHS has implemented a number of key FOIA practices. However, it does not have a comprehensive plan to address its FOIA backlog, nor has it yet addressed duplication in its FOIA process. Addressing both of these issues is important, as the number and complexity of requests will likely increase over time and DHS may be challenged in effectively responding to the needs of requesters and the public. Chairwoman Torres Small, Ranking Member Crenshaw, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions that you may have. GAO Contact and Staff Acknowledgments If you or your staffs have any questions about this testimony, please contact Vijay A. D’Souza, Director, Information Technology and Cybersecurity, at (202) 512-6240 or dsouzav@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony statement. GAO staff who made key contributions to this testimony include Neela Lakhmani and Anjalique Lawrence (assistant directors), Kara Epperson, Christopher Businsky, Nancy Glover, and Scott Pettis. This is a work of the U.S. government and is not subject to copyright protection in the United States. 
The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study FOIA requires federal agencies to provide the public with access to government records and information based on the principles of openness and accountability in government. Each year, individuals and entities file hundreds of thousands of FOIA requests. DHS continues to receive and process the largest number of FOIA requests of any federal department or agency. For fiscal year 2018, over 40 percent of federal FOIA requests (about 396,000) belonged to DHS. GAO was asked to summarize its November 2014 and June 2018 reports which addressed, among other things, (1) DHS's methods to reduce backlogged FOIA requests and (2) duplication in DHS's processing of FOIA requests. In conducting this prior work, GAO evaluated the department's and components' FOIA policies, procedures, reports, and other documentation; and interviewed agency officials. GAO also followed up on its recommendations to determine their implementation status. What GAO Found The Department of Homeland Security's (DHS) responsibilities for processing Freedom of Information Act (FOIA) requests are split between the department's Privacy Office, which acts as its central FOIA office, and FOIA offices in the department's component agencies, such as U.S. Citizenship and Immigration Services and Immigration and Customs Enforcement. In 2018, GAO reported that DHS had implemented several methods to reduce backlogged FOIA requests, including sending monthly emails to its components on backlog statistics and conducting oversight. In addition, several DHS components implemented actions to reduce their backlogs. Due to efforts by the department, the backlog dropped 66 percent in fiscal year 2015, decreasing to 35,374 requests. Although there was initial progress by the end of fiscal year 2015, the number of backlogged requests increased in fiscal years 2016 and 2018 (see figure). 
One reason DHS struggled to consistently reduce its backlogs was that it lacked documented, comprehensive plans that would provide a more reliable, sustainable approach to addressing backlogs and describe how it would implement best practices for reducing backlogs over time. DHS attributed the increase in its FOIA backlogs to several factors, including the increased number and complexity of requests received and the volume of responsive records for those requests. Until it develops a plan to implement best practices to reduce its backlogs, DHS will likely continue to struggle to reduce them to a manageable level. In addition, in 2014 GAO reported that certain immigration-related requests were processed twice by two different DHS components. The duplicate processing of such requests by the two components contributed to an increase in the time needed to respond to the requests. GAO continued to report this issue in its 2019 annual product on opportunities to reduce fragmentation, overlap, and duplication. What GAO Recommends In its prior reports, GAO made five recommendations to DHS. These included, among other things, that DHS (1) take steps to develop and document a plan that fully addresses best practices for reducing the number of backlogged FOIA requests and (2) eliminate duplicative processing of immigration-related requests. The department agreed with the recommendations. However, as of October 2019, DHS had not fully implemented all of them.
Background Overview of the Military Justice System According to the 2015 report ordered by the Secretary of Defense and issued by the Military Justice Review Group, the military justice system is designed to ensure discipline and order in the armed forces, since crimes committed by servicemembers have the potential to destroy the bonds of trust, seriously damage unit cohesion, and compromise military operations. The jurisdiction of the UCMJ extends to all places and applies to all active-duty servicemembers. UCMJ jurisdiction applies to other individuals as well, such as members of the National Guard or reserves who are performing active-duty service; retired members who are entitled to pay or are receiving hospitalization in a military hospital; prisoners of war in custody of the armed forces; persons serving with or accompanying the armed forces in the field in time of declared war or contingency operations, such as contractors; and members of organizations such as the National Oceanic and Atmospheric Administration and the Public Health Service when assigned to and serving with the armed forces. In creating the military justice system, Congress established three types of military courts, called courts-martial: summary, special, and general. Each type is intended to address progressively more serious offenses, and each may adjudicate correspondingly more severe maximum punishments as prescribed under the UCMJ. In addition, an accused servicemember can receive nonjudicial punishment under Article 15 of the UCMJ, by which a commander can punish a servicemember without going through the court-martial process. Table 1 provides an overview of nonjudicial punishments and the three different types of courts-martial. The Military Justice Act of 2016 enacted significant reforms to the UCMJ, most of whose provisions became effective on January 1, 2019. 
These reforms included changes such as limitations on the types of punishments permitted with nonjudicial punishment, changes to the required size of the panel, or jury, and changes to which judicial outcomes are subject to automatic appeal. In some areas, individual services supplement the UCMJ while remaining consistent with it. For example, the Air Force provides a right to counsel in certain forums where the services are not required to do so. In addition to the reforms affecting the UCMJ, the Military Justice Act of 2016 also directed changes to military justice data collection and accessibility. Specifically, section 5504 of the Military Justice Act of 2016 directed the Secretary of Defense to prescribe uniform standards and criteria pertaining to case management, data collection, and accessibility of information in the military justice system. As a result, the DOD Office of General Counsel authorized the establishment of the Article 140A Implementation Subcommittee of the Joint Service Committee on Military Justice to, among other things, assess each service’s case management system, recommend what data fields the services should collect, propose uniform definitions for those data fields, and recommend standardized methods and data field definitions to improve the collection of data concerning the race and ethnicity of individuals involved in the military justice system. The subcommittee conducted a study and submitted its recommendations to the Joint Service Committee Voting Group on July 2, 2018, and the Voting Group submitted a report and its agreed-upon recommendations to the DOD Office of General Counsel on August 24, 2018. The Military Justice Act of 2016 provides that the Secretary of Defense was to carry out this mandate by December 23, 2018, and that the Secretary’s decisions shall take effect no later than December 23, 2020. 
On December 17, 2018, the General Counsel of the Department of Defense issued uniform standards and criteria, which directed that each military justice case processing and management system be capable of collecting uniform data concerning race and ethnicity. Military Justice Process From fiscal years 2013 through 2017, more than 258,000 active-duty servicemembers were disciplined for a violation of the UCMJ, out of more than 2.3 million unique active-duty servicemembers who served across all of the military services during this period. Figure 1 shows the number of cases of each type of court-martial and of nonjudicial punishments in each of the military services. There are several steps in the discipline of a servicemember who allegedly commits a crime under the UCMJ, which are summarized in figure 2 below. The military justice process begins once an offense is alleged and an initial report is made, typically to law enforcement, an investigative entity, or the suspect’s chain of command. Policies for initiating criminal investigations by military criminal investigative organizations (MCIO) and procedures for investigating criminal allegations are set forth in DOD and service guidance. At this time, the commanding officer or law enforcement will conduct an inquiry or investigation into the accusations and gather all reasonably available evidence. MCIOs have the authority and independent discretion to assume investigative jurisdiction, and do not require approval from any authority outside of the MCIO to conduct such an investigation—commanders outside of the organization are not to impede or interfere with such decisions or investigations by the MCIO. If an MCIO is involved in the inquiry, the investigative entity is to gather all reasonably available evidence and provide the commanding officer with unbiased findings that reflect impartiality as required by DOD instruction. 
According to service officials, during the conduct of the criminal investigation, the subject of the investigation has the right to obtain legal counsel at any time. After an investigation, the first step toward initiation of a court-martial is when the accused is presented with a list of charges signed by the accuser under oath, which is called preferral of charges; the accuser who prefers the charges may be anyone subject to the UCMJ. After charges are preferred, the charges are forwarded to an officer with sufficient legal authority to convene a court-martial, also known as the “convening authority.” The convening authority in receipt of preferred charges may, among other actions and depending on the nature of the charges and the level of the convening authority, refer the case to its own court or forward the case to a superior commander for disposition, for example, to a general court-martial convening authority. The general court-martial convening authority would have similar options: to dismiss the charges, refer them to a general or special court-martial, or take some lesser action. Before any case is referred to a general court-martial, the case must proceed through a preliminary hearing under Article 32 of the UCMJ, unless waived by the accused. The Article 32 hearing is presided over by an impartial judge advocate, or another individual with statutory authority, who is appointed by the convening authority and makes a recommendation to the convening authority. We analyzed general and special courts-martial that were preceded by investigations recorded in databases maintained by MCIOs, which we refer to as recorded investigations, and general and special courts-martial that did not have a record within an MCIO database. 
As shown in figure 3 below, the majority of general and special courts-martial, ranging from 53 percent to 74 percent across the services, had a recorded investigation, while the remaining cases would have been investigated by other sources, such as local civilian law enforcement, command investigations, or, in the case of the Air Force, its military law enforcement forces. Once referred to a general or special court-martial, an accused servicemember may be tried by a military judge alone or by a military judge with a military jury, referred to as members of the court-martial. If the accused servicemember is tried by a military jury, the members of the court-martial determine whether the accused is proven guilty and, if the accused requests sentencing by the members, adjudicate a sentence. Otherwise, the military judge adjudicates the sentence. If the accused is tried by a military judge alone, the judge determines guilt and any sentence. In a summary court-martial, a single commissioned officer who is not a military judge adjudicates minor offenses and a sentence. Convictions at the general and special court-martial level are subject to a post-trial process and may be appealed to higher courts in cases where the sentence reaches a certain threshold. For example, depending on the forum and the adjudged sentence, the accused may be entitled to appellate review by the service Court of Criminal Appeals, and may be able to request or waive assignment of appellate defense counsel, or waive appellate review entirely. Depending, again, on forum and sentence, some cases that do not qualify for appellate review will receive review by a judge advocate to, among other things, determine that the court had jurisdiction and that the sentence was lawful. Some cases may then be further reviewed by the Court of Appeals for the Armed Forces, as well as by the U.S. Supreme Court at its discretion, if the case was reviewed by the Court of Appeals for the Armed Forces. 
The military justice system, like the civilian criminal justice system, provides avenues for accused servicemembers to raise allegations of discrimination, improprieties in investigations, improprieties in disposition, and improprieties in the selection of panel members at the court-martial proceeding, before a military judge and on appellate review. The Military Justice Act of 2016 requires that legal training be provided to all officers, with additional training for commanders with authority to take disciplinary actions under the UCMJ. Definitions of Race, Ethnicity, and Gender The Office of Management and Budget (OMB) has established standards for collecting, maintaining, and presenting data on race and ethnicity for all federal reporting purposes. These standards were developed in cooperation with federal agencies to provide consistent data on race and ethnicity throughout the federal government. OMB standards establish the following five categories of race:

American Indian or Alaska Native: A person having origins in any of the original peoples of North and South America (including Central America), and who maintains tribal affiliation or community attachment.

Asian: A person having origins in any of the original peoples of the Far East, Southeast Asia, or the Indian subcontinent including, for example, Cambodia, China, India, Japan, Korea, Malaysia, Pakistan, the Philippine Islands, Thailand, and Vietnam.

Black or African American: A person having origins in any of the black racial groups in Africa.

Native Hawaiian or Other Pacific Islander: A person having origins in any of the original peoples of Hawaii, Guam, Samoa, or other Pacific Islands.

White: A person having origins in any of the original peoples of Europe, the Middle East, or North Africa.

The OMB standards also establish two categories of ethnicity.

Hispanic or Latino: A person of Cuban, Mexican, Puerto Rican, South or Central American, or other Spanish culture or origin, regardless of race. 
Not Hispanic or Latino: A person not having the above attributes.

In addition to defining race and ethnicity for federal administrative reporting and record keeping requirements, OMB standards provide two methods for federal agencies to follow regarding the collection of data on race and ethnicity.

1. Separate questions shall be used for collecting information about race and ethnicity wherever feasible. In this case, there are 5 categories of race noted above which individuals can select, and individuals can identify with more than one category of race. In addition to race, individuals can select one of the two ethnicity categories above.

2. If necessary, a single question or combined format can be used to collect information about race and ethnicity, where the following categories are provided for individuals: American Indian or Alaska Native, Asian, Black or African American, Hispanic or Latino, Native Hawaiian or other Pacific Islander, and White. In this instance, individuals can also select more than one category.

Information collected on servicemembers’ gender is governed by DOD guidance. DOD Instruction 1336.05 provides that information collected on a servicemember’s gender is based on reproductive function. It provides that there are three options that can be selected when inputting a servicemember’s gender: male, female, or unknown. Racial and Gender Disparities in the Civilian Justice System Racial and gender disparities in the civilian criminal justice system have been the subject of several studies in the past decade. While the civilian and military justice systems differ from each other, we reviewed information about racial and gender disparities in the civilian criminal justice system to enhance our understanding of the complexities of the issues, including how others had attempted to measure disparities. Some studies have assessed the rates at which minority groups are policed. 
For example, a Department of Justice study of data from the Bureau of Justice Statistics’ 2011 Police-Public Contact survey found that Black drivers were more likely than White or Hispanic drivers to be pulled over in a traffic stop; specifically, the study found that 10 percent of White drivers and 10 percent of Hispanic drivers were pulled over in a traffic stop, compared to 13 percent of Black drivers. This study also found that Black and Hispanic drivers were more likely to be searched once they were pulled over by the police; specifically, the study found that 2 percent of White drivers stopped by police were searched, compared to 6 percent of Black drivers and 7 percent of Hispanic drivers. In addition, U.S. government data shows that racial disparities exist among individuals who are arrested. For example, data from the Federal Bureau of Investigation’s Uniform Crime Reporting Program, which compiles data from law enforcement agencies across the country, indicates that in 2016, Black individuals represented 26.9 percent of total arrests nationwide, but comprised 13.4 percent of the U.S. population according to U.S. census data estimates as of July 1, 2017. This data also shows that 69.6 percent of all arrested individuals were White, while White individuals comprised 76.6 percent of the U.S. population. Studies have also identified racial and gender disparities in civilian justice sentencing. In 2010 and 2017, the U.S. Sentencing Commission reported that Black male offenders received longer sentences than similarly situated White male offenders. Specifically, in 2017, the Commission analyzed federal sentencing data and reported that Black male offenders received sentences that on average were 19.1 percent longer than similarly situated White males for fiscal years 2012 to 2016. This analysis controlled for factors such as type of offense, race, gender, citizenship, age, education level, and criminal history. 
This study also found that female offenders of all races received shorter sentences than White male offenders. Similarly, the Commission’s 2010 report found that Black offenders received sentences that were 10 percent longer than those imposed on White offenders from December 2007 through September 2009, and male offenders received sentences that were 17.7 percent longer than female offenders, after controlling for the same factors as noted for the 2017 study, among others. Finally, racial and gender disparities have been identified among incarcerated populations. According to data from the Bureau of Justice Statistics, for prisoners with sentences of 1 year or more under the jurisdiction of state or federal correctional officials in 2016, Black males were six times more likely to be imprisoned than White males, and Hispanic males were 2.7 times more likely to be imprisoned than White males. The racial disparities were more pronounced for younger males, where Black males aged 18 to 19 were approximately 11.8 times more likely than White males of the same age to be imprisoned. The Bureau also reported that Black females were imprisoned at approximately twice the rate of White females. We did not assess the methodologies used in any of these studies or the reliability of the data cited in the studies; these studies are discussed here to provide broader context for the discussion about racial and gender disparities in the military justice system. The Military Services Collect and Maintain Gender Information, but Do Not Collect and Maintain Consistent Information about Race and Ethnicity, Limiting Their Ability to Collectively or Comparatively Assess Data to Identify Any Disparities The military services collect and maintain gender information, but they do not collect and maintain consistent information about race and ethnicity in their investigations, military justice, and personnel databases. 
This limits the military services’ ability to collectively or comparatively assess these demographic data to identify any racial or ethnic disparities in the military justice system within and across the services. The military services use different databases to collect and maintain information for investigations, courts-martial, and nonjudicial punishments. All of the databases collect and maintain gender information, but the Coast Guard’s military justice database does not have the capability to query or report on gender data. While the military services’ databases collect and maintain complete data for race and ethnicity, the information collected and maintained about race and ethnicity is not consistent among the different databases within and across the services. Moreover, the Coast Guard, the Navy, and the Marine Corps do not collect and maintain complete and consistent servicemember identification data, such as social security number or employee identification number, in their respective military justice databases, although DOD leadership recently directed improvements in this area. Finally, the military services do not report data that provides visibility into disparities in the military justice system, and DOD and the services lack guidance about when potential racial, ethnic, or gender disparities should be further reviewed, and what steps should be taken to conduct such a review if needed. The Military Services Use Different Databases to Collect and Maintain Information for Investigations, Courts-Martial, and Nonjudicial Punishments Each military service uses a different database to collect and maintain information on investigations and courts-martial and, in some services, nonjudicial punishments, as shown in figure 4. For three of the military services—the Army, the Navy, and the Coast Guard—the databases listed in figure 4 include information about some, but not all, of their nonjudicial punishment cases. 
Additionally, the nature of the information collected by each of the services’ databases varies, as noted below.

Investigations. The Army collects and maintains information on investigations conducted by the Army Criminal Investigation Command in the Army Law Enforcement Reporting and Tracking System database. According to Army officials, the Office of the Provost Marshal General and the Army Criminal Investigation Command developed this database to replace a 2003 system, the Army Criminal Investigation and Intelligence System, and a significant part of the military police’s 2002 system, the Centralized Operations Police Suite. The officials said that the Army Law Enforcement Reporting and Tracking System has been operational since 2015, and has become the primary case management system for all Army law enforcement professionals. However, Army officials said that cases involving commander-led investigations are unlikely to be recorded in this database.

Courts-martial and nonjudicial punishments. The Army uses Military Justice Online and the Army Courts-Martial Information System to collect data on court-martial cases. According to Army officials, Military Justice Online, created in 2008, is a document-generating system that primarily is used by the Army’s judge advocate general corps and promotes uniformity in case processing among the Army’s staff judge advocate offices. Military Justice Online includes information about courts-martial, some nonjudicial punishments, administrative separations, and administrative reprimands of servicemembers. Army officials said that the Army Courts-Martial Information System, which has been used since 1989, serves as the Army trial judiciary’s case tracking system and is used by the Army’s trial judiciary to track court-martial cases.

Investigations. The Air Force military criminal investigative organization, the Office of Special Investigations, uses a system called the Investigative Information Management System to collect and maintain information related to investigations. According to Air Force officials, the Investigative Information Management System has been in use since 2001.

Courts-martial and nonjudicial punishments. The Air Force uses the Automated Military Justice Analysis and Management System, which is designed to be a case management system to collect comprehensive information for both court-martial cases and nonjudicial punishments. According to Air Force officials, the Automated Military Justice Analysis and Management System has been in use since 1974.

Investigations. According to Navy officials, the Navy and Marine Corps’ joint system for maintaining and collecting information related to investigations is the Consolidated Law Enforcement Operations Center, which has been in use since 2004. Navy officials said that this database initially contained information regarding Navy and Marine Corps law enforcement incidents and criminal investigations, but began to include investigations conducted by the Naval Criminal Investigative Service in 2012.

Courts-martial. The Navy and the Marine Corps both use the Case Management System to collect and maintain information about military justice matters with involvement by a Navy or Marine Corps legal office, including special and general court-martial cases. This system was initially developed by the Marine Corps to track information about legal services provided by their legal offices. According to Navy and Marine Corps officials, the system has been in use by the Marine Corps since 2010 and by the Navy since 2013. Officials from the Marine Corps said that although the Case Management System has been in use since 2010, the system was not widely used until 2012.

Nonjudicial punishments. The Marine Corps Total Force System, the Marine Corps personnel database, collects and maintains information on summary courts-martial and nonjudicial punishments for cases where there was a conviction or punishment. According to Marine Corps officials, this system has been in use since 1995. Navy officials said that their personnel database records information about nonjudicial punishments if the punishment involved a change in pay or grade. The services’ military justice Case Management System includes information on some nonjudicial punishment cases in the Navy and the Marine Corps, which Navy and Marine Corps officials said was for those cases that had involvement by their legal offices.

Investigations. The Coast Guard Investigative Service uses the Field Activity Case Tracking System to collect and maintain information on servicemembers investigated for violations of the UCMJ. According to Coast Guard officials, this system has been in use since July 2014.

Courts-martial. According to Coast Guard officials, the Coast Guard uses Law Manager to collect and maintain administrative information on court-martial cases. Law Manager has been in use since 2000, but was not used for court-martial data until 2003.

Nonjudicial punishments. Coast Guard officials said that their military justice database contains records of nonjudicial punishments if a case involved their legal offices. In addition, according to Coast Guard officials, Direct Access, the Coast Guard’s personnel database, also collects and maintains information about some court-martial cases and nonjudicial punishments if the punishment resulted in a change in rank or pay or an administrative action against the accused servicemember. 
The Military Services Collect and Maintain Gender Data, but the Coast Guard Cannot Query or Report on Gender Data from Its Military Justice Database All of the military services collect and maintain gender information in their investigations, military justice, and personnel databases, but they are inconsistent in whether they allow an unknown or unspecified gender, and the Coast Guard’s military justice database does not allow Coast Guard officials to query or report on gender data. Table 2 below summarizes how data regarding the servicemember’s gender is entered into the services’ databases and the number of potential gender options. Each database identifies at least two potential options—male and female—for data related to the servicemember’s gender, while about half of the databases (8 of 15) provide a third option to indicate that the gender is either unknown or not specified. Each of the military services’ investigations, military justice, and personnel databases maintained gender data for almost 100 percent of servicemembers; the exception was the Coast Guard’s military justice database, for which we were unable to determine this completion rate. We could not determine the completeness of the Coast Guard’s gender data in its military justice database because, as previously noted, that database does not have the capability to query on gender data. Standards for Internal Control in the Federal Government states that management should use quality information and obtain data on a timely basis so they can be used for effective monitoring. However, the Coast Guard does not have visibility over the gender of servicemembers prosecuted for UCMJ violations without merging data from multiple databases, which can be a labor-intensive and time-consuming process. 
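To illustrate why recovering gender for military justice cases currently requires merging data across databases, a minimal sketch of such a record linkage follows. The field names, identifiers, and sample records are hypothetical and do not reflect the actual schemas of Law Manager, Direct Access, or any other service database.

```python
# Hypothetical sketch: recovering demographic data for military justice
# cases by linking them to personnel records on a shared identifier.
# Field names and sample records are illustrative only.

justice_cases = [
    {"member_id": "A001", "case_type": "special court-martial"},
    {"member_id": "A002", "case_type": "general court-martial"},
    {"member_id": None,   "case_type": "special court-martial"},  # no identifier recorded
]

personnel = {
    "A001": {"gender": "Male"},
    "A002": {"gender": "Female"},
}

matched, unmatched = [], []
for case in justice_cases:
    record = personnel.get(case["member_id"]) if case["member_id"] else None
    if record:
        matched.append({**case, **record})   # case enriched with demographics
    else:
        unmatched.append(case)               # cannot be linked without an ID

# The match rate shows how complete the identifier data is.
match_rate = len(matched) / len(justice_cases)
print(f"Matched {len(matched)} of {len(justice_cases)} cases "
      f"({match_rate:.0%}); {len(unmatched)} could not be linked.")
```

Even this toy version shows why the process is labor-intensive at scale: every case with a missing or mismatched identifier must be investigated individually before any demographic analysis can proceed.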
According to Coast Guard officials, information regarding the gender of servicemembers prosecuted for UCMJ violations can be recorded in its military justice database, but gender is not a field that can be searched on or included in the reports they run using information from their military justice database, because of the way the military justice module in the database was designed. Coast Guard officials told us that the military justice database—Law Manager—was designed to determine the status of court-martial cases, and captures attributes that are generated by relevant UCMJ documents. Those official documents do not require the annotation of demographics such as gender, so this information is not used in Law Manager. A Coast Guard official indicated that it would be feasible to modify Law Manager to make it easier to run reports and queries that include gender information. The ability to query and report on the gender of servicemembers in its military justice database would provide the Coast Guard with more readily available data to identify or assess any gender disparities that may exist in the investigation and trial of military justice cases. The Military Services Do Not Collect and Maintain Consistent Data for Race and Ethnicity Each of the military services’ databases collect and maintain complete data for race and ethnicity, but the military services do not collect and maintain consistent information regarding race and ethnicity in their investigations, military justice, and personnel databases. Additionally, the military services have not developed a mechanism to aggregate the data into consistent categories of race and ethnicity to allow for efficient analysis and reporting of consistent demographic data. The number of potential responses for race and ethnicity within the 15 databases across the military services ranges from 5 to 32 options for race and 2 to 25 options for ethnicity, which can complicate cross-service assessments. 
For example, the Army’s personnel database maintains 6 options for race and 23 options for ethnicity, whereas the Coast Guard’s personnel database maintains 7 options for race and 3 for ethnicity. Table 3 summarizes how the databases used by the military services vary in how the servicemember’s race is entered and the number of potential race options. Table 4 shows that the military services’ databases also vary in how information about servicemembers’ ethnicity is entered into the databases and the number of potential ethnicity options that are collected. Although the data collected and maintained was not consistent within and across the military services, each of the military services’ databases maintained race and ethnicity data for at least 99 percent of the servicemembers, with the exception of the Coast Guard. The Coast Guard does not track information about race or ethnicity in its military justice database. Coast Guard officials stated that this is because Law Manager was designed to determine the status of court-martial cases, and captures attributes that are needed to generate relevant UCMJ documents, such as court pleadings. Demographic information such as race and ethnicity is not included in these official documents, so this information is not input into Law Manager. Further, four of the databases we reviewed—including both of the Army’s military justice databases, and the Navy and the Marine Corps’ military justice databases—collect information on race and ethnicity in a combined data field as shown in table 4, whereas the other databases collect and maintain race and ethnicity information in two separate fields. Standards for Internal Control in the Federal Government states that management should use quality information to achieve the entity’s objectives. 
Among other things, attributes of this internal control principle call for management to identify information requirements; obtain relevant data from reliable sources that are reasonably free from error; ensure that the data it receives is timely and reliable; and process the data obtained into quality information— information that is appropriate, current, complete, and accurate. In addition, federal internal control standards call for management to design the entity’s information system and related control activities to achieve objectives and respond to risks, thereby enabling information to become available to the entity on a timelier basis. Further, the Military Justice Act of 2016 required the Secretary of Defense to prescribe uniform standards and criteria for various items, including data collection and analysis for case management at all stages of the military justice system, including pretrial, trial, post-trial, and appellate processes, by December 2018. On December 17, 2018, the General Counsel of the Department of Defense issued the uniform standards and criteria required by article 140a of the Military Justice Act of 2016. As part of these uniform standards, the services were directed to collect data related to race and ethnicity in their military justice databases, and to collect racial and ethnic data in separate data fields. The standards provide that the services may have their military justice databases capture expanded ethnic or racial categories; however, for reporting purposes, expanded categories will aggregate to those categories listed in the standards. For race, the services will choose from six designations: (1) American Indian/Alaska Native, (2) Asian, (3) Black or African American, (4) Native Hawaiian or Other Pacific Islander, (5) White, or (6) Other. For ethnicity, the services will choose from two options: (1) Hispanic or Latino, or (2) Not Hispanic or Latino. 
These categories are consistent with the OMB standards for collecting and presenting such data. The military services are to implement the Secretary’s direction no later than December 23, 2020. However, DOD has applied these newly issued standards only to the military justice databases and not to the investigations and personnel databases. DOD officials stated that the investigations and personnel databases do not fall under the charter of the DOD General Counsel, which issued the standards for the military justice databases. Hence, these uniform standards do not apply to the military services’ investigations and personnel databases. We were able to analyze data across the investigations, military justice, and personnel databases by merging data from these databases, but this took multiple, detailed steps and would not be an efficient approach for routine analyses. Taking steps to develop the capability to present the race and ethnicity data in the military services’ personnel and investigations databases using the same categories included in the December 2018 standards for the military justice databases would allow for more efficient analysis of consistent demographic data. This could be done through either collecting and maintaining race and ethnicity data in the investigations and personnel databases using the December 2018 uniform standards or developing a capability to aggregate the data into the race and ethnicity categories included in the standards. The Navy, the Marine Corps, and the Coast Guard Did Not Collect and Maintain Complete Servicemember Identification Data, but Improved Collection Has Been Directed The Navy, the Marine Corps, and the Coast Guard did not collect and maintain complete servicemember identification data, such as social security number or employee identification number, in their military justice or investigations databases; however, DOD recently directed them to do so. 
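The aggregation described earlier, in which expanded service-specific race and ethnicity entries are rolled up for reporting into the standards' six race designations and two ethnicity options, amounts to a simple lookup. The sketch below illustrates this; the expanded labels are hypothetical examples, not the services' actual category codes.

```python
# Roll up expanded, service-specific race/ethnicity entries to the
# categories listed in the December 2018 uniform standards.
# The expanded labels on the left are hypothetical examples.
RACE_ROLLUP = {
    "Japanese": "Asian",
    "Filipino": "Asian",
    "Samoan": "Native Hawaiian or Other Pacific Islander",
    "Alaska Native": "American Indian/Alaska Native",
    "Black": "Black or African American",
    "White": "White",
}

ETHNICITY_ROLLUP = {
    "Mexican": "Hispanic or Latino",
    "Puerto Rican": "Hispanic or Latino",
    "Cuban": "Hispanic or Latino",
    "Not Hispanic": "Not Hispanic or Latino",
}

def aggregate_race(expanded: str) -> str:
    # Entries with no mapping fall into the standards' "Other" designation.
    return RACE_ROLLUP.get(expanded, "Other")

def aggregate_ethnicity(expanded: str) -> str:
    # Treat unmapped entries as "Not Hispanic or Latino" (an assumption
    # made for this sketch, not a rule stated in the standards).
    return ETHNICITY_ROLLUP.get(expanded, "Not Hispanic or Latino")
```

Developing such a rollup capability for the personnel and investigations databases would let their expanded categories be reported in the same terms as the military justice databases.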
In the course of conducting our analysis, in some instances, we could not match personnel records with military justice records because the social security number or employee identification number in the military justice database did not match the information in the personnel database. In other instances, we could not match personnel records with military justice records because the military justice records did not contain a social security number or employee identification number to match with information found in their personnel record. As shown in table 5, we initially were unable to match 5 percent of Navy military justice cases, 12 percent of Marine Corps military justice cases, 18 percent of Coast Guard investigation cases, and 6 percent of Coast Guard military justice cases. On December 17, 2018, the General Counsel of the Department of Defense issued the uniform standards and criteria required by article 140a of the Military Justice Act of 2016. As part of these uniform standards, the services were directed to collect either the social security number or DOD identification number in their military justice databases. The military services are to implement the Secretary’s direction no later than December 23, 2020. The Military Services Do Not Consistently Report Data that Provides Visibility into Any Disparities, and DOD Has Not Identified When Disparities Should Be Examined Further Although some military services report demographic information about the subjects of military justice actions internally, the military services do not externally report data that provides visibility into, or would enable an analysis of, the extent of racial, ethnic, or gender disparities in the military justice system. Service officials from all of the military services told us that they compile internal quarterly or monthly staff judge advocate reports, which include the total number of each type of court-martial handled by their legal offices and of nonjudicial punishments. 
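The record-matching step described above, joining military justice records to personnel records on either a social security number or an employee identification number and counting the records left over, can be sketched as follows. The record layout and counts are hypothetical.

```python
# Match military justice records to personnel records on either a social
# security number or an employee identification number, and count the
# justice records that cannot be matched. Record contents are hypothetical.
def match_rate(justice_records, personnel_records):
    ids = set()
    for rec in personnel_records:
        if rec.get("ssn"):
            ids.add(rec["ssn"])
        if rec.get("emp_id"):
            ids.add(rec["emp_id"])
    unmatched = [
        rec for rec in justice_records
        if rec.get("ssn") not in ids and rec.get("emp_id") not in ids
    ]
    return len(unmatched), round(100 * len(unmatched) / len(justice_records))

personnel = [{"ssn": "111", "emp_id": "A1"}, {"ssn": "222", "emp_id": "A2"}]
justice = [
    {"ssn": "111"},                 # matches on SSN
    {"emp_id": "A2"},               # matches on employee ID
    {"ssn": None, "emp_id": None},  # no identifier recorded: unmatched
    {"ssn": "999"},                 # identifier disagrees: unmatched
]
count, pct = match_rate(justice, personnel)  # 2 unmatched, 50 percent
```

Both failure modes in the toy data mirror the ones described above: a missing identifier and an identifier that does not agree between databases.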
According to service officials, in the Air Force and the Army these reports include demographic information about servicemembers involved in these cases, such as the total number of each type of case broken out by the subject’s race, ethnicity, or gender, but the Navy, Marine Corps, and Coast Guard reports do not include this demographic information, and there is no requirement to do so. Regarding external reporting, the UCMJ directs the Court of Appeals for the Armed Forces, the Judge Advocates General, and the Staff Judge Advocate to the Commandant of the Marine Corps to submit annual reports on the military justice system to the Congressional Armed Services Committees, the Secretary of Defense, the secretaries of the military departments, and the Secretary of Homeland Security. These reports are to include information on the number and status of pending cases handled in the preceding fiscal year, among other information. The annual reports include the total number of cases each service handled for each type of court-martial and for nonjudicial punishments. However, these annual reports do not include demographic information about servicemembers who experienced a military justice action, such as breakdowns by race or gender, because the reporting requirement does not direct the services to include such information. A DOD official expressed concern about expanding the reporting requirement to have public dissemination of race, ethnicity, and gender information due to the potential for misinterpretation, but stated that such reporting requirements for internal use would be beneficial. However, Congress and members of the public have expressed an interest in this information. Standards for Internal Control in the Federal Government state that management should externally communicate the necessary quality information to achieve the entity’s objectives. 
Furthermore, these standards state that management should use quality information to make informed decisions and evaluate the entity’s performance. According to DOD guidance, the Joint Service Committee on Military Justice, a committee comprised of representatives from each service’s legal office, is responsible for reviewing the Manual for Courts-Martial and the UCMJ on an annual basis. The Joint Service Committee can consider suggested changes to the UCMJ or the Manual for Courts-Martial or its supplementary materials from the services or from the general public. The Joint Service Committee then determines whether to propose any desired amendments to the UCMJ, or the Manual for Courts-Martial or its supplementary materials. If the Joint Service Committee finds that an amendment to either the Manual for Courts-Martial or the UCMJ is required, the committee will provide the General Counsel of DOD with a draft executive order containing the recommended amendments or will forward a legislative proposal to amend the UCMJ. While it is unclear whether the committee has ever considered or proposed an amendment to the UCMJ or Manual for Courts-Martial that would require the external reporting on an annual basis of demographic information about the race, ethnicity, and gender of servicemembers charged with violations of the UCMJ, no such change has been made. Reporting this information would provide servicemembers and the public with greater visibility into potential disparities and help build confidence that DOD is committed to a military justice system that is fair and just. Furthermore, DOD has not issued guidance that establishes criteria to specify when any data indicating possible racial, ethnic, or gender disparities in the investigations, trials, or outcomes of cases in the military justice system should be further reviewed, and to describe what steps should be taken to conduct such a review if it were needed. 
GAO’s Standards for Internal Control in the Federal Government provides that an agency needs to establish a baseline in order to perform monitoring activities. The baseline helps the agency understand and address deficiencies in its operations. While equal employment opportunity enforcement is a very different context than the military justice system, other federal agencies have developed such criteria in the equal employment opportunity context that can indicate when disparities should be examined further. For example, the Department of Justice, the Department of Labor, the Equal Employment Opportunity Commission, and the Office of Personnel Management use a “four-fifths” test to determine when differences between subgroups in the selection rates for hiring, promotion, or other employment decisions are significant. These criteria, though inexact, provide an example of the type of criteria that DOD could consider using as a basis for determining when disparities among racial or gender groups in the military justice process could require further review or analysis. By issuing guidance that establishes criteria for determining when data indicating possible racial and gender disparities in the investigations, trials, or outcomes of cases in the military justice system should be further examined, and describes the steps that should be taken to conduct such further examination, DOD and the services would be better positioned to monitor the military justice system to help ensure that it is fair and just, a key principle of the UCMJ. Racial and Gender Disparities Exist in Military Justice Investigations, Disciplinary Actions, and Case Outcomes, but Have Not Been Comprehensively Studied to Identify Causes Racial and gender disparities exist in investigations, disciplinary actions, and punishment of servicemembers in the military justice system, and gender disparities exist in convictions in the Marine Corps. 
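The "four-fifths" test described earlier in this section can be illustrated with a short computation: each group's selection rate is compared to the rate of the most-selected group, and a ratio below 0.8 flags a disparity for further review. The group names and figures below are hypothetical.

```python
# "Four-fifths" test: flag any group whose selection rate is less than
# 80 percent of the rate for the most-selected group. Figures hypothetical.
def four_fifths_flags(selected, eligible):
    rates = {g: selected[g] / eligible[g] for g in eligible}
    top = max(rates.values())
    return {g: (rate / top) < 0.8 for g, rate in rates.items()}

eligible = {"Group A": 200, "Group B": 100}
selected = {"Group A": 50, "Group B": 15}   # rates: 0.25 vs. 0.15
flags = four_fifths_flags(selected, eligible)
# 0.15 / 0.25 = 0.6, which is below 0.8, so Group B is flagged for review.
```

As the report notes, such criteria are inexact; the point of the sketch is that a simple, pre-established threshold makes "when to look further" a mechanical question rather than an ad hoc one.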
Our analysis of available data from fiscal years 2013 through 2017, which controlled for attributes such as race, gender, rank, education, and years of service, found racial and gender disparities were more likely in actions that first brought servicemembers into the military justice system. Specifically, we found that: Black, Hispanic, and male servicemembers were more likely than White and female servicemembers to be the subjects of recorded investigations in all of the military services, and were more likely to be tried in general and special courts-martial in the Army, the Navy, the Marine Corps, and the Air Force. There were fewer statistically significant racial and gender disparities in most military services in general and special courts-martial that were preceded by a recorded investigation than in general and special courts-martial overall. We also found that statistically significant racial and gender disparities in general and special courts-martial that did not follow a recorded investigation were similar to those we identified for general and special courts-martial overall. Black and male servicemembers were more likely than White and female servicemembers to be tried in summary courts-martial and to be subjects of nonjudicial punishment in the Air Force and the Marine Corps. The Army and the Navy did not maintain complete data, and the Coast Guard had too few summary courts-martial for us to analyze, and did not maintain complete nonjudicial punishment data. We identified fewer statistically significant racial or gender disparities in case outcomes—convictions and punishment severity. Specifically: Race was not a statistically significant factor in the likelihood of conviction in general and special courts-martial in the Army, the Navy, the Marine Corps, and the Air Force, but gender was a statistically significant factor in the Marine Corps. 
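The analyses above used multivariate regression models. As a deliberately simplified stand-in for what "controlling for other attributes" means, rates can be compared between groups within strata of an attribute (here, a hypothetical rank grouping) rather than pooled across it. This stratified comparison is not the report's actual method, only an illustration of the idea; all counts are hypothetical.

```python
# Simplified illustration of controlling for an attribute: compare
# disciplinary-action rates between two groups within each rank stratum,
# instead of pooling across ranks. All counts are hypothetical.
def stratified_rates(records):
    # records: iterable of (group, rank_stratum, had_action) tuples
    totals = {}
    for group, stratum, had_action in records:
        n, k = totals.get((stratum, group), (0, 0))
        totals[(stratum, group)] = (n + 1, k + had_action)
    return {key: k / n for key, (n, k) in totals.items()}

records = (
    [("A", "junior", 1)] * 3 + [("A", "junior", 0)] * 7 +
    [("B", "junior", 1)] * 2 + [("B", "junior", 0)] * 8 +
    [("A", "senior", 1)] * 1 + [("A", "senior", 0)] * 9 +
    [("B", "senior", 1)] * 1 + [("B", "senior", 0)] * 19
)
rates = stratified_rates(records)
# Within "junior": A at 0.30 vs. B at 0.20; within "senior": A 0.10 vs. B 0.05.
```

A regression generalizes this idea by adjusting for several attributes at once (rank, education, years of service, and so on) where full stratification would leave too few observations per cell.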
Black servicemembers were less likely to receive a more severe punishment in general and special courts-martial compared to White servicemembers in the Navy but there was no statistically significant difference for Black servicemembers in the Marine Corps, the Army, and the Air Force. Additionally, there were no statistically significant differences for Hispanic servicemembers in the Navy, the Marine Corps, the Army, or the Air Force; and males were more likely than females to receive a more severe punishment in the Marine Corps, the Army, and the Air Force. Finally, DOD and the military services have taken some steps to study racial and gender disparities in the military justice system over the last several decades, but they have not comprehensively studied the extent or causes of any disparities. Black, Hispanic, and Male Servicemembers Were More Likely to Be Subjects of Recorded Investigations and Tried in General and Special Courts-Martial Black, Hispanic, and Male Servicemembers Were More Likely to Be Subjects of Recorded Investigations in All of the Military Services Black, Hispanic, and male servicemembers were more likely than White or female servicemembers to be the subjects of recorded investigations in all of the military services, after controlling for other attributes, as shown in figure 5. Servicemembers in the Other race category were more likely than White servicemembers to be the subjects of recorded investigations in the Navy, but were less likely in the Army. Our analyses did not identify any statistically significant differences for servicemembers in the Other race category from the Air Force, the Marine Corps, or the Coast Guard. For the Army, the Navy, the Marine Corps, and the Air Force, Black, Hispanic, and male servicemembers were more likely than White and female servicemembers to be tried in general and special courts-martial after controlling for other attributes, as shown in figure 6 below. 
Servicemembers in the Other race category were more likely than White servicemembers to be tried in general and special courts-martial in the Navy, but we found no statistically significant differences in the likelihood of servicemembers in the Other race category in the Army, the Marine Corps, and the Air Force to be tried in general and special courts-martial compared to White servicemembers. We could not analyze Coast Guard cases due to the small number of general and special courts-martial adjudicated in the Coast Guard from fiscal years 2013 through 2017. More Statistically Significant Racial and Gender Disparities Found in General and Special Courts-Martial Cases without a Recorded Investigation than with a Recorded Investigation When separating general and special court-martial cases into those that either were or were not preceded by an investigation recorded in an MCIO database, we found fewer statistically significant racial and gender disparities in most of the military services in general and special courts-martial that were preceded by a recorded investigation. However, statistically significant racial and gender disparities were also present in general and special courts-martial that did not follow a recorded investigation in all services included in this analysis, which would include cases where the investigation was performed by the servicemember’s command. Specifically, as shown in figure 7 below, we found that Black, Hispanic, Other, and male servicemembers in the Army, Hispanic servicemembers in the Marine Corps, and males in the Air Force were more likely than White or female servicemembers to be tried in general and special courts-martial following a recorded investigation, after controlling for other attributes. We found no statistically significant differences in the likelihood of any other racial or gender groups to be tried in general and special courts-martial following a recorded investigation in any other services. 

Our analyses of general and special courts-martial with a recorded investigation generally found fewer statistically significant differences compared to the results of our analyses for all special and general courts-martial. We also found that Black and male servicemembers in all of the military services were more likely than White and female servicemembers to be tried in general and special courts-martial without a recorded investigation after controlling for other attributes, as shown in figure 8 below. Further, Hispanic servicemembers in the Army were more likely than White servicemembers to be tried in general and special courts-martial without a recorded investigation, but we found no statistically significant differences in the likelihood of Hispanic servicemembers to be tried in general and special courts-martial without a recorded investigation in the Marine Corps, the Navy, or the Air Force. We found no statistically significant differences in the likelihood of servicemembers in the Other race category to be tried in general and special courts-martial compared to White servicemembers in all of the military services. Our findings of racial and gender disparities in general and special courts-martial without a recorded investigation found statistically significant differences for Black and male servicemembers consistent with the differences we identified for general and special courts-martial overall, as shown in figure 6 above. 
Black and Male Servicemembers Were More Likely to Be Subject to Summary Courts-Martial and Nonjudicial Punishment in the Air Force and Marine Corps, and the Other Services Lack Data Black and Male Servicemembers Were More Likely to Be Tried in Summary Courts-Martial in the Air Force and Marine Corps, and the Army and Navy Lack Data Black and male servicemembers were more likely than White or female servicemembers to be tried in summary courts-martial in the Air Force and the Marine Corps after controlling for other attributes, as shown in figure 9 below. We did not identify any statistically significant differences in summary courts-martial rates for servicemembers who identified as Hispanic or in the Other race category in either the Air Force or the Marine Corps. We could not determine whether there were racial or gender disparities for summary courts-martial in the Army, the Navy, and the Coast Guard due to data limitations. We could not analyze Coast Guard cases due to the small number of summary courts-martial adjudicated in the Coast Guard from 2013 through 2017. We could not determine whether disparities existed among servicemembers tried in summary courts-martial in the Army and the Navy because the Army and the Navy did not collect complete summary courts-martial data in their investigations, military justice, or personnel databases. Specifically, as part of our data reliability checks, we identified the total number of summary courts-martial that the Army and the Navy reported in the Court of Appeals for the Armed Forces annual reports for fiscal years 2013 through 2017, and compared these totals to the number of cases we identified in their military justice databases. 
While our comparisons are not exact, due to differences in the dates we used to count the number of cases, we found that approximately 60 percent of the Army’s reported summary courts-martial cases and less than 50 percent of the Navy’s reported summary courts-martial cases were included in their military justice databases. Army and Navy officials cited several reasons why complete summary courts-martial information was not collected. First, they said that the services are not required to collect and maintain complete data on summary courts-martial because these cases result in non-criminal convictions under the UCMJ. Summary courts-martial are typically used for minor offenses, and the accused is not guaranteed the right to be represented by a military attorney. As a result, military attorneys may not be involved in summary courts-martial. Army and Navy officials said that if military attorneys are not involved in the case, there is not likely to be a record of the case in their service’s military justice database. In contrast, Air Force officials said that they provide a military attorney to represent the accused in summary courts-martial; as a result, Air Force officials said their attorneys create records for these cases in the Air Force’s military justice database. The Marine Corps does not maintain summary court-martial data in its military justice database but tracks summary courts-martial in its personnel database. Officials in the Navy and the Army told us that the lack of complete summary court-martial data in their military justice databases is also in part because these systems were not designed to serve as repositories for complete military justice data. Instead, the officials said that the military justice databases were primarily created to assist attorneys in generating trial documents, meeting timeframes, and other aspects of case management. 
Nevertheless, Army officials said they plan to start collecting more complete summary court-martial information. Specifically, Army officials said that the Army is encouraging their judge advocate general staff to create records for all summary courts-martial in the service’s military justice database. The absence of complete summary court-martial data in the military justice databases of the Army and the Navy limits these services’ visibility into any disparities that may exist among servicemembers involved in these types of military justice proceedings. On December 17, 2018, the General Counsel of the Department of Defense issued the uniform standards and criteria required by article 140a of the Military Justice Act of 2016. As part of these uniform standards, the services were directed to collect certain information about all cases in their military justice databases, which a DOD official said includes summary courts-martial cases. The military services are to implement the Secretary’s direction no later than December 23, 2020. Black and Male Servicemembers Were More Likely to Be Subject to Nonjudicial Punishments in the Air Force and the Marine Corps, and the Army, Navy, and Coast Guard Lack Data Black and male servicemembers were more likely than White or female servicemembers to be subject to nonjudicial punishments in the Air Force and the Marine Corps, after controlling for other attributes, as shown in figure 10 below. In the Air Force, we found that Hispanic servicemembers were more likely than White servicemembers to receive nonjudicial punishments, while we observed no statistically significant differences in nonjudicial punishment rates for Hispanic servicemembers in the Marine Corps. Servicemembers in the Other race category in the Marine Corps were less likely to receive nonjudicial punishments, but we observed no statistically significant differences in nonjudicial punishment rates for servicemembers in the Other race category in the Air Force. 
However, we could not determine whether there were racial or gender disparities among servicemembers subject to nonjudicial punishments in the Army, the Navy, and the Coast Guard because these services do not collect complete nonjudicial punishment data, such as data on the servicemember’s race, ethnicity, gender, offense, and punishment, in any of their databases. As part of our data reliability checks, we identified the total number of nonjudicial punishments that the Army, the Navy, and the Coast Guard reported in the Court of Appeals for the Armed Forces annual reports for fiscal years 2013 through 2017, and compared these totals to the number of cases we identified in their military justice and personnel databases. As shown in figure 11 below, we found that 65 percent of the Army’s reported nonjudicial punishments, 8 percent of the Navy’s reported nonjudicial punishments, and 82 percent of the Coast Guard’s reported nonjudicial punishments were recorded in their military justice databases. Officials from these services cited several reasons why they did not have complete information about all nonjudicial punishments. First, they said that the services are not required to track nonjudicial punishment cases because they are non-criminal punishments that are typically imposed for less serious offenses. Army and Navy officials noted that complete records of these punishments are not recorded at least in part because nonjudicial punishments are not meant to follow servicemembers throughout their career, but instead are intended to incentivize servicemembers to correct their behavior. Because nonjudicial punishments are not criminal punishments, the process afforded to servicemembers in nonjudicial punishment proceedings differs as well. For example, the servicemember is not guaranteed the right to representation by a military attorney. 
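The data-reliability comparison described above reduces to simple arithmetic: dividing the number of cases found in a service's database by the total the service reported in the Court of Appeals for the Armed Forces annual reports. The sketch below shows the computation with hypothetical counts; the report itself cites only the resulting percentages (for example, 65, 8, and 82 percent for nonjudicial punishments).

```python
# Data-reliability check: share of the cases reported in the annual
# reports that also appear in a service's own database.
# The case counts used here are hypothetical.
def coverage_pct(db_cases: int, reported_total: int) -> int:
    return round(100 * db_cases / reported_total)

# A service whose database holds 650 of 1,000 reported cases
# has 65 percent coverage.
example = coverage_pct(650, 1000)
```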
Army and Navy officials noted that their military justice databases contain records of nonjudicial punishments if there was legal involvement by the Judge Advocate General’s Corps in the case. Similarly, Coast Guard officials said that their military justice database contains records of nonjudicial punishment if a case originated as a criminal case involving a judge advocate, for example, if charges were preferred. According to Air Force and Marine Corps officials, the Air Force maintains complete nonjudicial punishment data in its military justice database, and the Marine Corps maintains complete nonjudicial punishment data in its personnel database. Standards for Internal Control in the Federal Government state that management should use quality information to achieve an entity’s objectives. Additionally, management should identify information requirements; ensure that the data it receives are timely and reliable; and process the data obtained into quality information. Officials from the Army, the Navy, and the Coast Guard expressed concerns regarding the feasibility of collecting and maintaining information about all nonjudicial punishments. Army officials stated that the collection and maintenance of all nonjudicial punishment data would be a substantial administrative burden due to the number of nonjudicial punishments awarded to servicemembers every week. Navy officials also stated that it would be a significant challenge to collect and maintain information about all nonjudicial punishments in either the Navy’s military justice database or its personnel database. They stated that there are few individuals who have access and can input data into the military justice database, and to expand the scope of criminal justice data collected in that manner, more people would have to be hired or assigned to assist with data entry. 
Similarly, Coast Guard officials said that tracking all nonjudicial punishment cases would be a difficult addition to their current data collection and maintenance workload. Coast Guard officials further stated that in addition to providing commanders with an essential means of providing good order and discipline, nonjudicial punishment also may promote positive change. Some Coast Guard officials stated concerns that recording all nonjudicial punishments in a database may inhibit the rehabilitative component of nonjudicial punishment. While the Army, Navy, and Coast Guard officials expressed these concerns, none of these military services had formally assessed the feasibility of collecting data on nonjudicial punishments. The absence of complete nonjudicial punishment data limits the military services’ visibility into the vast majority of legal punishments imposed on servicemembers under the UCMJ every year. Without such data, these three services will remain limited in their ability to assess or identify disparities among populations subject to this type of punishment. Few Statistically Significant Racial or Gender Disparities Exist in Likelihood of Conviction or Severity of Punishment, but the Coast Guard Does Not Collect and Maintain Complete Data Race Was Not a Statistically Significant Factor in Convictions in General and Special Courts-Martial, but Gender Was in the Marine Corps Among the servicemembers convicted in general and special courts-martial, we found no statistically significant differences regarding the likelihood of conviction among racial groups in the Army, the Navy, the Marine Corps, and the Air Force, while controlling for other attributes, as shown in figure 12 below. In the Marine Corps, male servicemembers were more likely to be convicted compared to female servicemembers. We found no statistically significant differences in the likelihood of convictions between males and females in the Army, the Air Force, and the Navy. 
In the military services that maintained complete punishment data—the Army, the Navy, the Marine Corps, and the Air Force—we found that minority servicemembers were either less likely to receive a more severe punishment in general and special courts-martial compared to White servicemembers, or there were no statistically significant differences in punishments among racial groups. Our findings regarding gender varied among the services. Male servicemembers were more likely to receive a more severe punishment compared to females in the Marine Corps, the Army, and the Air Force; for the Navy, we found there were no statistically significant differences in punishments between males and females. Navy and Marine Corps: Among servicemembers that were convicted in general and special courts-martial in the Marine Corps, we found no statistically significant differences regarding minority servicemembers being more likely or less likely to receive a dismissal or discharge punishment versus some other punishment, while controlling for other attributes, as shown in figure 13 below. In the Navy, among servicemembers that were convicted in general and special courts-martial, Black servicemembers were less likely than White servicemembers to receive a discharge or dismissal. We found no statistically significant differences regarding Hispanic servicemembers or those of Other races in the Navy. In the Marine Corps, among servicemembers that were convicted in general and special courts-martial, male servicemembers were more likely than female servicemembers to receive a discharge or dismissal. In the Navy, there were no statistically significant differences in punishments between males and females. Army and Air Force: We found no statistically significant differences regarding Black or Hispanic servicemembers being more likely or less likely to receive a more severe punishment in the Air Force or the Army, while controlling for other attributes, as shown in figure 14 below. 
We also found that servicemembers in the Other race group were less likely to receive a more severe punishment compared to White servicemembers in the Army, but punishment results for servicemembers in the Other race group in the Air Force were not statistically significant. Additionally, we found that male servicemembers were more likely to receive a more severe punishment compared to female servicemembers in the Army and the Air Force. We could not determine disparities in case outcomes—convictions and punishment severity—in the Coast Guard’s general and special courts-martial for fiscal years 2013 through 2017 because the Coast Guard did not collect and maintain complete conviction and punishment data in its military justice database. Specifically, 16 percent of all Coast Guard cases were missing conviction and punishment data. When broken down by court-martial type, 20 percent of general court-martial cases, 15 percent of special court-martial cases, and 4 percent of summary court-martial cases were missing conviction and punishment data. Coast Guard officials acknowledged that incomplete conviction and punishment data entry is a consistent problem. They said that data entry had improved recently. On December 17, 2018, the General Counsel of the Department of Defense issued the uniform standards and criteria required by article 140a of the Military Justice Act of 2016. As part of these uniform standards, the services were directed to collect information about the findings for each offense charged, and the sentence or punishment imposed. The military services are to implement the Secretary’s direction no later than December 23, 2020. DOD and the Military Services Have Conducted Some Assessments of Military Justice Disparities, but Have Not Studied the Causes of Disparities DOD and the military services have conducted some assessments of disparities in the military justice system. 
We previously reported in 1995 on DOD studies on discrimination and equal opportunity, and found DOD and the services conducted seven reviews of racial disparities in discipline rates between 1974 and 1993. Since our 1995 report through 2016, DOD and service assessments of military justice disparities have been limited. Officials in the Office of Diversity, Equity and Inclusion (ODEI) noted DOD has not conducted any department-wide assessments of racial or gender disparities in military justice during this period. The military services’ diversity offices also were not able to identify any service-specific reviews of disparities in military justice. However, the military services have some initiatives to examine and address disparities in military justice. For example, Air Force officials said that in May 2016, the Air Force conducted a servicewide data call to solicit information about cases involving a challenge to a member of a court-martial based on race or a motion for selective prosecution. The officials said that a thorough review revealed no evidence of selective prosecution in Air Force courts-martial. In addition, the Air Force has conducted analyses of its own military justice data. Specifically, the Air Force routinely analyzes military justice data using a rates-per-thousand analysis to identify whether certain demographic groups are tried by court-martial or subject to nonjudicial punishments at higher rates than others. These Air Force analyses found that Black and male servicemembers were more likely than White and female servicemembers to be subject to courts-martial and nonjudicial punishments from fiscal years 2013 through 2017, which is consistent with what we found. However, the other services do not routinely conduct such analyses. Moreover, DOD has conducted climate surveys to address servicemembers’ perceptions of bias. 
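The rates-per-thousand analysis described above reduces to a simple calculation: the number of military justice actions in a demographic group per 1,000 servicemembers in that group. A minimal sketch follows; the group names, population counts, and action counts are hypothetical illustrations, not actual service data.

```python
# Illustrative sketch of a rates-per-thousand analysis.
# Group names and counts are hypothetical, not actual service data.

def rate_per_thousand(action_count, population_count):
    """Military justice actions per 1,000 servicemembers in a group."""
    return 1000.0 * action_count / population_count

# Hypothetical population and courts-martial counts by demographic group.
population = {"Group A": 250_000, "Group B": 50_000}
courts_martial = {"Group A": 500, "Group B": 150}

rates = {g: rate_per_thousand(courts_martial[g], population[g]) for g in population}

# Comparing rates across groups flags whether one group is subject to
# courts-martial at a higher rate than another.
for group, rate in sorted(rates.items()):
    print(f"{group}: {rate:.1f} courts-martial per 1,000 servicemembers")
```

In this hypothetical example, Group B's rate (3.0 per thousand) exceeds Group A's (2.0 per thousand), which is the kind of difference such an analysis is designed to surface.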
In 2013, for example, DOD conducted service-wide equal opportunity surveys that queried servicemembers on whether they believed they received nonjudicial punishment or a court-martial they should not have, and whether they believed their race or ethnicity was a factor. The survey responses showed that 1.3 percent of servicemembers indicated experiencing a perceived undue punishment, a result that was unchanged from the 2009 survey. Minority members were more likely than White members to indicate experiencing perceived undue punishment, but there were no significant differences among the racial or ethnic minority groups who indicated experiencing undue punishment. ODEI officials told us that their office did not make any recommendations related to military justice as a result of these 2013 survey results because the findings were too small to warrant such steps. Moreover, ODEI officials said that while they have not completed their analysis of the 2017 survey data, the question about receiving nonjudicial punishment or court-martial had been removed from the 2017 survey. ODEI officials explained that the question was removed because the perception of unfair punishment was not the goal of the survey, although they said that the question could be reinstated for future surveys if the goals for the survey change. In June 2017, ODEI initiated a review of the military justice system following the publication of a report by a non-profit organization that found racial disparities in military justice actions. According to ODEI officials, their review assesses disparities in the military justice system using a similar analysis to that in the non-profit organization’s report, which analyzed rates of military justice actions per thousand servicemembers. ODEI officials told us they also observed racial and gender disparities among servicemembers involved in the military justice system in their own analysis of the service data. 
The officials said that the report on the results of their review will not directly address the issue of whether bias exists in the military justice process or the causes of any disparities, but will serve as a precursor to a future research study that looks more comprehensively into the issue of whether bias exists in the military justice system. ODEI officials said that their report should be issued in 2019. Standards for Internal Control in the Federal Government state that management uses quality information to make informed decisions and evaluate the entity’s performance in achieving key objectives and addressing risks. The standards further provide that management should evaluate issues identified through monitoring activities and determine appropriate corrective actions. Officials from DOD and the military services acknowledged that they do not know the cause of the racial and gender disparities that have been identified in the military justice system. This is because they have not conducted a comprehensive evaluation to identify potential causes of these disparities and make recommendations about any appropriate corrective actions to remediate the cause(s) of the disparities. By conducting a comprehensive analysis into the causes of disparities in the military justice system, DOD and the military services would be better positioned to identify actions to address disparities, and thus help ensure that the military justice system is fair and just, a key principle of the UCMJ. Conclusions The single overarching principle of the UCMJ is that a system of military law can foster a highly disciplined force if it is fair and just, and is recognized as such by both members of the armed forces and by the American public. DOD and the military services collect and maintain data on the race, ethnicity, and gender of all servicemembers. 
However, these data vary within and across the services, limiting the ability to collectively or comparatively assess military justice data to identify any disparities. DOD has recently taken steps to address this issue by directing the military services to, no later than December 23, 2020: collect uniform race and ethnicity data in their military justice databases, or aggregate any expanded ethnic or racial categories to the categories listed in the standards; collect either the social security number or DOD identification number in their military justice databases; and collect complete summary courts-martial information. It will be important for the military services to complete these actions to allow for efficient analysis and reporting of consistent military justice data. However, the newly issued standards apply only to the military justice databases and not to the investigations and personnel databases. The ability to query and report on the gender of servicemembers in its military justice database would provide the Coast Guard with more readily available data to identify or assess any gender disparities that may exist in the investigation and trial of military justice cases without merging data from multiple databases. Moreover, taking steps to develop the capability to present the race and ethnicity data from the military services’ personnel and investigations databases using the same categories included in the December 2018 standards for the military justice databases would enable DOD and the military services to more easily and efficiently assess the extent to which there are any racial or ethnic disparities throughout the military justice process. Further, DOD’s annual reports about the number and status of pending military justice cases do not include demographic information, such as breakdowns by race or gender, about servicemembers who experienced a military justice action. 
Reporting this information would provide servicemembers and the public with greater visibility into potential disparities and help build confidence that DOD is committed to a military justice system that is fair and just. Moreover, DOD does not have guidance that establishes criteria to determine when data indicating possible disparities among racial, ethnic, or gender groups in the investigations, trials, or outcomes of cases in the military justice system should be further reviewed, or describes the steps that should be taken to conduct such further review. By establishing such criteria, DOD and the services would be better positioned to monitor the military justice system to help ensure that it is fair and just, a key principle of the UCMJ. Our analysis of available data identified racial and gender disparities in all of the military services for servicemembers with recorded investigations, and for four of the military services for trials in special and general courts-martial, but these disparities generally were not present in the convictions or punishments of cases. These findings suggest disparities may be limited to particular stages of the military justice process for the period covered by our analysis. However, we were unable to determine whether there were disparities among servicemembers subject to nonjudicial punishments in the Army, the Navy, and the Coast Guard because these services do not collect complete nonjudicial punishment data, such as data on the servicemember’s race, ethnicity, gender, offense, and punishment for all nonjudicial punishments, in any of their databases. The absence of complete nonjudicial punishment data in the Army, the Navy, and the Coast Guard limits their visibility into the vast majority of legal punishments imposed on servicemembers under the UCMJ every year. 
Without such data, these three services will remain limited in their ability to assess or identify disparities among populations subject to this type of punishment. Finally, DOD recently conducted a study of racial and gender disparities in the military justice system, and expects to complete its report in 2019. However, this study will not assess the causes of the racial and gender disparities identified in the military justice system. Our findings of racial and gender disparities, taken alone, do not establish whether unlawful discrimination has occurred, as that is a legal determination that would involve other corroborating information along with supporting statistics. By conducting a comprehensive evaluation of the causes of these disparities, DOD and the military services would be better positioned to identify actions to address disparities, and thus help ensure that the military justice system is fair and just, a key principle of the UCMJ. Recommendations for Executive Action We are making a total of 11 recommendations, including 3 to the Secretary of Homeland Security, 3 to the Secretary of Defense, 2 to the Secretary of the Army, 2 to the Secretary of the Navy, and 1 to the Secretary of the Air Force. The Secretary of Homeland Security should ensure that the Commandant of the Coast Guard modifies the Coast Guard’s military justice database so that it can query and report on gender information. 
(Recommendation 1) The Secretary of the Army should develop the capability to present servicemembers’ race and ethnicity data in its investigations and personnel databases using the same categories of race and ethnicity established in the December 2018 uniform standards for the military justice databases, either by (1) modifying the Army’s investigations and personnel databases to collect and maintain the data in accordance with the uniform standards, (2) developing the capability to aggregate the data into the race and ethnicity categories included in the uniform standards, or (3) implementing another method identified by the Army. (Recommendation 2) The Secretary of the Air Force should develop the capability to present servicemembers’ race and ethnicity data in its investigations and personnel databases using the same categories of race and ethnicity established in the December 2018 uniform standards for the military justice databases, either by (1) modifying the Air Force’s investigations and personnel databases to collect and maintain the data in accordance with the uniform standards, (2) developing the capability to aggregate the data into the race and ethnicity categories included in the uniform standards, or (3) implementing another method identified by the Air Force. (Recommendation 3) The Secretary of the Navy should develop the capability to present servicemembers’ race and ethnicity data in its investigations and personnel databases using the same categories of race and ethnicity established in the December 2018 uniform standards for the military justice databases, either by (1) modifying the Navy’s investigations and personnel databases to collect and maintain the data in accordance with the uniform standards, (2) developing the capability to aggregate the data into the race and ethnicity categories included in the uniform standards, or (3) implementing another method identified by the Navy. 
(Recommendation 4) The Secretary of Homeland Security should ensure that the Commandant of the Coast Guard develops the capability to present servicemembers’ race and ethnicity data in its investigations and personnel databases using the same categories of race and ethnicity established in the December 2018 uniform standards for the military justice databases, either by (1) modifying the Coast Guard’s investigations and personnel databases to collect and maintain the data in accordance with the uniform standards, (2) developing the capability to aggregate the data into the race and ethnicity categories included in the uniform standards, or (3) implementing another method identified by the Coast Guard. (Recommendation 5) The Secretary of Defense should ensure that the Joint Service Committee on Military Justice, in its annual review of the UCMJ, considers an amendment to the UCMJ’s annual military justice reporting requirements to require the military services to include demographic information, including race, ethnicity, and gender, for all types of courts-martial. (Recommendation 6) The Secretary of Defense, in collaboration with the Secretaries of the military services and the Secretary of Homeland Security, should issue guidance that establishes criteria to specify when data indicating possible racial, ethnic, or gender disparities in the military justice process should be further reviewed, and that describes the steps that should be taken to conduct such a review. (Recommendation 7) The Secretary of the Army should consider the feasibility, to include the benefits and drawbacks, of collecting and maintaining complete information for all nonjudicial punishment cases in one of the Army’s databases, such as information on the servicemembers’ race, ethnicity, gender, offense, and punishment imposed. 
(Recommendation 8) The Secretary of the Navy should consider the feasibility, to include the benefits and drawbacks, of collecting and maintaining complete information for all nonjudicial punishment cases in one of the Navy’s databases, such as information on the servicemembers’ race, ethnicity, gender, offense, and punishment imposed. (Recommendation 9) The Secretary of Homeland Security should ensure that the Commandant of the Coast Guard considers the feasibility, to include the benefits and drawbacks, of collecting and maintaining complete information for all nonjudicial punishment cases in one of the Coast Guard’s databases, such as information on the servicemembers’ race, ethnicity, gender, offense, and punishment imposed. (Recommendation 10) The Secretary of Defense, in collaboration with the Secretaries of the military services and the Secretary of Homeland Security, should conduct an evaluation to identify the causes of any disparities in the military justice system, and take steps to address the causes of these disparities as appropriate. (Recommendation 11) Agency Comments and Our Evaluation We provided a draft of this report to DOD and the Department of Homeland Security for review and comment. Written comments from DOD and the Department of Homeland Security are reprinted in their entirety in appendixes X and XI, respectively. DOD and the Department of Homeland Security provided additional technical comments, which we incorporated in the report, as appropriate. In written comments, DOD concurred with six recommendations, and partially concurred with two recommendations that were directed to the Secretary of Defense. The Department of Homeland Security concurred with the three recommendations directed to the Secretary of Homeland Security. 
DOD concurred with our six recommendations to present servicemembers’ race and ethnicity data in each of the military services’ respective investigations and personnel databases using the same categories of race and ethnicity established for their military justice databases; consider an amendment to the UCMJ’s annual military justice reporting requirements to require the military services to include demographic information for all types of courts-martial; and consider the feasibility of collecting and maintaining complete information for all nonjudicial punishment cases. DOD partially concurred with two of our recommendations, agreeing with the content, but requesting that we modify the recommendations to direct them to more appropriate entities. Specifically, DOD concurred with our recommendations that guidance should be issued to establish criteria specifying when data indicating possible racial, ethnic, or gender disparities require further review and the steps that will be taken to conduct the review; and to conduct an evaluation to identify the causes of any racial or gender disparities in the military justice system and, if necessary, take remedial steps to address the causes of these disparities. For both recommendations, DOD suggested that the Secretary of Homeland Security be added, and that we remove the DOD Office for Diversity, Equity and Inclusion and the Commandant of the Coast Guard, as they fall under the Secretary of Defense and the Secretary of Homeland Security, respectively. We agree with DOD’s suggestions, and we have modified both recommendations accordingly. In an email correspondence, the Department of Homeland Security and the Coast Guard concurred with the updates. 
In its written comments, the Department of Homeland Security concurred with our three recommendations to modify the Coast Guard’s military justice database so that it can query and report on gender information, to present servicemembers’ race and ethnicity data in its investigations and personnel databases using the same categories of race and ethnicity established for the military justice database, and to consider the feasibility of collecting and maintaining complete information for all nonjudicial punishment cases. We are sending copies of this report to the appropriate congressional committees, the Acting Secretary of Defense, and the Acting Secretary of Homeland Security. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or members of your staff have any questions regarding this report, please contact me at (202) 512-3604 or farrellb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made significant contributions to this report are listed in appendix XII. Appendix I: Objectives, Scope, and Methodology The objectives of this report were to assess the extent to which (1) the military services collect and maintain information about the race, ethnicity, and gender of servicemembers investigated and disciplined for violations of the Uniform Code of Military Justice (UCMJ) that can be used to assess disparities; and (2) there are racial and gender disparities in investigations, disciplinary actions, and case outcomes in the military justice system, and whether the Department of Defense (DOD) and the military services have taken steps to study any identified disparities. 
Methods Used to Address Both Objectives To address both of our objectives, we analyzed data collection, data maintenance, and military justice disciplinary actions involving active-duty servicemembers in the Army, the Navy, the Marine Corps, the Air Force, and the Coast Guard. Although the Coast Guard is part of the Department of Homeland Security, the Coast Guard is a military service and a branch of the armed forces at all times. We analyzed military justice actions initiated and recorded in service investigations and military justice databases from fiscal years 2013 through 2017. We chose this time period because it provided the most recent history of available military justice data at the time of our review. We requested record-level data from each of the military services’ personnel, investigations, and military justice databases, which resulted in a total of 15 data requests. Table 6 below provides an overview of the databases included in our review, broken out by database type. We sent individual data requests that were tailored based on our conversations with service officials and our own analysis of the availability of data. In addition to requesting the race, ethnicity, and gender of servicemembers subject to military justice actions, we also requested other demographic and administrative attribute data—such as rank, age, years of service, duty station, and occupation—from the services’ personnel databases to include in our statistical models. We identified these attributes by reviewing relevant literature and interviewing agency officials. Personnel databases. We requested and received monthly snapshots with record-level data on all active-duty servicemembers in each of the military services from fiscal years 2013 through 2017. 
Specifically, we requested demographic and administrative data, including race, ethnicity, gender, rank, education, age or date of birth, years of service, occupation, location or duty station, deployed status, administrative or disciplinary actions and dates, character of service separation, and servicemembers’ unique identifiers (social security number and employee identification number). Investigations databases. We requested and received record-level data on all investigations recorded in a military service military criminal investigative organization (MCIO) database that were initiated from fiscal years 2013 through 2017, where the subject of the investigation was an active-duty servicemember. For each case, we requested certain attribute data on the investigation subject, including race, ethnicity, gender, rank, age or date of birth, service and component, offense(s) investigated, case initiation date, investigation source, investigating entity, investigation outcome and date, incident location, and the subject’s unique identifier, such as social security number or employee identification number. In some services, not all of these attributes were available or requested. For example, since the Air Force database only included investigations conducted by the Air Force Office of Special Investigations, we did not request information about the investigating entity. In addition, the Navy Criminal Investigative Service provided us with, and we analyzed, data about closed cases only, whereas the Army and the Air Force MCIOs provided us with, and we analyzed, data about all cases in their databases during the period of our review. Military justice databases. We requested and received record-level data on all cases where a servicemember was subject to disciplinary proceedings under the Uniform Code of Military Justice (UCMJ) from fiscal years 2013 through 2017. 
For each case where charges were preferred against a servicemember during this period, we requested demographic and administrative data on the servicemember as well as key information related to their case, including race, ethnicity, gender, rank, age or date of birth, component, case type and forum, offense(s) charged, case disposition and date, appeals status, case outcome or sentence, disciplinary action taken, date charges were first preferred, and the servicemember’s unique identifier, such as social security number or employee identification number. We received general and special courts-martial data from the military justice databases of all of the services. For the Army, in addition to data from its military justice database, Military Justice Online, we also received courts-martial data from a separate database, called the Army Court-Martial Information System (ACMIS), which is used by the service’s trial judiciary to track courts-martial. For summary courts-martial and nonjudicial punishments, the services varied in the extent to which, and the locations where, they collected and maintained complete data for these two military justice actions, as discussed earlier in this report. In the Air Force, summary courts-martial and nonjudicial punishment data are maintained in the service’s military justice database, the Automated Military Justice Analysis and Management System. The Marine Corps did not collect and maintain complete data about summary courts-martial or nonjudicial punishments in its military justice database; however, its personnel database included information about all summary courts-martial and nonjudicial punishments imposed on servicemembers during the period of our review. The Army and the Navy did not collect and maintain complete data about summary courts-martial or nonjudicial punishments in their military justice databases, or other databases. 
In these services, summary courts-martial and nonjudicial punishments were recorded in their military justice databases if these actions had involvement by the services’ legal offices. Further, summary courts-martial and nonjudicial punishments were recorded in the personnel databases used by these services only if these actions resulted in an administrative action against the accused, such as a forfeiture of pay or reduction in grade. The Coast Guard did not collect and maintain complete data about nonjudicial punishments in its military justice database or other databases; nonjudicial punishments were recorded in its military justice database if a legal office was involved in the action. Further, nonjudicial punishments were recorded in the Coast Guard’s personnel database if they resulted in an administrative action against the accused, such as a forfeiture of pay or reduction in grade. Methods Used to Evaluate Collection and Maintenance of Data To evaluate the extent to which the military services collect and maintain race, ethnicity, and gender data about servicemembers investigated and disciplined for violations of the UCMJ, we first reviewed service guidance, user manuals, and other documents related to the services’ investigations, military justice, and personnel databases. We reviewed these documents to determine the types of data officials are required to collect and maintain, and the internal procedures the services follow in inputting information about race, ethnicity, and gender into each type of database. For example, we determined whether collection of this information was mandatory, and how this information was entered into and recorded in each database. Specifically, we determined whether information about race, ethnicity, and gender was entered into each database manually, using a drop-down menu, or was auto-populated from another database. 
Further, we identified the number of possible response options that each database contained for each of these demographic fields. Second, we interviewed service officials who manage and use the military justice, investigations, and personnel databases to discuss which fields in each database track the race, ethnicity, and gender of servicemembers; how these data are input; and their insights regarding the reliability of these data. Specifically, we interviewed officials from the legal branches of the military services, including the Army Office of the Judge Advocate General, the Navy Judge Advocate General’s Corps, the Marine Corps’ Judge Advocate Division, the Air Force Judge Advocate General’s Corps, and the Coast Guard Office of the Judge Advocate General. In addition, we spoke with officials in the military criminal investigative organizations (MCIO), including the Army Criminal Investigation Command, the Naval Criminal Investigative Service, the Air Force Office of Special Investigations, and the Coast Guard Investigative Service. We also interviewed officials from the manpower and personnel offices of the services with responsibility for the services’ personnel databases, including the Army’s Human Resources Command and the Office of the Deputy Chief of Staff; the Navy’s Personnel Command; the Marine Corps Manpower and Reserve Affairs Manpower Information Systems Branch; the Air Force Personnel Center; and the Coast Guard’s Personnel Service Center. Finally, we analyzed the data we received from the investigations, military justice, and personnel databases to determine the completeness of the race, ethnicity, and gender information that was recorded in each of the databases. We assessed the military services’ systems and procedures for collecting data against DOD and service guidance and relevant federal internal control standards. 
Methods Used to Evaluate Racial, Ethnic, and Gender Disparities To evaluate the extent to which there are racial, ethnic, and gender disparities in investigations, disciplinary actions, and case outcomes, we analyzed data from the military services’ investigations, military justice, and personnel databases to determine summary statistics, and we then conducted bivariate and multivariate regression analyses. Investigations. We focused on alleged violations of the UCMJ that were recorded in databases used by service-specific MCIOs. Investigations are recorded in the MCIO databases when a servicemember is the subject of a criminal allegation made by another person; for purposes of this report, we say the servicemember had a “recorded investigation” to describe these cases. We analyzed investigation information from the databases used by each of the military services’ MCIOs. Specifically, we analyzed data from the Army’s Criminal Investigation Command, which included cases investigated by military police and Criminal Investigation Command; the Navy and Marine Corps’ Naval Criminal Investigative Service, which included cases investigated by the Naval Criminal Investigative Service and military police; the Air Force’s Office of Special Investigations, which included only Office of Special Investigations cases; and the Coast Guard Investigative Service, which included only Coast Guard Investigative Service cases. Our analysis of recorded investigations data did not include investigations conducted by a servicemember’s command, because those investigations are not recorded in the MCIO databases. Military Justice Discipline. We included in our definition of servicemembers disciplined for a violation of the UCMJ those servicemembers with cases that resulted in a trial in any type of court-martial (general, special, and summary), or servicemembers who were subject to a nonjudicial punishment from fiscal years 2013 through 2017. 
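As one illustration of the bivariate comparisons mentioned above, a disparity between two groups can be summarized with an odds ratio computed from group-level counts; the multivariate regression models additionally controlled for attributes such as rank, age, and years of service. All counts in this sketch are hypothetical, not actual service data.

```python
# Minimal sketch of a bivariate disparity measure (an odds ratio).
# Counts are hypothetical; this is illustrative, not the report's model.

def odds_ratio(a_events, a_total, b_events, b_total):
    """Odds of an event (e.g., trial in a court-martial) in group A
    relative to group B."""
    odds_a = a_events / (a_total - a_events)
    odds_b = b_events / (b_total - b_events)
    return odds_a / odds_b

# Hypothetical counts of servicemembers tried in courts-martial, by group.
ratio = odds_ratio(a_events=300, a_total=50_000, b_events=900, b_total=250_000)
print(f"odds ratio: {ratio:.2f}")  # a ratio above 1 suggests group A is more
                                   # likely to be tried than group B
```

A ratio near 1 indicates no disparity on this measure; the regression analyses test whether such differences remain statistically significant after controlling for other attributes.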
We analyzed data for trials in general and special courts-martial separately from trials in summary courts-martial because general and special courts-martial result in a criminal conviction if the servicemember is found guilty, while summary courts-martial are not a criminal forum and do not result in a criminal conviction. We analyzed general and special courts-martial cases together due to the small number of cases for some racial or gender groups. In addition, we also separated general and special courts-martial into cases that either were or were not preceded by an investigation recorded in an MCIO database. Our analysis of general and special courts-martial cases without a recorded investigation included those general and special courts-martial that were investigated by a servicemember’s command or other law enforcement entities. We used the preferral date, or the date when an accused servicemember was first charged with a violation, to count the number of courts-martial that occurred in a given fiscal year. However, each military service uses the date on which the court-martial judgment was given when reporting the number of each type of court-martial in its annual report to the Court of Appeals for the Armed Forces. As a result, the number of court-martial cases in a given year analyzed for our review differs from what was reported in the annual reports. In discussions with officials after we had completed our preliminary analyses, they recommended that we use the referral date instead of the preferral date, so that our total number of cases would be more consistent with the number of cases that they reported. However, changing the date for grouping cases would have required us to request new military justice data from each of the military services, and conduct additional work. Most importantly, using the preferral date did not affect our findings of racial and gender disparities. 
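Grouping cases into fiscal years by preferral date, as described above, amounts to a simple date rule: the federal fiscal year runs from October 1 through September 30 and is named for the calendar year in which it ends. A minimal sketch:

```python
from datetime import date

def fiscal_year(d: date) -> int:
    """Federal fiscal year containing date d (October 1 through
    September 30, named for the calendar year in which it ends)."""
    return d.year + 1 if d.month >= 10 else d.year

# A case with a preferral date of October 1, 2013 falls in fiscal year
# 2014; a judgment-date rule could place the same case in a later year,
# which is why our counts differ from the services' annual reports.
fiscal_year(date(2013, 10, 1))  # 2014
fiscal_year(date(2013, 9, 30))  # 2013
```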
In addition, our analyses only counted cases that were ultimately tried at general, special, or summary courts-martial, and excluded those cases where charges were dismissed, withdrawn, or subject to some alternate resolution. For nonjudicial punishments, we used the date that the punishment was imposed. To prepare the data for our analyses and ensure that we had consistent profiles for the race, ethnicity, and gender of the servicemembers, we merged records from the military services’ investigations, military justice, and personnel databases. We merged records using servicemembers’ unique identifiers, such as social security number or employee identification number, that were common among a particular service’s databases. In some instances—a small proportion of cases—we could not match personnel records with military justice records because the social security number or employee identification number in the military justice database did not match the information in the personnel database. In other instances, we could not match personnel records with military justice records because the military justice records did not contain a social security number or employee identification number to match with information found in their personnel record. We first tried to match these cases using the servicemembers’ name and date of birth; however, in some cases we were unable to match personnel records with investigations or military justice cases. As a result, we compiled lists of the cases we were unable to match and provided these lists to the services. Service officials manually looked up these cases and provided us with the missing social security numbers or employee identification numbers so that we could complete our analyses. These manual lookup efforts increased our match rates so that we had a data set that we determined was sufficiently complete to perform our analyses.
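The two-pass matching approach described above can be sketched as follows. This is a minimal illustration with made-up records; the field names (ssn, name, dob) and values are hypothetical, not the actual database schemas.

```python
import pandas as pd

# Hypothetical personnel and military justice extracts.
personnel = pd.DataFrame({
    "ssn":  ["111", "222", "333", "444"],
    "name": ["Smith", "Jones", "Lee", "Diaz"],
    "dob":  ["1990-01-01", "1988-05-10", "1992-07-04", "1985-03-15"],
    "race": ["White", "Black", "Other", "Hispanic"],
})
justice = pd.DataFrame({
    "ssn":  ["111", None, "999"],           # one missing ID, one non-matching ID
    "name": ["Smith", "Diaz", "Lee"],
    "dob":  ["1990-01-01", "1985-03-15", "1992-07-04"],
    "case": ["A", "B", "C"],
})

# Pass 1: match on the unique identifier.
merged = justice.merge(personnel[["ssn", "race"]], on="ssn", how="left")

# Pass 2: for still-unmatched cases, fall back to name and date of birth.
unmatched = merged["race"].isna()
fallback = justice.loc[unmatched].drop(columns="ssn").merge(
    personnel[["name", "dob", "race"]], on=["name", "dob"], how="left")
merged.loc[unmatched, "race"] = fallback["race"].to_numpy()
```

Cases still unmatched after both passes would correspond to the lists GAO sent to the services for manual lookup.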
For servicemembers who were the subjects of military justice actions, we used the attribute data that was available in the personnel database at the time an investigation or disciplinary action was initiated (the preferral date for courts-martial). For our total service populations, which included servicemembers who were not the subject of a military justice action, we used their attribute data from the “median” snapshot of the five fiscal years of personnel data we received. Based on discussions with service officials, we treated the personnel databases as the authoritative sources for servicemembers’ demographic and administrative data. When we identified a discrepancy in the race or gender value for a servicemember between the personnel and military justice databases, we used the value recorded in the personnel database, because service officials told us that the personnel databases were the official sources for demographic data such as race and gender and were more likely to contain reliable data for these fields than the investigations or military justice databases. Where an attribute value was missing in the personnel database, we used the military justice or investigative database as a secondary source for that information. In merging the records from the personnel, military justice, and investigations databases, we created a single data file for each service that contained attribute data for all active-duty servicemembers, as well as complete information on the investigation and discipline of servicemembers who were the subject of a military justice action from fiscal years 2013 through 2017. Because of this merging methodology, the total number of servicemembers we use in our report when discussing the total service populations for each service is greater than the total active-duty force end strength of that service in any given fiscal year.
This is because our total service populations represent the number of unique individuals who served on active duty from fiscal years 2013 through 2017. In addition, as part of our data preparation, we consolidated the various race and ethnicity values in the service personnel databases to the five groups for race and the two groups for ethnicity established by Office of Management and Budget (OMB) standards for maintaining, collecting, and presenting data on race and ethnicity for all federal reporting purposes. The five race groups in the standards are American Indian or Alaska Native; Asian; Black or African American; Native Hawaiian or Other Pacific Islander; and White. The two ethnic groups are Hispanic or Latino and Not Hispanic or Latino. First, we collapsed race and ethnicity data into a single combined field. Specifically, we grouped individuals of Hispanic ethnicity together, regardless of their racial identification, so that we could compare those of Hispanic ethnicity to other racial groups. We did this in part because of the ways in which some of the services record these data in their databases. For example, the Navy’s and the Marine Corps’ military justice databases do not have separate fields for race and ethnicity; instead, the values are tracked in a single field. Throughout the discussion for objective 2 of this report, we refer to the combined race and ethnicity values as race. We then consolidated races to the five racial groups in the OMB standards. When military service personnel databases included different or additional possible options for race and ethnicity than the groups established by the OMB standards, we consolidated the options in accordance with the definitions for each race and ethnicity listed in the OMB standards. Given the small number of cases in some racial groups, we collapsed certain racial groups into an “Other” group in order to report statistically reliable results. 
The “Other” group includes individuals who identified as Asian, Native Hawaiian/Other Pacific Islander, American Indian/Alaska Native, and multiple races. Summary statistics. We analyzed data from the military services’ investigations, military justice, and personnel databases to determine the extent to which racial and gender groups were the subjects of recorded investigations, tried in courts-martial, and subject to nonjudicial punishments (for the Air Force and the Marine Corps, the services for which we had complete nonjudicial punishment data) at higher rates or lower rates than each racial and gender group’s proportion of the overall service populations. Other than our analysis of recorded investigations, we did not analyze Coast Guard cases due to the small number of general and special courts-martial adjudicated in the Coast Guard from fiscal years 2013 through 2017. To conduct this analysis, we used data on all active-duty servicemembers to identify what proportion of the overall service population each racial group (White, Black, Hispanic, and Other) and gender group (male, female) made up from fiscal years 2013 through 2017. We then used data from the services’ military justice or personnel databases to calculate the representation of each racial and gender group as a percent of the population subjected to each type of military justice action. We also examined the rates at which certain racial and gender groups were charged with drug offenses (Article 112a) and sexual assault offenses (Article 120) compared to their proportions of the overall service populations. See Appendix III for information regarding recorded investigations and general and special courts-martial of drug and sexual assault offenses. We analyzed these two specific UCMJ offenses because officials from some services told us that an investigation into these offenses may frequently be mandatory, and thus could potentially mitigate the risk of bias.
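The consolidation to OMB-based groups and the representation comparison described above can be sketched as follows. The mapping, records, and outcome values are illustrative assumptions, not GAO's actual codes or data; the key steps are grouping anyone of Hispanic ethnicity as Hispanic regardless of race, collapsing smaller racial groups into "Other," and comparing each group's share of the population with its share of those subject to an action.

```python
import pandas as pd

# Hypothetical mapping from service-specific race codes to the report's groups.
OMB_GROUP = {
    "Caucasian": "White", "African American": "Black",
    "Asian": "Other", "Pacific Islander": "Other",
    "American Indian": "Other", "Multiple": "Other",
}

def to_group(race, ethnicity):
    """Hispanic ethnicity takes precedence over the recorded race value."""
    return "Hispanic" if ethnicity == "Hispanic or Latino" else OMB_GROUP[race]

members = pd.DataFrame({
    "race": ["Caucasian"] * 6 + ["African American"] * 2 + ["Asian", "Caucasian"],
    "ethnicity": ["Not Hispanic or Latino"] * 9 + ["Hispanic or Latino"],
    "disciplined": [0, 0, 0, 0, 1, 0, 1, 1, 0, 0],
})
members["group"] = [to_group(r, e) for r, e in zip(members["race"], members["ethnicity"])]

# Each group's share of the total population vs. its share of those disciplined.
pop_share = members["group"].value_counts(normalize=True)
disc_share = members.loc[members["disciplined"] == 1, "group"].value_counts(normalize=True)
```

A group whose share of the disciplined population exceeds its share of the overall population is overrepresented in the sense used in the report.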
To conduct this analysis, we used offense data from the services’ military justice databases to determine each racial and gender group’s representation in the population that was the subject of a military justice action for a drug, sexual assault, or other offense type. Bivariate and Multivariate Regression Analyses. We developed a logistic regression model using the data we received from the services’ investigations and military justice databases to determine the extent that certain attributes were associated with higher rates of investigation or discipline of servicemembers. We conducted bivariate logit analyses to estimate the association between select attribute factors (or independent variables) and the outcome variables (the dependent variable) in a binary format, except for the two offense outcome variables. Table 7 below lists all of the dependent and independent variables we used in our analyses. To conduct our statistical analyses, we created groups for each demographic and administrative attribute (independent variable) that we tested in our regression model. We created these groups based on input and guidance from service officials. While the modeling subgroups we created are largely consistent across services, some values are different for certain services. Table 8 summarizes the modeling groups we constructed for each service for each attribute included in our regression analyses. When analyzing the severity of punishments, we developed two groups for the Navy and the Marine Corps, and three groups for the Air Force and the Army, as shown in table 9 below. We did not create a third punishment group for confinement without dismissal or discharge for the Navy and the Marine Corps because of the small number of cases with confinement that did not also include some sort of discharge. Based on discussions with service officials, we determined that a sentence resulting in a dismissal or discharge was the most severe punishment outcome. 
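The bivariate logit analyses described above estimate the association between a single attribute and a binary outcome. A minimal sketch, using made-up counts (30 of 200 servicemembers in one hypothetical group subject to an action, versus 20 of 300 in the comparison group) and a hand-rolled Newton-Raphson fit rather than any particular statistical package:

```python
import numpy as np

def fit_logit(X, y, iters=25):
    """Fit a logistic regression by Newton-Raphson; X must include an intercept column."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))           # predicted probabilities
        H = X.T @ (X * (p * (1.0 - p))[:, None])      # Hessian of the log-likelihood
        beta = beta + np.linalg.solve(H, X.T @ (y - p))
    return beta

# Hypothetical counts: 30 of 200 members with attribute x=1 had the outcome,
# versus 20 of 300 members with x=0.
x = np.r_[np.ones(200), np.zeros(300)]
y = np.r_[np.ones(30), np.zeros(170), np.ones(20), np.zeros(280)]
X = np.column_stack([np.ones_like(x), x])

beta = fit_logit(X, y)
odds_ratio = np.exp(beta[1])
```

For a single binary predictor, the fitted odds ratio equals the cross-product ratio of the 2x2 table, (30 × 280) / (170 × 20) ≈ 2.47, which is the quantity the bivariate comparisons summarize.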
Typically, a logistic regression model is appropriate when the model outcome is a binary (yes/no) response. Because the punishment groups for the Army and the Air Force were not binary, they could not be analyzed using a multivariate logistic regression. Instead, we used an ordered logit model, also called an ordered logistic regression model, to analyze punishment severity in the Army and the Air Force. An ordered logistic regression is an extension of the logistic regression model that applies to dependent variables where there are more than two response categories. This model allowed us to examine the degree to which a racial or gender group was more likely or less likely than another group to receive a more severe punishment in general and special courts-martial, while controlling for other attributes, such as gender, education, rank, composition of panel, and offense type. To conduct this analysis, we reviewed outcome data from the services’ personnel, investigations, and military justice databases. Based on our bivariate analyses, we determined which variables were significantly associated with military justice actions and appeared to be statistically significant predictors of an individual’s likelihood of being subject to a military justice action. Appendix IX includes a summary of those indicators for each of the services. We also examined correlation matrices of the independent variables to determine where there were high correlations between pairs of variables. Where variables were highly correlated, we chose one variable over the others or created a hybrid variable combining the two. Specifically, we excluded age and years of service for most of the military services, due to high correlation with the rank variable. In our discussions, service officials indicated that rank would be the preferred variable to include in our analyses if only one variable among rank, age, and years of service were selected.
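The correlation screening described above can be sketched as follows. The simulated data and the 0.8 cutoff are illustrative assumptions (the report does not state a numeric threshold); the point is that age, years of service, and rank move together closely enough that only one of them should enter the model.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
years_of_service = rng.uniform(0, 30, n)
age = 18 + years_of_service + rng.normal(0, 2, n)        # nearly collinear with service time
rank = np.clip((years_of_service // 4).astype(int), 0, 8)  # rank rises with service time
gender = rng.integers(0, 2, n)                            # unrelated to the other three

# Columns: 0=age, 1=years_of_service, 2=rank, 3=gender
X = np.column_stack([age, years_of_service, rank, gender])
corr = np.corrcoef(X, rowvar=False)

# Flag predictor pairs whose absolute correlation exceeds the cutoff;
# only one variable from each flagged pair would be kept in the model.
cutoff = 0.8
flagged = [(i, j) for i in range(corr.shape[0]) for j in range(i + 1, corr.shape[0])
           if abs(corr[i, j]) > cutoff]
```

Here the age/years-of-service and years-of-service/rank pairs are flagged, while gender is not, mirroring the decision to retain rank and drop the other two.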
However, for the Air Force, based on discussion with Air Force officials, we did control for years of service among the lower enlisted ranks (E1-E4). In addition, we could not include education for the Army due to variability and overlapping values in the data. Further, we chose not to model attributes such as occupation and location due to the great variability in these data and the difficulty in creating groups and reaching agreement about those groups with service officials. Based on these results, we then estimated a series of multivariate logistic regression models. Multivariate logistic regression modeling is a statistical method that examines several variables simultaneously to estimate whether each of these variables is more or less likely to be associated with a certain outcome. A multivariate regression analysis analyzes the potential influence of each individual factor on the likelihood of a binary outcome (e.g., a specific military justice action) while simultaneously accounting for the potential influence of the other factors. This type of modeling allowed us to test the association between servicemember characteristics, such as race or gender, and the odds of a military justice action (shown as the outcome variables in table 7 above), while holding other servicemember attributes constant (such as gender, rank, and education, shown as the independent variables in table 7 above). We conducted a separate regression for each of the military justice actions listed as an outcome variable. We selected this type of model because it could account for the attributes simultaneously. For consistency, in our multivariate regression analyses we made all racial comparisons with White servicemembers as the reference category. Similarly, we made all gender comparisons with female servicemembers as the reference category.
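The reference-category setup described above can be sketched as follows. This is a minimal illustration on synthetic data, using dummy coding that omits White and Female so that every fitted coefficient compares a group against those references; the Newton-Raphson fit stands in for whatever statistical package was actually used.

```python
import numpy as np
import pandas as pd

def fit_logit(X, y, iters=30):
    """Fit a logistic regression by Newton-Raphson; X must include an intercept column."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        H = X.T @ (X * (p * (1.0 - p))[:, None])
        beta = beta + np.linalg.solve(H, X.T @ (y - p))
    return beta

rng = np.random.default_rng(7)
n = 5000
df = pd.DataFrame({
    "race": rng.choice(["White", "Black", "Hispanic", "Other"], n),
    "gender": rng.choice(["Female", "Male"], n),
})
df["action"] = rng.integers(0, 2, n)   # synthetic outcome, unrelated to the attributes

# Dummy-code the attributes, omitting White and Female as the reference categories.
X = pd.get_dummies(df[["race", "gender"]]).drop(columns=["race_White", "gender_Female"])
X = X.astype(float)
X.insert(0, "intercept", 1.0)

beta = fit_logit(X.to_numpy(), np.asarray(df["action"], dtype=float))
odds_ratios = dict(zip(X.columns[1:], np.exp(beta[1:])))  # each group vs. its reference
```

In the full analysis, additional controls such as rank and education would simply be appended as further columns of X.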
A logistic regression model provides an estimated odds ratio, where a value greater than one indicates a higher or positive association; in this case, between the race, ethnicity, or gender of a servicemember (the independent variables) and the likelihood of being the subject of a military justice action (the dependent, or outcome, variable). An estimated odds ratio less than one indicates lower odds or likelihood of being the subject of a military justice action when a factor—here, a specific demographic or administrative attribute—is present. The statistical significance of the logistic regression model results is determined by a p-value of less than 0.05. As a result, in our report we state that odds ratios that are statistically significant and greater than 1.00 or less than 1.00 indicate that individuals with that characteristic are more likely or less likely, respectively, to be the subject of a particular outcome or military justice action. In cases where the p-value was greater than 0.05, we report that we could not identify any statistically significant differences, which means that we could not conclude that there was an association between race or gender and the likelihood of a military justice action. We report the results from our regression models as odds ratios. We generally report multivariate results from testing associations between key attributes—including race, ethnicity, gender, rank, and education—and a servicemember’s likelihood of being investigated and disciplined for a UCMJ violation. In the body of this report, we focused on race and gender disparities among servicemembers investigated and disciplined for violations of the UCMJ, while holding other factors constant; however, our analyses of recorded investigations and general and special courts-martial for drug and sexual assault offenses are discussed in Appendix III.
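The interpretation rule described above, an odds ratio from the fitted coefficient plus a p-value against the 0.05 threshold, can be shown with a small worked example. The coefficient and standard error are hypothetical, and the p-value here is a simple Wald test using the normal approximation.

```python
import math

# Hypothetical fitted coefficient and standard error for one attribute.
beta, se = 0.47, 0.12

odds_ratio = math.exp(beta)        # ≈ 1.60: higher likelihood relative to the reference group
z = beta / se                      # Wald statistic
# Two-sided p-value from the standard normal CDF, via the error function.
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

significant = p_value < 0.05       # the report's significance threshold
```

Because the odds ratio exceeds 1.00 and the p-value is below 0.05, this hypothetical attribute would be reported as associated with a higher likelihood of the outcome; with p-value above 0.05, no statistically significant difference would be reported regardless of the odds ratio.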
In all of these analyses for the Air Force, we also controlled for years of service among the lower enlisted ranks (E1-E4). In the analyses we conducted for the Army, we could not control for education, but we were able to control for age. All regression models are subject to limitations. For our analyses, the limitations included the following:

- Results of our analyses are associational and do not imply a causal relationship. We did not identify the causes of any racial or gender disparities, and the results of our work alone should not be used to make conclusions about the military justice process. Our analyses of these data in finding the presence or absence of racial or gender disparities, taken alone, do not establish the presence or absence of unlawful discrimination, as that is a legal determination that would involve other corroborating information along with supporting statistics.

- We could not assess some attributes that potentially could be related to a servicemember’s likelihood of facing a military justice action in the data analyzed for this review. For example, a servicemember’s socioeconomic background or receipt of a waiver upon entering the service could potentially be related to the likelihood of being investigated, tried in a court-martial, or subject to a nonjudicial punishment. However, we were unable to test these associations because most services indicated they did not have information about socioeconomic status or waivers in the databases that we requested data from. Furthermore, while some other attributes may have been available—such as marital status of the subject or the number of dependent children—we did not include these attributes in our data requests because we prioritized analyzing other demographic factors based on our background research and conversations with service officials.
As outlined above, we incorporated input from service officials to the extent possible as we prepared our modeling groups for the demographic and administrative attributes we tested, such as rank, education, and years in service. However, this process was necessarily imprecise. Our modeling results may have been affected by our discretionary decisions to include certain values in the groups we created for these variables. Data reliability. We conducted data reliability assessments on the datasets we received from the databases in our review. We examined the documentation officials provided to us on each database and conducted electronic tests on the data we received to check for completeness and accuracy. We also sent data reliability questionnaires to database managers about how the data are collected and their appropriate uses, and met with database managers to discuss the reliability of the data in their databases. When we determined that particular fields were not sufficiently reliable, we excluded them from our analysis. For example, we did not use data in our analysis where a substantial number of values were missing. We also checked to see that the values for variables were internally consistent and that results were not affected unduly by outlier values that might suggest miscoded values. For the purposes of our analysis, we found the variables we ultimately reported on to be sufficiently reliable. Furthermore, due to the sensitivity of the information analyzed in this report, we did not include information in instances where the number of servicemembers subjected to a particular military justice action was fewer than 20, to protect privacy. Literature review. To assess the extent to which disparities in the military justice system and the civilian justice system had been previously assessed, we conducted a literature review.
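The electronic tests described above, checks for missing values, implausible outliers, and small-cell suppression, can be sketched as follows. The records, field names, and the plausible age range are illustrative assumptions; only the suppression threshold of 20 comes from the report.

```python
import pandas as pd

# Hypothetical extract; field names and values are illustrative.
cases = pd.DataFrame({
    "race":  ["White", "Black", None, "Hispanic", "White", "Black"],
    "age":   [24, 31, 29, 27, 210, 22],   # 210 is a miscoded outlier
    "group": ["A", "A", "A", "B", "B", "B"],
})

# Completeness: share of missing values per field.
missing_rate = cases.isna().mean()

# Internal consistency: values outside a plausible range suggest miscoding.
implausible_age = ~cases["age"].between(17, 80)

# Privacy: suppress any group whose case count falls below the threshold (20 in the report).
counts = cases["group"].value_counts()
suppressed = counts[counts < 20].index.tolist()
```

Fields with a substantial missing-value rate would be excluded from analysis, and groups on the suppressed list would not be reported.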
To identify relevant publications about disparities in the military justice system and the civilian justice system, we performed a literature search of a number of bibliographic databases, including ProQuest Academic, ProQuest Dialog, Scopus, EBSCO, and HeinOnline. We also searched two think tank search engines: Policy File and the Think Tank Search (from the Harvard Kennedy School). Our searches identified the following types of publications: scholarly/peer reviewed material, dissertations, and association/think tank/nonprofit publications. To identify publications by DOD and the services related to the military justice system, we reviewed prior GAO reports and asked officials at the DOD Office of Diversity, Equity and Inclusion, and in the services’ respective diversity and inclusion offices to identify relevant publications. We concluded our searches in October 2018. In addition, we asked the service Judge Advocate General offices for publications relevant to disparities in military justice, and we identified publications in our own background information search. We reviewed those publications that assessed racial, ethnic, or gender disparities among servicemembers in the military justice system. While the civilian and military justice systems differ from each other, we selected a few nationwide studies examining disparities in the civilian justice system to summarize in the background section of our report, in order to enhance our understanding of the complexities of the issues, including how others have attempted to measure disparities. We did not assess the methodologies used in any of these studies or the reliability of the data cited in the studies; the studies related to the civilian justice system are discussed in our report to provide broader context for the discussion about racial and gender disparities in the military justice system. We conducted this performance audit from November 2017 to May 2019 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform an audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Summary Statistics and Bivariate Results for Regression Analyses

Rate and Likelihood of Recorded Investigations by Race and Gender

As shown in figure 15 below, our analysis of data contained in the military services’ military criminal investigations databases found that Black servicemembers were subjects of recorded investigations at a higher rate compared to their proportion of the overall service population in all of the military services. Hispanic servicemembers were the subjects of recorded investigations at a higher rate compared to their proportion of the overall service population in the Navy and the Air Force, at a lower rate in the Marine Corps, and at the same rate in the Army. Additionally, we found that males were the subjects of recorded investigations at higher rates than their share of the general service population in all of the military services. In addition, figure 15 above also shows the results of our bivariate analyses, which calculated the degree to which one racial or gender group was more likely or less likely than another racial or gender group to be the subject of recorded investigations. Our bivariate analyses found that Black and male servicemembers in all of the military services were statistically significantly more likely to be the subjects of recorded investigations for alleged UCMJ violations than servicemembers of all other races or females.
Hispanic servicemembers were statistically significantly more likely in the Navy, the Air Force, and the Coast Guard, and were statistically significantly less likely in the Army to be the subjects of recorded investigations than servicemembers of all other races. Servicemembers in the Other race category were statistically significantly less likely than servicemembers of all other races to be the subjects of recorded investigations in the Army and the Marine Corps. Our bivariate analyses did not show any statistically significant differences for servicemembers in the Other race category in the Navy, the Air Force, or the Coast Guard, or Hispanic servicemembers in the Marine Corps.

Rate and Likelihood of Trial in General and Special Courts-Martial

As shown in figure 16 below, Black, Hispanic, and male servicemembers in all of the military services included in this analysis were represented at a higher rate than their proportions of the overall service population. White and female servicemembers in all of the military services were represented at a lower rate than their proportions of the overall service population. Servicemembers in the Other race category were represented at a higher rate in the Navy, at a lower rate in the Army and the Air Force, and at the same rate in the Marine Corps compared to their proportion of the overall service population. We could not analyze Coast Guard cases due to the small number of general and special courts-martial adjudicated in the Coast Guard from fiscal years 2013 through 2017. The bivariate regression analysis results in figure 16 above calculate the degree to which one racial or gender group was more likely or less likely than servicemembers of all other races and genders to be tried in general and special courts-martial. We found that Black and male servicemembers in all of the military services were more likely to be tried in general and special courts-martial than servicemembers of all other races or females.
Our bivariate analyses found that Hispanic servicemembers in the Army were more likely to be tried in general and special courts-martial than servicemembers of all other races. We found no statistically significant differences in the likelihood of Hispanic servicemembers to be tried in general and special courts-martial compared to servicemembers of all other races in the Navy, the Marine Corps, and the Air Force. White and female servicemembers in all of the military services were less likely to be tried in general and special courts-martial than servicemembers of other races or males. Furthermore, servicemembers in the Other race category were more likely in the Navy and less likely in the Army to be tried in general and special courts-martial than servicemembers of other races. We found no statistically significant differences in the likelihood of servicemembers in the Other race category to be tried in general and special courts-martial in the Marine Corps and the Air Force compared to servicemembers of other races.

Rate and Likelihood of Trial in General and Special Courts-Martial Following a Recorded Investigation

As shown in figure 17 below, for trials in general and special courts-martial that followed a recorded investigation, Black servicemembers were represented at a lower rate in the Army, the Navy, and the Marine Corps, and at the same rate in the Air Force compared to their proportions of the service population that had recorded investigations. Hispanic servicemembers in trials of general and special courts-martial following a recorded investigation were represented at a higher rate than their proportion of the overall service population that had recorded investigations in the Army and the Marine Corps, and at the same rate in the Navy and the Air Force.
White servicemembers were represented at a lower rate in the Army, the Navy, and the Marine Corps, and at the same rate in the Air Force compared to their proportions of the service population with recorded investigations. Servicemembers in the Other race category were represented at a higher rate in the Army, the Navy, and the Marine Corps, and at the same rate in the Air Force compared to their proportions of the overall service population with recorded investigations. We could not analyze Coast Guard cases due to the small number of general and special courts-martial adjudicated in the Coast Guard from fiscal years 2013 through 2017. Male servicemembers with trials in general and special courts-martial that followed a recorded investigation were represented at a higher rate in all of the military services compared to their proportions of the service population that had recorded investigations. Females were represented at a lower rate in all of the military services compared to their proportions of the service population that had recorded investigations. As shown in figure 17 above, our bivariate regression analyses showed that, in the Army, White servicemembers were statistically significantly less likely to be tried in general and special courts-martial following a recorded investigation than servicemembers of all other races, whereas Hispanic servicemembers were statistically significantly more likely to be tried following a recorded investigation. In the Navy, servicemembers in the Other race category were statistically significantly more likely to be tried in general and special courts-martial following a recorded investigation than servicemembers of all other races. Males were more likely, and females were less likely, to be tried in general and special courts-martial following a recorded investigation in the Army and the Air Force. The remaining odds ratios shown in figure 17 above were not statistically significant. 
Rate and Likelihood of Trial in General and Special Courts-Martial without Recorded Investigation

We identified racial and gender disparities in the rate and likelihood of trial in general and special courts-martial in cases without a recorded investigation in all of the military services. Specifically, as shown in figure 18 below, for trials in general and special courts-martial without a recorded investigation, Black and male servicemembers in all of the military services were represented at a higher rate than their proportion of the service population that did not have a recorded investigation. Hispanic servicemembers were represented at a higher rate in the Army and the Marine Corps, and at the same rate in the Navy and the Air Force compared to their proportions of the service population that did not have a recorded investigation. Servicemembers in the Other race category were represented at a lower rate in the Marine Corps and the Air Force, and at the same rate in the Army and the Navy compared to their proportion of the overall service population that did not have a recorded investigation. White and female servicemembers in all of the military services were represented at a lower rate than their proportions of the overall service population without a recorded investigation. We could not analyze Coast Guard cases due to the small number of general and special courts-martial adjudicated in the Coast Guard from fiscal years 2013 through 2017. The bivariate regression analysis results in figure 18 above calculate the degree to which one racial or gender group was more likely or less likely than servicemembers of all other races and genders to be tried in general and special courts-martial without a recorded investigation.
We found that Black and male servicemembers in all of the military services were more likely to be tried at special and general courts-martial that were not preceded by a recorded investigation than servicemembers of all other races or females. White and female servicemembers in all of the military services were less likely to be tried at special and general courts-martial that were not preceded by a recorded investigation than servicemembers of all other races and males. We found no statistically significant differences in the likelihood of Hispanic servicemembers or servicemembers in the Other race category in any of the military services being tried in general and special courts-martial without a recorded investigation compared to servicemembers of all other races.

Rate and Likelihood of Trial in Summary Courts-Martial in the Air Force and the Marine Corps

We identified racial and gender disparities in the rate and likelihood of trial in summary courts-martial in the Air Force and the Marine Corps. Specifically, as shown in figure 19 below, Black and male servicemembers were tried in summary courts-martial for UCMJ violations at higher rates than their share of the overall service population in the Air Force and the Marine Corps. White and Hispanic servicemembers were tried in summary courts-martial at lower rates than their share of the overall service population in both services. Servicemembers that were included in the Other race category were tried at higher rates in the Air Force, and at lower rates in the Marine Corps. We could not determine whether there were any racial or gender disparities for summary courts-martial in the Army and the Navy because these services did not collect complete summary court-martial data—information about all summary court-martial cases, to include demographic information about the subject—in their investigative, military justice, or personnel databases, as discussed above in the report.
We could not analyze Coast Guard cases due to the small number of summary courts-martial adjudicated in the Coast Guard from fiscal years 2013 through 2017. The bivariate regression analysis results in figure 19 above calculate the degree to which one racial or gender group was more likely or less likely than servicemembers of all other races and genders to be tried in summary courts-martial. We found that Black servicemembers in the Marine Corps and the Air Force were more likely to be tried in summary courts-martial than servicemembers of all other races. We also found that male servicemembers were more likely than their female counterparts to be tried in summary courts-martial in the Marine Corps and the Air Force. We observed no statistically significant differences in summary court-martial rates for servicemembers in the Other race category in either the Marine Corps or the Air Force, or for Hispanic servicemembers in the Marine Corps. Rate and Likelihood of Nonjudicial Punishments in the Air Force and the Marine Corps As shown in figure 20 below, we found that Black and male servicemembers were subject to nonjudicial punishment for UCMJ violations at a higher rate than their share of the overall service population in the Marine Corps and the Air Force. White servicemembers were subject to nonjudicial punishments at lower rates than their share of the overall service population in both services, and Hispanic servicemembers were subject to nonjudicial punishments in a proportion equal to their share of the general service population in both services. Servicemembers that were included in the Other race category were subject to nonjudicial punishment at lower rates than their share of the overall service population in the Marine Corps and the Air Force. We could not analyze nonjudicial punishments in the Army, the Navy, and the Coast Guard because these services do not collect complete nonjudicial punishment information. 
The bivariate regression analyses in figure 20 above calculate the degree to which one racial or gender group was more likely or less likely than another racial or gender group to be subject to nonjudicial punishment. We found that Black and male servicemembers were more likely than servicemembers of all other races or female servicemembers to receive nonjudicial punishments in the Marine Corps and the Air Force. We also found that Hispanic servicemembers in the Air Force were less likely to be subject to nonjudicial punishment, but we observed no statistically significant difference for Hispanic servicemembers in the Marine Corps. Servicemembers in the Other race category were less likely to be subject to nonjudicial punishment than servicemembers of all other races in the Marine Corps and the Air Force. Rate and Likelihood of Conviction in General and Special Courts-Martial As shown in figure 21 below, we found that Black servicemembers were convicted in general and special courts-martial at a lower rate in the Army and the Air Force, and at an equal rate in the Navy and the Marine Corps compared to their proportion of the overall general and special courts-martial population. In the Army, the Navy, and the Marine Corps, Hispanic servicemembers were convicted in general and special courts-martial at an equal rate compared to their proportion of the overall general and special courts-martial population. Compared to their proportion of the overall general and special courts-martial population, Hispanic servicemembers were convicted at a lower rate in the Air Force. We could not analyze Coast Guard cases due to the small number of general and special courts-martial adjudicated in the Coast Guard from fiscal years 2013 through 2017. 
As shown in figure 21 above, bivariate regression analyses found that, in the Army, White servicemembers were statistically significantly more likely to be convicted, whereas Black servicemembers were statistically significantly less likely to be convicted in general and special courts-martial compared to all other servicemembers. White servicemembers in the Air Force were also statistically significantly more likely to be convicted in general and special courts-martial compared to all other servicemembers. In the Marine Corps, we found that males were more likely to be convicted than females, whereas in the Air Force, males were less likely to be convicted than females. The remaining odds ratios shown in figure 21 above were not statistically significant. Rate and Likelihood of More Severe Punishment As shown in figures 22 and 23 below, we found that Black servicemembers received a more severe punishment at a lower rate compared to their share of the convicted service population in the Army, the Navy, and the Air Force. We also found that Hispanic servicemembers received a more severe punishment at a lower rate compared to their share of the convicted service population in the Air Force, but at a higher rate in the Marine Corps. We found that male servicemembers in the Marine Corps and the Air Force received a more severe punishment at a higher rate, and at the same rate in the Army and the Navy, compared to their share of the convicted service population. Females received a more severe punishment at a lower rate in the Air Force and the Marine Corps, and at the same rate in the Army and the Navy, compared to their share of the convicted service population. We could not analyze Coast Guard cases due to the small number of general and special courts-martial adjudicated in the Coast Guard from fiscal years 2013 through 2017. 
The bivariate regression analyses in figures 22 and 23 above calculated the degree to which one racial or gender group was more likely or less likely than another racial or gender group to be dismissed or discharged after a conviction in general and special courts-martial. In the Navy, we found that Black servicemembers were statistically significantly less likely to be dismissed or discharged after conviction in general and special courts-martial compared to all other servicemembers. We found no statistically significant differences regarding minority servicemembers being more likely or less likely to be dismissed or discharged after conviction in general and special courts-martial in the Marine Corps, or to receive a more severe punishment in the Army or the Air Force. We found that males in the Marine Corps and the Air Force were more likely to be dismissed or discharged or receive a more severe punishment after conviction than females, but we did not find any statistically significant differences regarding male servicemembers in the Army or the Navy. Appendix III: Analysis of Drug Offenses, Sexual Assault Offenses, and All Other Offenses This appendix contains several figures that show the underlying data related to drug and sexual assault offenses from fiscal years 2013 through 2017 for the Army, the Navy, the Marine Corps, and the Air Force. Across most military services, Black, Hispanic, and male servicemembers were the subjects of recorded investigations and tried in general and special courts-martial at higher rates than their shares of the overall service population for drug offenses, sexual assault offenses, and all other offenses. We found that the likelihood of conviction varied among the services for these two offenses. 
We analyzed these two specific Uniform Code of Military Justice (UCMJ) offenses separately from all other offenses because service officials told us that an investigation into these offenses may frequently be mandatory, and thus could potentially mitigate the risk of bias. We analyzed data for these offenses for recorded investigations, trials in general and special courts-martial, and convictions from fiscal years 2013 through 2017 to assess the extent to which racial and gender disparities may exist. Our analyses of the services’ investigation, military justice, and personnel databases, as reflected in these figures, taken alone, do not establish the presence or absence of unlawful discrimination. Recorded Investigations of Drug and Sexual Assault Offenses We identified racial and gender differences in recorded investigation rates for drug offenses, sexual assault offenses, and all other offenses compared with the total service populations. Our analysis focused on alleged UCMJ violations for these offenses that were recorded in the Military Criminal Investigative Organization (MCIO) investigations databases. Other investigations conducted within the military, such as command investigations, were not considered in this analysis. For example, as shown in figure 24 below, Black servicemembers were the subjects of recorded investigations for drug offenses, sexual assault offenses, and all other offenses at a higher rate than their share of the overall service population across all military services. Hispanic servicemembers were the subjects of recorded investigations for drug offenses, sexual assault offenses, and all other offenses at a higher rate than their share of the overall service population in the Air Force, but were the subjects of recorded investigations for drug offenses at a lower rate than their share of the overall service population in both the Army and the Marine Corps. 
Male servicemembers were the subjects of recorded investigations for drug offenses and sexual assault offenses at a higher rate than their share of the overall service population across all of the military services. General and Special Courts-Martial Trials for Drug and Sexual Assault Offenses We found that White servicemembers were tried for drug offenses, sexual assault offenses, and all other offenses in general and special courts-martial at lower rates than their share of the overall service population across all of the military services. Black servicemembers were tried for drug offenses, sexual assault offenses, and all other offenses in general and special courts-martial at a higher rate than their share of the overall service population in all of the military services. Hispanic servicemembers were tried for drug offenses in general and special courts-martial at a lower rate in the Navy and the Marine Corps, and at a higher rate in the Air Force, compared to their share of the overall service population. Hispanic servicemembers were tried for sexual assault offenses at a higher rate than their proportion of the overall service population in all of the military services. Female servicemembers were tried for drug offenses, sexual assault offenses, and all other offenses in general and special courts-martial at lower rates than their share of the general service population in the Army, the Navy, and the Air Force, and were tried for sexual assault offenses and all other offenses at lower rates than their share of the overall service population in the Marine Corps. Figure 25 below shows the gender and racial composition of general and special court-martial trials for drug offenses, sexual assault offenses, and all other offenses. We could not analyze Coast Guard cases due to the small number of general and special courts-martial adjudicated in the Coast Guard from fiscal years 2013 through 2017. 
Likelihood of Conviction for Drug and Sexual Assault Offenses We conducted multivariate regression analyses to calculate the degree to which servicemembers charged with drug offenses and sexual assault offenses were more likely or less likely than a composite variable composed of all other offenses to be convicted in general and special courts-martial, while controlling for other attributes, such as race, gender, education, and rank. As shown in figure 26 below, we did not identify any statistically significant difference in conviction rates for drug offenses compared to all other offenses in the Army, the Navy, the Marine Corps, and the Air Force. Sexual assault offenses were less likely to result in a conviction in the Army, the Navy, and the Air Force, and there was no statistically significant difference for the Marine Corps. We could not analyze Coast Guard cases due to the small number of general and special courts-martial adjudicated in the Coast Guard from fiscal years 2013 through 2017. Appendix IV: Army Data and Analyses This appendix contains several tables that show the underlying data and analyses used throughout this report relating to Army personnel and military justice disciplinary actions from fiscal years 2013 through 2017. We did not include populations that contained fewer than 20 servicemembers in the total populations presented in these tables to ensure the protection of sensitive information. As a result, the total populations presented in this appendix may vary among the different tables and may vary from the total populations presented in the body of the report. Our analyses of the Army's investigations, military justice, and personnel databases, as reflected in these tables, taken alone, do not establish the presence or absence of unlawful discrimination. Multivariate Regression Analyses of Army Data The multivariate results listed below in table 17 show the odds ratios for the multivariate regression analyses of the Army data. 
We used logistic regression to assess the relationship between the independent variables, such as race, education, rank, or gender, with the probability of being subject to a military justice action. Logistic regression allows for the coefficients to be converted into odds ratios. Odds ratios that are statistically significant and greater than 1.00 indicate that individuals with that characteristic are more likely to be subject to a military justice action. For example, an odds ratio of 1.55 for Black servicemembers would mean that they are 1.55 times more likely to be subject to a military justice action compared to White servicemembers. Odds ratios that are statistically significant and lower than 1.00 indicate that individuals with that characteristic are less likely to be subject to a military justice action. We excluded years of service from the Army analyses due to high correlation with the rank variable. Appendix V: Navy Data and Analyses This appendix contains several tables that show the underlying data and analyses used throughout this report relating to Navy personnel and military justice disciplinary actions from fiscal years 2013 through 2017. We did not include populations that contained fewer than 20 servicemembers in the populations presented in these tables to ensure the protection of sensitive information. As a result, the populations presented in this appendix may vary among the different tables and may vary from the populations presented in other places in this report. Our analyses of the Navy’s investigations, military justice, and personnel databases, as reflected in these tables, taken alone, do not establish the presence or absence of unlawful discrimination. Multivariate Regression Analyses of Navy Data The multivariate results listed below in table 26 show the odds ratios for the multivariate regression analyses of Navy data. 
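The logistic regression and odds-ratio interpretation described above can be sketched with a minimal pure-Python fit. This is an illustrative implementation, not the services' actual analysis: the data are invented, "feature 1" stands in for a binary group indicator, and "feature 2" for a control variable such as a rank category. Exponentiating a fitted coefficient yields that characteristic's odds ratio.

```python
import math

def fit_logistic(X, y, lr=0.5, epochs=4000):
    """Fit a logistic regression by batch gradient descent.

    Returns weights w, where w[0] is the intercept and w[j + 1] is the
    coefficient for feature j; exp(w[j + 1]) is that feature's odds ratio.
    """
    n, k = len(X), len(X[0])
    w = [0.0] * (k + 1)
    for _ in range(epochs):
        grad = [0.0] * (k + 1)
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))  # predicted probability
            grad[0] += p - yi
            for j, xj in enumerate(xi):
                grad[j + 1] += (p - yi) * xj
        w = [wj - lr * g / n for wj, g in zip(w, grad)]
    return w

# Hypothetical data for illustration only: feature 1 is a group indicator,
# feature 2 a control variable; outcomes are invented.
X = [[1, 0], [1, 1], [0, 0], [0, 1], [1, 0], [0, 1], [1, 1], [0, 0]]
y = [1, 1, 0, 1, 1, 0, 0, 0]
w = fit_logistic(X, y)
print(math.exp(w[1]) > 1.0)  # the group's odds ratio exceeds 1.00
```

Because the group indicator and the control enter the model together, the group's odds ratio here is adjusted for the control, which is the sense in which the report's multivariate results "control for" attributes such as rank, education, and gender.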
We used logistic regression to assess the relationship between the independent variables, such as race, education, rank, or gender, with the probability of being subject to a military justice action. Logistic regression allows for the coefficients to be converted into odds ratios. Odds ratios that are statistically significant and greater than 1.00 indicate that individuals with that characteristic are more likely to be subject to a military justice action. For example, an odds ratio of 1.55 for Black servicemembers would mean that they are 1.55 times more likely to be subject to a military justice action compared to White servicemembers. Odds ratios that are statistically significant and lower than 1.00 indicate that individuals with that characteristic are less likely to be subject to a military justice action. We excluded age and years of service from the Navy multivariate regression analyses due to high correlation with the rank variable. Appendix VI: Marine Corps Data and Analyses This appendix contains several tables that show the underlying data and analyses used throughout this report relating to Marine Corps personnel and military justice disciplinary actions from fiscal years 2013 through 2017. We did not include populations that contained fewer than 20 servicemembers in the populations presented in these tables to ensure the protection of sensitive information. As a result, the populations presented in this appendix may vary among the different tables and may vary from the populations presented in other places in this report. Our analyses of the Marine Corps investigations, military justice, and personnel databases, as reflected in these tables, taken alone, do not establish the presence or absence of unlawful discrimination. Multivariate Regression Analyses of Marine Corps Data The multivariate results listed below in table 35 show the odds ratios for the multivariate regression analyses of Marine Corps data. 
We used logistic regression to assess the relationship between the independent variables, such as race, education, rank, or gender, with the probability of being subject to a military justice action. Logistic regression allows for the coefficients to be converted into odds ratios. Odds ratios that are statistically significant and greater than 1.00 indicate that individuals with that characteristic are more likely to be subject to a military justice action. For example, an odds ratio of 1.55 for Black servicemembers would mean that they are 1.55 times more likely to be subject to a military justice action compared to White servicemembers. Odds ratios that are statistically significant and lower than 1.00 indicate that individuals with that characteristic are less likely to be subject to a military justice action. We excluded age and years of service from the Marine Corps multivariate regression analyses due to high correlation with the rank variable. Appendix VII: Air Force Data and Analyses This appendix contains several tables that show the underlying data and analyses used throughout this report relating to Air Force personnel and military justice disciplinary actions from fiscal years 2013 through 2017. We did not include populations that contained fewer than 20 servicemembers in the populations presented in these tables to ensure the protection of sensitive information. As a result, the populations presented in this appendix may vary among the different tables and may vary from the populations presented in other places in this report. Our analyses of the Air Force’s investigations, military justice, and personnel databases, as reflected in these tables, taken alone, do not establish the presence or absence of unlawful discrimination. Multivariate Regression Analyses of Air Force Data The multivariate results listed below in table 45 show the odds ratios for the multivariate regression analyses of Air Force data. 
We used logistic regression to assess the relationship between the independent variables, such as race, education, rank, or gender, with the probability of being subject to a military justice action. Logistic regression allows for the coefficients to be converted into odds ratios. Odds ratios that are statistically significant and greater than 1.00 indicate that individuals with that characteristic are more likely to be subject to a military justice action. For example, an odds ratio of 1.55 for Black servicemembers would mean that they are 1.55 times more likely to be subject to a military justice action compared to White servicemembers. Odds ratios that are statistically significant and lower than 1.00 indicate that individuals with that characteristic are less likely to be subject to a military justice action. We controlled for years of service among the lower enlisted ranks (E1-E4), but excluded age from the Air Force multivariate regression analyses due to high correlation with the rank and years of service variables. Appendix VIII: Coast Guard Data and Analyses This appendix contains several tables that show the underlying data and analyses used throughout this report relating to Coast Guard personnel and military justice disciplinary actions from fiscal years 2013 through 2017. We did not include populations that contained fewer than 20 servicemembers in the populations presented in these tables to ensure the protection of sensitive information. As a result, the populations presented in this appendix may vary among the different tables and may vary from the populations presented in other places in this report. Our analyses of the Coast Guard's investigations, military justice, and personnel databases, as reflected in these tables, taken alone, do not establish the presence or absence of unlawful discrimination. 
Multivariate Regression Analyses of Coast Guard Data The multivariate results listed below in table 52 show the odds ratios for the multivariate regression analyses of Coast Guard data. We used logistic regression to assess the relationship between the independent variables, such as race, education, rank, or gender, with the probability of being subject to a military justice action. Logistic regression allows for the coefficients to be converted into odds ratios. Odds ratios that are statistically significant and greater than 1.00 indicate that individuals with that characteristic are more likely to be subject to a military justice action. For example, an odds ratio of 1.55 for Black servicemembers would mean that they are 1.55 times more likely to be subject to a military justice action compared to White servicemembers. Odds ratios that are statistically significant and lower than 1.00 indicate that individuals with that characteristic are less likely to be subject to a military justice action. We excluded age and years of service from the Coast Guard analyses due to high correlation with the rank variable. Appendix IX: Key Indicators for Military Justice Actions We found that age, rank, length of service, and education were indicators of a servicemember’s likelihood of being the subject of a recorded investigation, court-martial, or nonjudicial punishment across the military services. To analyze age, rank, length of service, and education, we used bivariate regression analyses to determine which sub-population of each attribute was most likely to be subject to a recorded investigation, court-martial, or nonjudicial punishment. This appendix contains several tables that show the rank, education, length of service, and age groups most likely to be subject to a recorded investigation, tried in general and special courts-martial, tried in summary court-martial, and receive a nonjudicial punishment for all services from fiscal years 2013 through 2017. 
For the Coast Guard, we could not analyze age, rank, length of service, and education as indicators for courts-martial or nonjudicial punishment due to the small number of recorded military justice cases from fiscal years 2013 through 2017. Our analyses of the services’ investigations, military justice, and personnel databases, as reflected in these tables, taken alone, do not establish the presence or absence of unlawful discrimination. Appendix X: Comments from the Department of Defense Appendix XI: Comments from the Department of Homeland Security Appendix XII: GAO Contact and Staff Acknowledgments GAO Contact Brenda S. Farrell, (202) 512-3604 or farrellb@gao.gov. Staff Acknowledgments In addition to the contact named above, key contributors to this report were Kimberly C. Seay, Assistant Director; Parul Aggarwal; Christopher Allison; Renee S. Brown; Vincent M. Buquicchio; Won (Danny) Lee; Amie M. Lesser; Serena C. Lo; Dae B. Park; Samuel J. Portnow; Clarice Ransom; Christy D. Smith; Preston Timms; and Schuyler Vanorsdale.
Why GAO Did This Study The Uniform Code of Military Justice (UCMJ) was established to provide a statutory framework that promotes fair administration of military justice. Every active-duty servicemember is subject to the UCMJ, with more than 258,000 individuals disciplined from fiscal years 2013-2017, out of more than 2.3 million unique active-duty servicemembers. A key principle of the UCMJ is that a fair and just system of military law can foster a highly disciplined force. House Report 115-200, accompanying a bill for the National Defense Authorization Act for Fiscal Year 2018, included a provision for GAO to assess the extent that disparities may exist in the military justice system. This report assesses the extent to which (1) the military services collect and maintain consistent race, ethnicity, and gender information for servicemembers investigated and disciplined for UCMJ violations that can be used to assess disparities, and (2) there are racial and gender disparities in the military justice system, and whether disparities have been studied by DOD. GAO analyzed data from the investigations, military justice, and personnel databases from the military services, including the Coast Guard, from fiscal years 2013-2017 and interviewed agency officials. What GAO Found The military services collect gender information, but they do not collect and maintain consistent information about race and ethnicity in their investigations, military justice, and personnel databases. This limits their ability to collectively or comparatively assess these data to identify any disparities (i.e., instances in which a racial, ethnic, or gender group was overrepresented) in the military justice system within and across the services. For example, the number of potential responses for race and ethnicity across the military services' databases ranges from five to 32 options for race and two to 25 options for ethnicity, which can complicate cross-service assessments. 
The services also are not required to and, thus, do not report demographic information in their annual military justice reports—information that would provide greater visibility into potential disparities. GAO's analysis of available data found that Black, Hispanic, and male servicemembers were more likely than White or female members to be the subjects of investigations recorded in databases used by the military criminal investigative organizations, and to be tried in general and special courts-martial in all of the military services when controlling for attributes such as rank and education. GAO also found that race and gender were not statistically significant factors in the likelihood of conviction in general and special courts-martial for most services, and minority servicemembers were either less likely to receive a more severe punishment than White servicemembers or there was no difference among racial groups; thus, disparities may be limited to particular stages of the process. The Department of Defense (DOD) has taken some steps to study disparities, but has not comprehensively evaluated the causes of racial or gender disparities in the military justice system. Doing so would better position DOD to identify actions to address disparities and help ensure the military justice system is fair and just. Note: These analyses, taken alone, should not be used to make conclusions about the presence or absence of unlawful discrimination. These multivariate regression analysis results estimate whether a racial or gender group is more likely or less likely to be the subject of an investigation or a trial in general or special courts-martial after controlling for race, gender, rank, and education, and in the Air Force, years of service. GAO made all racial comparisons to White servicemembers and all gender comparisons to females. GAO grouped individuals of Hispanic ethnicity together, regardless of race. 
What GAO Recommends GAO is making 11 recommendations, including that the services develop the capability to present consistent race and ethnicity data, and that DOD include demographic information in military justice annual reports and evaluate the causes of disparities in the military justice system. DOD and the Coast Guard generally concurred with GAO's recommendations.
The Biodefense Strategy Provides Opportunity to Create an Enterprise-Wide Approach, but Implementation Challenges Remain We found that the National Biodefense Strategy and associated plans bring together all the key elements of federal biodefense capabilities, which presents an opportunity to identify gaps and consider enterprise-wide risk and resources for investment trade-off decisions. However, challenges with planning to manage change; limited guidance and methods for analyzing capabilities; and lack of clarity about decision-making processes, roles, and responsibilities while adapting to a new enterprise-wide approach could limit the success of the Strategy's implementation. Framework Created to Assess Enterprise-Wide National Biodefense Capabilities The National Biodefense Strategy and its associated plans bring together the efforts of federal agencies with significant biodefense roles, responsibilities, and resources to address naturally-occurring, accidental, and intentional threats. The Strategy and plans also provide processes for collecting and analyzing comprehensive information across the enterprise, an important step toward the kind of enterprise-wide strategic decision-making we have called for. The Strategy defines the term "biothreat" broadly to include all sources of major catastrophic risk, including naturally-occurring biological threats, the accidental release of pathogens, and the deliberate use of biological weapons. Officials we interviewed noted that this is the first time that the federal government has identified activities across the whole biodefense enterprise and assessed resources and gaps to address multiple sources of threat regardless of source. The Strategy also outlines high-level goals and objectives to help define priorities. 
NSPM-14, which was issued to support the strategy, established a structure and process by which federal agencies can assess enterprise-wide biodefense capabilities and needs, and subsequently develop guidance to help inform agency budget submissions. NSPM-14 lays out, in broad strokes, a process to identify biodefense efforts and assess how current resources support the Strategy, how existing programs and resources could better align with the Strategy, and how additional resources, if available, could be applied to support the goals of the Strategy. As shown in figure 1, this process begins through a data call with participating agencies documenting all biodefense programs, projects, and activities within their purview in a biodefense memorandum. In interviews, officials from participating agencies stated that the NSPM-14 processes constitute a new approach to identifying gaps and setting budget priorities for biodefense, and that they viewed the approach as generally well designed. Additionally, agency officials said that the assessment and joint policy guidance development process outlined in NSPM-14 offered some promise for helping agencies identify the resources necessary to achieve the Strategy's goals. Nevertheless, officials from all of the agencies we interviewed, even those with the most optimistic views on the leadership and governance structure design, tempered their responses with the caveat that implementation is in such early stages that it remains to be seen how effective these structures will actually be once tested. Implementation Challenges Remain In our February 2020 report, we also identified challenges that if not addressed could hinder enterprise-wide biodefense efforts. Specifically, although the Strategy and associated plans establish the foundation for enterprise risk management, we and biodefense agency officials identified multiple challenges that could affect the Strategy's implementation. 
These include challenges individual agencies faced during the initial data collection process as well as a lack of planning and guidance to support an enterprise-wide approach. In our analyses and interviews, we found that parts of the process in the first year were underdeveloped, raising questions about (1) the plans to support change management practices and ensure that early-implementation limitations do not become institutionalized in future years' efforts; (2) guidance and methods for meaningfully analyzing the data; and (3) the clarity of decision-making processes, roles, and responsibilities. Challenges adapting to new procedures. During our interviews, agency officials reported challenges they faced in the first year's data collection effort. These challenges may have led to incomplete data collection, but are not wholly unexpected given that they occurred in the context of the individual agencies and officials adapting to new procedures and a broader cultural shift from how they have approached their biodefense missions in the past. Officials told us that because of the learning involved the first time through the process, agencies may not have submitted complete or detailed information about their biodefense programs. Some officials we interviewed voiced concern that this first-year effort could set a poor precedent for these activities in future years if the challenges are not acknowledged and addressed. For example, an official noted that committing to the first year's results as the "baseline" for future years of the Strategy's implementation could compound or institutionalize the issues encountered in the first year. Officials from HHS and Office of Management and Budget staff stressed that this process will be iterative, with the first year being primarily about outlining the existing biodefense landscape. 
Our prior work on organizational transformations states that incorporating change management practices improves the likelihood of successful reforms and notes that it is important to recognize agency cultural factors that can either help or inhibit reform efforts. However, the agencies involved in implementing the Strategy do not have a plan that includes change management practices that can help prevent these challenges from being carried forward into future efforts, and help reinforce enterprise-wide approaches, among other things. To address this issue, we recommended the Secretary of HHS direct the Biodefense Coordination Team to establish a plan that includes change management practices—such as strategies for feedback, communication, and education—to reinforce collaborative behaviors and enterprise-wide approaches and to help prevent early implementation challenges from becoming institutionalized. HHS concurred with this recommendation. Guidance and methods for analyzing data. We found a lack of clear procedures and planning to help ensure that the Biodefense Coordination Team is prepared to analyze the data, once it has been collected, in a way that leads to recognition of meaningful opportunities to leverage resources in efforts to maintain and advance national biodefense capabilities. In particular, HHS (1) has not documented guidance and methods for analyzing the data, including but not limited to methods and guidance for how to account for the contribution of nonfederal capabilities; and (2) does not have a resource plan for staffing and sustaining ongoing efforts. Specifically, we found that the processes for the Biodefense Coordination Team to analyze the results of all the individual agency data submissions and identify priorities to guide resource allocation were not agreed upon or documented prior to the agency efforts and continue to lack specificity and transparency.
In our interviews, officials from four agencies said they were uncertain about fundamental elements of the implementation process, including how information gathered will be used to identify gaps and set priorities. Additionally, the initial effort to collect information on all programs, projects, and activities focused on existing federal activities and did not include a complete assessment of biodefense capabilities at the nonfederal level—capabilities needed to achieve the goals and objectives outlined in the Strategy. Officials we interviewed also expressed concern about the resources that the Biodefense Coordination Team had available to it, both in the first year and on an ongoing basis. The officials told us that not all agencies were able to provide a full-time detailee to help support the team. We have previously reported that agencies need to identify how interagency groups will be funded and staffed. However, officials from multiple agencies told us that the initial planning for the staffing and responsibilities for the Biodefense Coordination Team had not been finalized. Without a plan to help ensure sufficient resources and mitigate resource challenges for ongoing efforts, the Biodefense Coordination Team risks not having the capacity it needs to conduct meaningful analysis, which would undermine the vision created by the Strategy and NSPM-14. To address these issues, we recommended the Secretary of HHS direct the Biodefense Coordination Team to (1) clearly document guidance and methods for analyzing the data collected from the agencies, including ensuring that nonfederal resources and capabilities are accounted for in the analysis, and (2) establish a resource plan to staff, support, and sustain its ongoing efforts. HHS concurred with both recommendations. Roles and responsibilities for joint decision-making.
The governing bodies overseeing the National Biodefense Strategy’s implementation—the Biodefense Steering Committee and Biodefense Coordination Team—did not clearly document key components of the assessment process and roles and responsibilities for joint decision-making in the first year of NSPM-14 implementation. This raises questions about how these bodies will move from an effort to catalog all existing activities to decision-making that accounts for enterprise-wide needs and opportunities. For example, officials from multiple agencies were not certain how the governing bodies would make joint decisions regarding priority-setting and the allocation of resources, how they would assign new biodefense responsibilities if gaps were identified, and to what extent the Biodefense Steering Committee could enforce budgetary priorities, if at all. We also found a lack of shared understanding and agreement about how the interagency process would work to align resources toward any identified gaps and reconfigure resources for any identified redundancies or inefficiencies. Additionally, we found that Presidential memorandums guiding the process did not detail specific decision-making principles or steps for reaching consensus or even for raising decision points about how to best leverage or direct resources across the enterprise in response to any gaps or inefficiencies. Similarly, agency officials we interviewed were not clear how this process would work, how decisions would be made, or how agencies would agree to take on new responsibilities to bridge gaps to achieve the Strategy’s goals. Further, the governing bodies have not fully defined the roles and responsibilities for making enterprise-wide decisions that affect individual agency budgets and for enforcing enterprise-wide budget priorities.
As with other parts of the NSPM-14 implementation process, the details regarding specific roles and responsibilities for directing and enforcing budget decisions lack detail and specificity. Additionally, officials from four agencies stated that the charter for the Biodefense Coordination Team has not been finalized, further delaying the articulation of roles and responsibilities and the ability to establish a shared agenda and common operating picture. As a result, some officials remain skeptical of the effectiveness of any decisions made. We previously reported that effective national strategies should help clarify implementing organizations’ relationships in terms of leading, supporting, and partnering. In the context of the Strategy, that includes how enterprise-wide decisions about leveraging or directing resources to fill gaps and reduce inefficiency will be made and by whom. Similarly, our previous work has found that articulating and agreeing to a process for making and enforcing decisions and clarifying roles and responsibilities can improve the clarity surrounding a shared outcome, and that articulating these agreements in formal documents can strengthen agency commitment to working collaboratively and provide the overall framework for accountability and oversight. Uncertainty around the mechanisms to identify enterprise-wide priorities along with the lack of clearly documented and agreed upon processes, roles, and responsibilities for joint decision-making jeopardize the Strategy’s ability to enhance efficiency and effectiveness of the nation’s biodefense capabilities. To address this issue, we recommended that the Secretary of HHS direct the Biodefense Coordination Team to clearly document agreed upon processes, roles, and responsibilities for making and enforcing enterprise-wide decisions. HHS concurred. 
In conclusion, the current COVID-19 outbreak demonstrates that responding to the ever-changing nature and broad array of biological threats is challenging. The National Biodefense Strategy calls for improving state, local, tribal, territorial, private sector, federal, regional, and international surveillance systems and networks to contain, control, and respond to biological incidents. As the current coronavirus outbreak continues to cross regional and international borders, the federal government must take necessary steps to protect the American public. At the same time, we must not lose sight of the next threat. The National Biodefense Strategy and NSPM-14 put in place a framework to be able to assess threats and make difficult decisions about how to apply limited resources to achieve the best benefit. However, the Strategy is only as good as its implementation. Taking the necessary steps to address the recommendations we have made regarding managing this cultural change, analyzing data, ensuring sufficient resources to maintain implementation efforts, and clearly articulating roles and responsibilities for joint decision-making will better position our nation for the threats we face today and in the future. Chairwoman Maloney, Ranking Member Jordan, and Members of the Committee, this concludes our prepared statement. We would be happy to respond to any questions you may have at this time. GAO Contact and Staff Acknowledgments If you or your staff have any questions concerning this testimony, please contact Christopher P. Currie at (404) 679-1875, CurrieC@gao.gov, or Mary Denigan-Macauley at (202) 512-7114, DeniganMacauleyM@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement include Kathryn Godfrey (Assistant Director), Susanna Kuebler (Analyst-in-Charge), Michele Fejfar, Eric Hauswirth, Tracey King, and Jan Montgomery.
Key contributors for the previous work that this testimony is based on are listed in each product. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study GAO has reported on the inherently fragmented nature of the federal and nonfederal resources needed to protect the nation from potentially catastrophic biological threats. GAO called for a strategic approach to help the federal government better leverage resources and manage risk. The White House issued the National Biodefense Strategy and the Presidential Memorandum on the Support for National Biodefense to promote a more efficient and coordinated biodefense enterprise. The National Defense Authorization Act for Fiscal Year 2017 included a provision that GAO review the strategy. This testimony highlights key findings from our February 2020 report, which analyzed the extent to which the Strategy and related implementation efforts are designed to allow an enterprise-wide approach. What GAO Found Issued in September 2018, the National Biodefense Strategy (Strategy) and implementation plan, along with National Security Presidential Memorandum-14 (NSPM-14), are designed to enhance national biodefense capabilities. NSPM-14 established a governance structure composed of relevant federal agencies and chaired by the Secretary of Health and Human Services (HHS) to guide implementation. It also required federal agencies with biodefense responsibilities to collect and assess data on their biodefense activities to, among other things, identify gaps. There are a number of challenges, however, that could limit long-term implementation success. Among other things, there was no documented methodology or guidance for how data are to be analyzed to help the enterprise identify gaps and opportunities to leverage resources, including no guidance on how nonfederal capabilities are to be accounted for in the analysis. Agency officials were also unsure how decisions would be made, especially if addressing gaps or opportunities to leverage resources involved redirecting resources across agency boundaries.
Although HHS officials pointed to existing processes and directives for interagency decision making, GAO found there are no clear, detailed processes, roles, and responsibilities for joint decision-making, including how agencies will identify opportunities to leverage resources or who will make and enforce those decisions. As a result, questions remain about how this first-year effort to catalog all existing activities will result in a decision-making approach that involves jointly defining and managing risk at the enterprise level. Without clearly documented methods, guidance, processes, and roles and responsibilities for enterprise-wide decision-making, the effort runs the risk of failing to move away from traditional mission stovepipes toward a strategic enterprise-wide approach that meaningfully enhances national capabilities. What GAO Recommends In the February 2020 report, GAO made four recommendations to the Secretary of HHS, including working with other agencies to document methods for analysis and the processes, roles, and responsibilities for enterprise-wide decision making. HHS concurred with all the recommendations and described steps to implement them.
Background The 8(a) program is designed to assist small, disadvantaged businesses in competing in the American economy through business development. Over the course of the program, qualified small, disadvantaged businesses can receive business development support from SBA, such as mentoring, procurement assistance, business counseling, training, financial assistance, surety bonding, and other management and technical assistance. One of the key areas of support is eligibility for competitive and sole-source federal contracts that are set aside for 8(a) businesses, which can be an important factor in the financial development of ANC-owned firms. Oversight and monitoring of all firms participating in the 8(a) program are delegated to each of SBA’s 68 district offices nationwide. Among these district offices, staff at the Alaska District Office were assigned and oversaw the majority of all participating ANC-owned firms. ANCs and ANC-owned firms have a unique status in the 8(a) program and can enter into complex business arrangements. In terms of their organizational structures, ANCs can be either for-profit or not-for-profit and can own a family of for-profit subsidiary firms, including, but not limited to, wholly owned holding companies that often provide administrative support to smaller sister ANC-owned firms. As a condition of the 8(a) program, participating ANC-owned firms must be for-profit. Generally, ANC-owned firms can remain in the 8(a) program for up to 9 years, provided they maintain their eligibility. During the first four “developmental” years, participating firms may be eligible for assistance in program areas including sole-source and competitive 8(a) contract support, and training in business capacity development and strategies to compete successfully for both 8(a) and non-8(a) contracts, among other things.
In the final 5 years, firms prepare to transition out of the program and are required to obtain a certain percentage of non-8(a) revenue to demonstrate their progress in developing into a viable business that is not solely reliant on the 8(a) program. SBA Has Faced Long-Standing Weaknesses in Its Oversight and Monitoring of Tribal Firms’ Compliance with 8(a) Program Requirements Across three reports on SBA’s 8(a) program, we have found persistent weaknesses in the oversight and monitoring of participating Tribal firms, in particular ANC-owned firms. Specifically, we found that SBA had (1) incomplete information and documentation on ANC-owned firms’ compliance with regulatory requirements; (2) limitations in its ability to track and share key program data needed to enforce revenue rules of Tribal firms, including ANC-owned firms; (3) insufficient staffing in its Alaska District Office to carry out necessary and critical monitoring tasks of ANC-owned firms; and (4) inadequate program guidance for clearly communicating to staff how to interpret new regulations. Incomplete information and documentation on ANC-owned firms and their compliance with regulations: In our 2016 report, we noted that during a 2014 site visit to the Alaska District Office, incomplete information and documentation limited SBA’s oversight of the regulatory requirements specific to the ANC-owned firms we examined. For example, SBA faced significant challenges in providing us with very basic information on ANC-owned firms, such as the total number of firms serviced by the agency. Indeed, during the course of our review, it took 3 months for SBA to provide us with a list of ANC-owned firms in the 8(a) program, and on three separate occasions SBA officials provided three separate numbers for the total number of ANC-owned firms, ranging from 226 to 636.
We noted in our 2016 report that SBA’s inability to account for and make available principal information on all of the ANC- owned firms participating in the program raises concerns about the integrity of the agency’s internal controls and ability to provide effective and sustained oversight. As another example, we reported in 2016 that SBA was unable to provide seven of 30 required agency offer letters for 8(a) contracts that we requested for our review of contracts that may have been follow-on, sole- source contracts. According to the regulation, these required offer letters are critical documents that could have assisted SBA staff in understanding a contract’s acquisition history and any small business that performed this work prior to any subsequent awards. Once an applicant is admitted to the 8(a) program, it may not receive an 8(a) sole-source contract that is also a follow-on contract to an 8(a) contract that was performed “immediately previously” by another 8(a) program participant (or former participant) owned by the same ANC. We found that SBA’s inability to enforce the regulatory prohibition against follow-on, sole- source contracts was directly tied to the quality of the documentation it collected from contracting agencies. While we found that one program official in the Alaska District Office took steps during our 2016 review to ask agencies to specifically report whether contracts are follow-on, sole- source awards in offer letters, we have no evidence supporting that this practice was more broadly adopted by the program as a whole. Ultimately, we recommended and SBA agreed to enhance its internal controls and oversight of ANC-owned firms in the 8(a) program by ensuring that all ANC-owned firm files contain all relevant documents and information and providing additional guidance and training to SBA staff on the enforcement of related policies, among other things. 
Limitations in tracking and sharing key program data needed to enforce 8(a) revenue rules: In all three reports mentioned in this testimony, we found that SBA faced limitations in tracking information on the primary revenue generators for Tribal firms, including ANC-owned firms, to ensure that multiple firms under one parent ANC are not generating their revenue in the same primary line of business—that is, expressed as and operating under the same North American Industry Classification System (NAICS) code—which SBA’s regulation intends to limit. As discussed later in this testimony, we first identified this issue in our 2006 report, noting that SBA was not effectively tracking ANC-owned firms’ revenue data to ensure that the sister firms were not generating the majority of revenue in the same line of business. We recommended that SBA collect information on the participation of 8(a) ANC-owned firms as part of required overall 8(a) monitoring, to include tracking the primary revenue generators for ANC-owned firms and to ensure that multiple subsidiaries under one ANC are not generating their revenue in the same primary line of business. Then in our 2012 report, we found that SBA had not addressed this limitation and recommended that SBA develop a system that had the capability to track revenues from ANC-owned firms’ primary and secondary lines of business to ensure that ANC-owned firms under the same parent ANC are not generating the majority of their revenue from the same primary line of business. In our 2016 report, we found that SBA still had not developed such a system and thus was not effectively tracking and sharing the type of revenue information needed to ensure 8(a) ANC-owned firms are following the intent of 8(a) revenue rules. For example, we found that without such a system, sister ANC-owned firms owned by the same ANC could circumvent the intent of the prohibition. 
In particular, one sister ANC-owned firm could generate a greater portion of revenues under its secondary line of business that another sister ANC-owned firm is using as its primary line of business. Although this type of activity is not prohibited, we determined that if such activity is left untracked, a firm’s secondary line of business could effectively become its primary revenue source in the same line of business that its sister firm claims for its primary line of business without actually violating SBA’s regulation. During our 2016 review, we found 5 pairs of ANC-owned firms participating in the 8(a) program from fiscal years 2011 through 2014 that concurrently generated millions of dollars in the same line of business as their sister ANC-owned firm’s primary line of business, while generating less or no revenue under their own primary line of business. As we found then, such activity could, intentionally or not, potentially circumvent the intent of SBA’s prohibition, and as discussed later, we recommended that SBA take action to prevent ANC-owned firms from circumventing this rule. Figure 1 below illustrates one example we reported on in our 2016 report. Insufficient staffing levels in SBA’s Alaska District Office: In our 2006 report, we noted that SBA lacked adequate staffing levels in the Alaska District Office—a district office responsible for the oversight of the majority of ANC-owned firms. Our reports, and a 2008 report issued by the SBA’s Office of the Inspector General, have shown that inadequate staffing was a long-standing challenge and a consistent weakness that directly contributed to SBA’s inability to provide adequate oversight. In our 2012 report, we noted that ANC-owned firms could quickly outgrow the program. It should be noted that we recommended that SBA evaluate its staffing levels in 2006, and in our 2016 report, we found that the staffing challenges persisted. 
As a result, we found that SBA needed a sustained and comprehensive approach to staffing its Alaska District Office in order to conduct sufficient oversight of ANC-owned firm activities. We were told that frequent staff turnover directly contributed to the limited number of staff in the Alaska District Office with ANC firm expertise—limiting their ability to conduct effective and timely oversight of the ANC-owned firms participating in the program. An SBA official told us at the time that the optimum number of staff for the Alaska District Office was five with no more than 100 assigned 8(a) firm files each; however, that office had 1.5 staff responsible for about 200 files each. We found, based on SBA documentation and observation during our site visit to Alaska, that because of this staffing shortage, supervisory review of contract monitoring activities and annual reviews fell behind, resulting in a backlog of oversight duties related to ANC-owned firms. In 2016, we found that SBA took some short-term actions to address the issues that we identified, such as temporarily redistributing the management of ANC-owned firm files across several other district offices and within the Alaska District Office. As for long-term action, SBA officials provided us with documentation describing the program’s long-term staffing strategy, which included succession planning and managing attrition. For example, SBA planned to hire four additional business opportunity specialists and an attorney with ANC expertise. At that time, SBA began implementing its staffing strategy by hiring additional business opportunity specialists for its Alaska District Office. However, we have not evaluated whether the agency implemented the remainder of its strategy for succession planning and managing attrition. Inadequate program guidance: We reported that SBA lacked program guidance that could have assisted the Alaska District Office in improving staff’s knowledge of program rules and monitoring practices.
We initially raised our concern about the need for strong guidance in 2006 given the unique status of ANC-owned firms in the 8(a) program and the relationships they enter into. For our 2012 report, SBA officials told us that the agency was in the process of updating its program guidance. However, in our 2016 report, we similarly found that staff lacked sufficient guidance and training on key program regulations and internal monitoring practices, and concluded that resulting inconsistent supervisory review of ANC transactions and related documentation increased SBA’s vulnerability to compliance and fraud risks. Several months after we issued our report in 2016, SBA issued updated standard operating procedures on program rules that address the 2011 regulatory changes related to sister ANC-owned firms receiving follow-on, sole-source contracts and sister subsidiaries sharing primary NAICS codes. In addition to updating the guidance, SBA also provided training to its Alaska District Office staff on its 2011 regulations, specifically training on prohibitions against follow-on, sole-source contracts. SBA officials also told us in 2016 that staff in the Alaska District Office were provided training in supervisory review and other critical file management procedures, areas we had identified as weaknesses. SBA Has Not Yet Implemented Some Key Recommendations to Address Oversight and Monitoring Weaknesses To address the weaknesses described above, as well as others related to oversight and monitoring, our 2006, 2012, and 2016 reports contained a total of 21 recommendations to SBA. While SBA has fully implemented 15 of these recommendations, SBA has not implemented six recommendations—three of which we highlight in this statement. All six recommendations are important to enhancing SBA’s oversight of ANC-owned firms in the 8(a) program.
We have not evaluated the operational effectiveness of SBA’s actions to implement the 15 recommendations, but if effectively implemented, those actions should help SBA improve its oversight and monitoring of ANC-owned firms in the 8(a) program. In response to our recommendations, SBA’s actions included providing training to its staff that emphasized regulations governing the requirement for procuring agencies to specifically state whether a contract is a follow-on contract in their offer letters, which could help reduce the award of follow-on, sole-source contracts to sister ANC-owned firms; developing and enacting a regulation that gives SBA the authority, under certain circumstances, to change an ANC-owned firm’s primary line of business (expressed as a NAICS code) to the NAICS code that generates the greatest portion of the firm’s revenue; this action is intended to help SBA enforce rules preventing sister ANC-owned firms from operating in the same primary lines of business; and updating and providing written guidance to field staff officials on the enforcement of follow-on sole-source contract regulations. However, to date SBA has not provided us with evidence that it has implemented the three following recommendations, which, if implemented as intended, could significantly improve its oversight of the 8(a) program. Absent action on these recommendations, SBA exposes the program to continued noncompliance. Tracking revenue data and other information on 8(a) ANC-owned firms: As previously discussed, SBA’s regulation prohibits ANCs from owning multiple firms that operate under the same primary line of business (expressed as a primary NAICS code). In each of our 2006, 2012, and 2016 reports we identified weaknesses in SBA’s ability to track this information in order to prevent sister ANC-owned firms from violating this rule or circumventing its intent.
As a result, in 2006 we recommended that SBA track the primary revenue generators for ANC-owned firms and to ensure that multiple subsidiaries under one ANC are not generating their revenue in the same primary line of business, among other things. Similarly, in 2012 we recommended that, as SBA is developing a tracking system, it should take steps to ensure that the system tracks information on ANC-owned firms, including revenues and other information. In 2006 and 2012, SBA did not indicate whether it agreed with and intended to implement these recommendations. However, during our 2016 audit, SBA informed us that it had plans to address this issue, but could not provide any details. We therefore recommended in 2016 that SBA document this planned method for tracking revenue generated under subsidiaries’ primary and secondary lines of business. SBA agreed to implement this 2016 recommendation. As part of this recommendation, we stated that SBA’s documentation should include milestones and timelines for when and how the method will be implemented. We also recommended that SBA provide the appropriate level of access to and sharing of relevant subsidiary data across district offices, including primary and secondary lines of business and revenue data, once SBA develops a database with the capabilities of collecting and tracking these revenue data. In August 2018, SBA informed us that regulations promulgated in 2016 allow it to change an 8(a) ANC-owned firm’s primary line of business under certain circumstances if the greatest portion of the firm’s revenues evolved from one line of business to another. In our 2016 report, we concluded that the new regulations were a step in the right direction but would be difficult to implement effectively without the proper tracking and visibility of revenue data that we describe above and in our 2016 report. 
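The cross-subsidiary check that such a tracking system would need to perform can be sketched in a few lines. The firm names, NAICS codes, and revenue figures below are hypothetical, and the rule is a simplified reading of the prohibition described above rather than SBA's actual methodology: it flags any firm whose largest revenue source is a NAICS code that a sister firm under the same parent ANC claims as its primary line of business.

```python
# Hypothetical sketch of a cross-subsidiary revenue check. Firm names,
# NAICS codes, and revenue figures are illustrative, not SBA data.
from collections import defaultdict

# Each firm: (parent ANC, declared primary NAICS, {NAICS code: revenue})
firms = {
    "FirmA": ("ANC-1", "541330", {"541330": 1_000_000, "236220": 50_000}),
    "FirmB": ("ANC-1", "236220", {"236220": 40_000, "541330": 3_000_000}),
    "FirmC": ("ANC-2", "561210", {"561210": 500_000}),
}

def flag_potential_circumvention(firms):
    """Return (firm, sister, shared NAICS) triples where a firm's largest
    revenue source matches a sister firm's declared primary line of business."""
    by_parent = defaultdict(list)
    for name, (parent, primary, revenue) in firms.items():
        by_parent[parent].append((name, primary, revenue))
    flags = []
    for siblings in by_parent.values():
        for name, primary, revenue in siblings:
            top_naics = max(revenue, key=revenue.get)  # actual top revenue line
            if top_naics == primary:
                continue  # revenue is concentrated where declared; nothing to flag
            for sib_name, sib_primary, _ in siblings:
                if sib_name != name and sib_primary == top_naics:
                    flags.append((name, sib_name, top_naics))
    return flags

print(flag_potential_circumvention(firms))
```

Running the sketch on the sample data flags FirmB, which declares 236220 as its primary code but earns most of its revenue under 541330, the primary code of its sister FirmA, mirroring the pattern GAO illustrated in its 2016 report.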
In 2018, SBA officials noted that they were testing an analytics tool that, they said, would allow them to track revenues for ANC-owned firms, as we recommended. SBA’s estimated completion date for the evaluation and implementation of this tool was December 31, 2018, but as of October 2019, SBA has not been able to provide documentation on whether this action has been implemented. We will continue to monitor SBA’s efforts to implement this recommendation. Criteria thresholds for contract modifications: As we reported in 2006, SBA regulation requires that when the contract execution function is delegated to the procuring agencies, these agencies must report to SBA certain 8(a) information, including contract modifications. Further, the agreements between SBA and the procuring agencies that we reviewed in 2006 require that the agencies provide SBA with copies of all 8(a) contract modifications within 15 days of the date of the contract award. However, in our 2006 report, we found that contracting officers were not consistently following these requirements. While some had notified SBA when incorporating additional services into the contract or when modifying the contract ceiling amount, others had not. Hence, we recommended that when revising relevant regulations and policies, the SBA Administrator should revisit the regulation that requires agencies to notify SBA of all contract modifications and consider establishing thresholds for notification. In 2006, SBA disagreed with this recommendation and thus had not revisited this regulatory requirement, but rather reiterated a preexisting requirement to provide all contract modifications, including administrative modifications, to SBA. We determined that this action did not fulfill our recommendation as it does not help to ensure that agencies are going to comply with the regulatory requirement. 
Small businesses potentially losing contracts to 8(a) ANC-owned firms: In our 2006 report, we found SBA's oversight had fallen short in that it did not consistently determine whether other small businesses were losing contracting opportunities when large, sole-source contracts were awarded to ANC-owned firms. Further, we found cases where SBA did not take action when incumbent small businesses lost contract opportunities after ANC-owned firms were awarded a large sole-source contract. Hence, we recommended that, when revising relevant regulations and policies, the SBA Administrator consistently determine whether other small 8(a) businesses are losing contracting opportunities when contracts are awarded through the 8(a) program to ANC-owned firms. SBA did not agree with this recommendation, nor did it address the intent of this recommendation by developing a procedure to consistently perform this action. Instead, SBA reported to us that in 2009 it performed a single analysis of a limited set of procurement data from a limited period and concluded the data did not indicate that other small 8(a) firms (e.g., small businesses that are unconditionally owned and controlled by one or more socially and economically disadvantaged individuals, such as black-owned and Hispanic-owned firms) were losing contracting opportunities to ANC-owned firms. We continue to believe that without a strategy for consistent monitoring of this issue, SBA is limited in determining the extent to which other small 8(a) businesses are being adversely affected by contracts awarded to ANC-owned firms. In summary, the findings I have described in my statement today have persisted over time as SBA has struggled to articulate and execute an effective overall monitoring and oversight strategy. Implementing our remaining recommendations could help SBA address its monitoring and oversight control weaknesses in a comprehensive manner. 
Chairwoman Chu, Ranking Member Spano, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. GAO Contact and Staff Acknowledgments For further information regarding this testimony, please contact Seto J. Bagdoyan, (202) 512-6722 or bagdoyans@gao.gov. In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this testimony are: Latesha Love (Assistant Director), Tatiana Winger (Assistant Director), Flavio Martinez (Analyst in Charge), Carla Craddock, April VanCleef, Tracy Abdo, Marcus Corbin, Colin Fallon, Julia Kennon, Barbara Lewis, Michele Mackin, Maria McMullen, James Murphy, Anna Maria Ortiz, William Shear, and Erin Villas. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study Federal obligations under SBA's 8(a) Business Development Program totaled about $10.9 billion in fiscal year 2019, according to federal procurement data reported as of October 7, 2019. SBA's 8(a) program is one of the federal government's primary vehicles for developing socially and economically disadvantaged small businesses, including firms owned by ANCs. One of the key benefits of this program is the ability for ANC-owned firms to receive federal contract awards that have been set aside solely for 8(a) firms. From 2006 through 2016, GAO issued three reports detailing the limitations of SBA's oversight and monitoring of ANC-owned firms participating in the 8(a) program. GAO's testimony discusses the highlights of the aforementioned three reports and the extent to which SBA has addressed the recommendations GAO made in those reports, as of October 2019. GAO examined SBA files and other documents, conducted site visits, and interviewed program officials to perform the work of those reports. What GAO Found In three reports issued between 2006 and 2016, GAO has found persistent weaknesses in the Small Business Administration's (SBA) oversight and monitoring of Tribal 8(a) firms, in particular the Alaska Native Corporations' (ANC) subsidiary firms (ANC-owned firms) that participate in SBA's 8(a) program. Over the course of the program, qualified small, disadvantaged businesses, including ANC-owned firms, can receive federal contract awards that have been set aside solely for such businesses, and business development support from SBA, such as mentoring, financial assistance, and other management and technical assistance. 
In its three reports, among other things, GAO found that SBA had (1) incomplete information and documentation on ANC-owned firms and their compliance with regulatory requirements; (2) limitations in its ability to track and share key program data needed to enforce its own program; (3) insufficient staffing in its Alaska District Office to carry out necessary and critical monitoring tasks; and (4) inadequate or vague program guidance for clearly communicating to staff how to interpret new regulations. GAO made 21 recommendations to SBA that address weaknesses in SBA's oversight and monitoring of ANC-owned firms participating in the 8(a) program. SBA has taken steps to implement many of those recommendations, including enhancing training for SBA staff that emphasized program rules, and developing and implementing a regulation that helps SBA better enforce rules against ANC-owned firms obtaining contracts for which they were not necessarily eligible. However, SBA has not yet implemented recommendations that, if implemented as intended, could significantly improve its oversight of the 8(a) program. For example, SBA has not yet addressed limitations raised in GAO's 2006 and 2016 reports regarding SBA's tracking of revenue information for ANC-owned firms, a gap that limits SBA's ability to enforce 8(a) rules intended to prevent multiple subsidiaries under one ANC from generating revenue in the same primary line of business. SBA officials informed GAO of the agency's plans to develop an information system capable of addressing this issue. However, at the time of GAO's 2016 report, SBA could not provide detailed information or plans about this system, and as of today, the agency could not provide documentation that this system is operational. 
As another example, SBA has not addressed GAO's 2006 recommendation to consistently determine whether other small businesses are losing contracting opportunities when SBA awards contracts through the 8(a) program to ANC-owned firms, as required in regulation—an area where GAO found that SBA had fallen short in its oversight. Instead, in 2009, SBA reported that it performed a single analysis of a limited set of procurement data from a limited period and concluded the data did not indicate that other small 8(a) firms (e.g., black-owned, Hispanic-owned, and others) were losing contracting opportunities to ANC-owned firms. However, SBA's actions did not address the intent of GAO's recommendation to “consistently” perform this oversight. Absent action on these recommendations, the program continues to be at risk of noncompliance. What GAO Recommends GAO made multiple recommendations in its reports from 2006 through 2016, many of which SBA has taken steps to implement. However, SBA has not addressed key GAO recommendations, including tracking and sharing ANC-related information across SBA regional offices, considering the establishment of criteria thresholds for contract modifications, and developing policies to consistently assess whether other small businesses are losing 8(a) contracts to ANC-owned firms. GAO continues to believe that implementing these recommendations would enhance SBA's oversight and monitoring of firms in the 8(a) program.
gao_GAO-19-329
gao_GAO-19-329_0
Background The Marine Corps uses a fleet of 23 helicopters to support the President in the national capital region and when traveling in the continental United States and overseas. These aircraft have been in service for decades. In April 2002, the Navy began development of a replacement helicopter later identified as the VH-71 program. By 2009, schedule delays, performance issues, and a doubling of cost estimates, from $6.5 billion in 2005 to $13 billion in 2009, prompted the Navy to terminate the program. The need for a replacement helicopter remained, and by April 2012, the Office of the Secretary of Defense approved the Navy’s current acquisition approach. The Navy’s approach is based on the modification of an in-production aircraft to replace the legacy aircraft, by incorporating an executive cabin interior and unique mission equipment such as communications and mission systems, and limiting modifications to the aircraft to avoid a costly airworthiness recertification. In May 2014, the Navy awarded a fixed-price incentive (firm target) contract to Sikorsky Aircraft Corporation, a Lockheed Martin Company, for an Engineering and Manufacturing Development (EMD) phase. The contract includes options for production quantities. The VH-92A presidential helicopter is based on Sikorsky’s S-92A commercial helicopter. The fixed-price incentive contract includes a ceiling price of $1.3 billion that limits the maximum amount that the Navy may have to pay the contractor under the contract, subject to other contract terms. The VH-92A is expected to provide improved performance, survivability, and communications capabilities, while offering increased passenger capacity when compared to the current helicopters. Sikorsky is taking S-92A aircraft from an active production line (at the Sikorsky plant in Coatesville, Pennsylvania) to a dedicated VH-92A modification facility for subsystem integration at its plant in Stratford, Connecticut. 
When the aircraft arrives from Coatesville, some components, such as circuit breaker panels, engines, and main and tail rotor blades, are removed. After airframe modifications are done, the aircraft is then transferred to the Sikorsky facility in Owego, New York, where integration of the mission communications system, painting, contractor-led testing, installation of the executive cabin interior, and delivery of the aircraft take place. See figure 1 for a depiction of modification of the commercial S-92A aircraft to the VH-92A presidential helicopter. The VH-92A development program includes delivery of two Engineering Development Model (EDM) test aircraft and four System Demonstration Test Article (SDTA) aircraft. The first flight of the first EDM aircraft took place in July 2017 and the second EDM aircraft’s first flight occurred in November 2017. The two EDM aircraft are currently undergoing government-led integrated testing, at Naval Air Station Patuxent River, Maryland, and were used to conduct an operational assessment in March 2019 to support a decision on whether to enter low-rate initial production. The four SDTA aircraft, now in the modification stages, are production representative aircraft being built under the development contract. These aircraft are to be used in the VH-92A’s initial operational test and evaluation, which is planned to begin in March 2020. The results of that testing will be used to inform a decision whether to enter full-rate production in 2021. These SDTA aircraft will be used to determine whether the VH-92A is operationally effective and suitable for its intended use. In July 2018, the Federal Aviation Administration certified the VH-92A EDM-1 aircraft and supporting documentation to allow delivery to the government under the contract. According to the program office, the first EDM VH-92A configured test aircraft arrived at Naval Air Station in Patuxent River, Maryland, to begin government-led performance testing. 
The program office explained that in December 2018, the contractor provided VH-92A EDM-2, the second development aircraft, to the Navy and it, too, is undergoing government testing. VH-92A Cost Estimates Are Decreasing While Program Manages Its Schedule and Performance Goals The VH-92A total program acquisition cost estimate has declined from $5.18 billion to $4.95 billion (then-year dollars) since the program started in April 2014. Contractor officials attribute the estimated cost decline to stable requirements, a low number of design changes, and streamlined processes and reviews. The program has incurred delays of about 5 months to the start of its operational assessment due to parts shortages and early integration problems during product development. Program officials told us they have adjusted schedule milestones accordingly and now project that the VH-92A is on track to meet its key performance parameters, including providing a fully interoperable mission communications system (MCS) in time for initial operational test and evaluation in 2020. Cost Estimates Have Declined Due to Stable Requirements and Efficiency Gains The Navy continues to reduce its acquisition cost estimate for the VH-92A program. The total VH-92A program acquisition cost estimate has decreased $234 million, or about 4.5 percent—from $5.18 billion to $4.95 billion (then-year dollars)—since the program started in April 2014. The total program acquisition unit costs have decreased by the same percentage. According to the program office, this decrease is composed, in part, of reductions of approximately: $36 million for lower than expected inflation rates, $88 million for efficiencies gained during development, and $103 million for revised spare parts cost and equipment production list. A key factor in controlling total program acquisition cost has been performance requirements stability. 
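The cost figures above can be checked with simple arithmetic. The sketch below is only a rough consistency check: the report's rounded billion-dollar figures imply a decrease of roughly $230 million (the report's more precise figure is $234 million), and the three itemized reductions are explicitly stated to account for only part of the total.

```python
# Rough arithmetic check of the VH-92A cost figures cited above
# (then-year dollars; the report's figures are rounded/approximate).
baseline = 5.18e9  # total acquisition cost estimate at program start, April 2014
current = 4.95e9   # current estimate

decrease = baseline - current    # ~$230 million from the rounded figures
                                 # (the report's unrounded figure is $234 million)
pct = decrease / baseline * 100  # ~4.4 percent, consistent with "about 4.5 percent"

# Itemized reductions cited by the program office (partial and approximate)
items = {
    "lower than expected inflation": 36e6,
    "development efficiencies": 88e6,
    "revised spares/equipment production list": 103e6,
}
itemized_total = sum(items.values())  # $227 million, most of the overall decrease
```

The $7 million gap between the itemized total and the report's $234 million figure is consistent with the report's wording that the itemized reductions explain the decrease only "in part."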
The Navy has not added any key performance requirements to the fixed-price contract, thereby limiting cost growth. In addition, the Navy and the contractor have been able to limit the number of necessary design changes that require modifications to aircraft. These modifications are now being incorporated into the four production representative aircraft. The Navy is using an existing basic ordering agreement with Sikorsky, separate from the VH-92A contract, for two additional design changes that are not part of the baseline program. These changes are to allow for improved visibility from the aircraft’s forward door and the addition of a fifth multi-functional display in the cockpit (which is identical to the existing four displays) to improve situational awareness. The program office is working with the contractor to determine the best time to make these modifications to the aircraft in order to minimize the effect on the production schedule. The final costs are still being negotiated; however, the program office expects the cost of implementing these two engineering changes to be minimal relative to the program’s total acquisition cost. The Navy and contractor have also taken advantage of other cost-saving measures, including streamlining some work processes and revising the testing approach for some components; they are also sharing secure facilities used in support of the current presidential helicopter. In addition, they eliminated activities deemed redundant to the Federal Aviation Administration VH-92A airworthiness certification and plan to use a streamlined reporting process for the March 2019 operational assessment. According to program officials, the VH-92A has also optimized its live fire test and evaluation program. 
The Program Is Operating within Its Original Approved Schedule Baseline, Despite Experiencing Some Delays in Development Overall, Sikorsky reported it had accomplished about 83.3 percent of development work, with the remainder to be completed by October 2020. As of February 2019, the contractor estimated it had completed nearly all of the activities necessary to demonstrate performance specification compliance under the contract, and the Navy is now more than halfway through the ground and flight testing requirements needed to support Milestone C, the decision point for entering into low-rate initial production. The program has addressed delays resulting from technical challenges and new discoveries during development by delaying the start dates for the operational assessment, the low-rate initial production decision, and initial operational test and evaluation by 5 months each. The milestone start dates still meet the baseline schedule thresholds. As we found in the past, parts shortages and an integration and assembly effort that took longer than planned contributed to delays early in the development of the two engineering development model aircraft. The overall effect has been between 3 and 5 months of schedule delays. In addition, some work initially allocated to the contractor’s site will now be completed at the Naval Air Station, Patuxent River, Maryland. This is a result of the contractor’s inability to get some parts when needed to maintain the planned build schedule. According to the program office, the Navy has implemented a number of mitigation strategies to reduce the effect of the schedule slip, including leasing a commercial S-92A for pilot training, reducing the duration of some future activities, adjusting the program’s schedule, and reexamining and optimizing some work processes to maintain the approved program baseline schedule. 
We also found that the program’s integrated master schedule (IMS) met the best practices for a reliable schedule when assessed against the criteria in the GAO Schedule Assessment Guide. The success of a program depends, in part, on having an integrated and reliable master schedule that defines when and how long work will occur and how each activity is related to the others. Such a schedule is necessary for government acquisition programs for many reasons. It provides not only a road map for systematic project execution but also the means by which to gauge progress, identify and resolve potential problems, and promote accountability at all levels of the program. An IMS provides a time sequence for the duration of a program’s activities and helps everyone understand both the dates for major milestones and the activities that drive the schedule. A program’s IMS is also a vehicle for developing a time-phased budget baseline. Moreover, it is an essential basis for managing tradeoffs between cost, schedule, and scope. Among other things, scheduling allows program management to decide between possible sequences of activities, determine the flexibility of the schedule according to available resources, predict the consequences of managerial action or inaction on events, and allocate contingency plans to mitigate risks. Our research has identified 10 best practices associated with effective schedule estimating that can be collapsed into 4 general characteristics (comprehensive, well-constructed, credible, and controlled) for sound schedule estimating. Overall, we found the program’s IMS fully met one and substantially met three of the four characteristics for sound schedule estimating. Table 2 provides a comparison of the planned timeframe for key events at development start to the current estimated schedule. 
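The assessment rollup described above can be sketched as a simple tally over the four characteristics. The ratings count (one "fully met," three "substantially met") comes from the text, but which specific characteristic received which rating is not stated in this excerpt, so the assignment in the sketch is illustrative only.

```python
# GAO's four general characteristics of a sound schedule, with illustrative
# ratings matching the counts reported for the VH-92A IMS (one fully met,
# three substantially met). The per-characteristic assignment is assumed,
# not taken from the report.
ratings = {
    "comprehensive": "substantially met",
    "well-constructed": "substantially met",
    "credible": "substantially met",
    "controlled": "fully met",  # illustrative assignment
}

# Tally ratings across the four characteristics.
tally = {}
for rating in ratings.values():
    tally[rating] = tally.get(rating, 0) + 1
# tally == {"substantially met": 3, "fully met": 1}
```

A rollup like this is how the 10 underlying best practices collapse into the four characteristic-level scores that GAO reports.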
The Navy’s operational assessment began in March 2019 and ended about 30 days later; this is nearly 2 months prior to the Milestone C review, which will authorize low-rate initial production. The contractor’s delivery of the first engineering development model aircraft to the government was about a month late. A Developmental Test and Evaluation official stated that this reduced the already short window of time between the end of development testing and start of the operational assessment. A Director, Operational Test and Evaluation official responsible for monitoring the program expressed concern that there is little time to address any new discoveries found during the operational assessment. The program office acknowledged that, while solutions to any newly discovered problems may not be ready to implement at the start of production, it expects to have enough information from government-led integrated testing and the operational assessment to move forward with the Milestone C decision. The Program Made Progress in Demonstrating Performance Goals through Planned Developmental Testing According to the contractor, by February 2019, its test program for the first two development aircraft will be nearly completed. In addition, as of December 2018, the government completed about 48 percent of its development ground and flight test points to support Milestone C but is slightly behind, as it had planned to complete about 57 percent at this time. Between August and December 2018, the program conducted three major test events—the Navy conducted 14 landings on the White House south lawn to assess approaches, departures, and operations in the landing zone. The Navy also installed MCS version 2.0 on the second EDM aircraft in support of the operational assessment and tested the ability to transport the VH-92A in a cargo plane. Figure 2 shows the status of government testing as of January 2019. 
Program Facing Development Challenges While the program has made progress, the VH-92A program continues to face development challenges that could affect Sikorsky’s ability to deliver fully capable aircraft prior to the start of initial operational test and evaluation. Those challenges include issues associated with the aircraft’s start procedures for the propulsion system, landing zone suitability, and the aircraft’s mission communications system interoperability with secure networks. According to the program office, the performance requirements associated with these challenges may not be fully achieved until after the low-rate initial production decision currently planned for June 2019, which may result in a need to retrofit already built aircraft. Below is additional information on each of those performance requirements. VH-92A aircraft start procedures: As we reported last year, the VH- 92A was pursuing technical improvements related to the S-92A propulsion system, which was not meeting a performance requirement. According to program officials, a previously identified solution is no longer being pursued. However, these officials stated that the program is continuing to assess current capabilities and both material and non-material solutions to any potential capability shortfalls. Testing to demonstrate aircraft performance against the requirement will be completed prior to the Milestone C review in June 2019. Design changes, if needed, will be coordinated with program stakeholders. Program risk for this performance requirement has not changed since our April 2018 report on the program. Landing zone suitability: The VH-92A operates in and out of a variety of restrictive and highly visible landing zones. The White House South Lawn is one of the most frequent locations utilized for helicopter operations in support of the President. 
As we reported last year, the program was not meeting a key system capability requirement to land the aircraft without adversely affecting landing zones (including the White House South Lawn). The program has still not fully met this requirement and its assessment of this risk has increased since our last report. According to program officials, Sikorsky expects to have a solution for this requirement by November 2020. Mission Communications System (MCS): The mission communications system is a subsystem of the VH-92A aircraft that provides on-board and off-board communications services for the pilots, passengers, and crew. Currently, the VH-92A program has experienced problems connecting the MCS to secure networks, presenting a new risk area for the program. According to program officials, the MCS cannot connect to required secure networks due to recent changes in security protocols. Design changes will be needed to permanently correct this problem. For the March 2019 operational assessment, the program plans to connect to existing networks that do not use the new security protocols. This allowed the operational assessment to proceed but limited the scope of testing. The Navy plans to have a final fix by January 2020 that will then be incorporated into the four production representative helicopters built under the development contract. These changes have caused the Navy to delay the start of the VH-92A initial operational test and evaluation by 3 months, a delay that is still within the approved program baseline threshold, as discussed earlier. Agency Comments We provided a draft of this report to DOD for review and comment. DOD provided technical comments, which were incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Acting Secretary of Defense and the Secretary of the Navy. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. 
If you or your staff have any questions about this report, please contact me at (202) 512-4841 or sullivanm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix l. Appendix l: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Bruce H. Thomas, Assistant Director; Marvin E. Bonner; Bonita J. P. Oden; Peter Anderson, Juana S. Collymore, Danny C. Royer, and Marie Ahearn made key contributions to this report. Related GAO Products Presidential Helicopter: VH-92A Program Is Stable and Making Progress While Facing Challenges. GAO-18-359. Washington, D.C.: April 30, 2018. Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-17-333SP. Washington, D.C.: March 30, 2017. Presidential Helicopter: Program Progressing Largely as Planned. GAO-16-395. Washington, D.C.: April 14, 2016. Presidential Helicopter Acquisition: Program Established Knowledge-Based Business Case and Entered System Development with Plans for Managing Challenges. GAO-15-392R. Washington, D.C.: April 14, 2015. Presidential Helicopter Acquisition: Update on Program’s Progress toward Development Start. GAO-14-358R. Washington, D.C.: April 10, 2014. Department of Defense’s Waiver of Competitive Prototyping Requirement for the VXX Presidential Helicopter Replacement Program. GAO-13-826R. Washington, D.C.: September 6, 2013. Presidential Helicopter Acquisition: Program Makes Progress in Balancing Requirements, Costs, and Schedule. GAO-13-257. Washington, D.C.: April 9, 2013. Presidential Helicopter Acquisition: Effort Delayed as DOD Adopts New Approach to Balance Requirements, Costs, and Schedule. GAO-12-381R. Washington, D.C.: February 27, 2012. Defense Acquisitions: Application of Lessons Learned and Best Practices in the Presidential Helicopter Program. GAO-11-380R. 
Washington, D.C.: March 25, 2011.
Why GAO Did This Study The mission of the presidential helicopter fleet is to provide safe, reliable, and timely transportation in support of the President. The Navy plans to acquire a fleet of 23 VH-92A helicopters to replace the current Marine Corps fleet of VH-3D and VH-60N aircraft. Initial delivery of VH-92A presidential helicopters is scheduled to begin in fiscal year 2020 with production ending in fiscal year 2023. The total cost of this acquisition program was originally estimated at almost $5.2 billion. The National Defense Authorization Act of 2014 included a provision for GAO to report on the VH-92A program annually, until the Navy awards the full-rate production contract. This report discusses (1) the extent to which the program is meeting its cost and schedule goals and (2) challenges facing the program in system development. To determine how the program is progressing, GAO analyzed program documents; and spoke with officials from the program office, the Defense Contract Management Agency, contractors, Director, Operational Test and Evaluation, and Department of Defense, Developmental Test and Evaluation. GAO also assessed the program's integrated master schedule against GAO best practices. What GAO Found Acquisition cost estimates for the Presidential Helicopter Replacement Program (also known as the VH-92A) have declined from $5.18 billion to $4.95 billion, for 23 new helicopters, since the program started in April 2014 (see table), and the program remains within its planned schedule. The contractor attributes this cost decrease to several factors: stable requirements, a low number of design changes, and program efficiencies. The program has delayed some program milestones—for example, its low-rate production decision—by 5 months from its original baseline goal. Although this remains within the approved schedule, the program will have less time than planned between the end of development testing and start of operational assessment. 
Program officials told GAO they expect to have enough information from both the government-led integrated testing and the operational assessment to inform the low-rate production decision. Continuing development challenges concerning performance requirements may affect whether the program can deliver fully capable aircraft on time in the future. These include: VH-92A start procedures: As we reported last year, the VH-92A was pursuing technical improvements related to Sikorsky's S-92A propulsion system, which has yet to meet a VH-92A performance requirement. Program risk for this performance requirement has not changed since our April 2018 report on the program. Landing zone suitability: As GAO found in 2018, the program has not yet met a key system capability requirement for landing the helicopter without damaging the landing zone—for example, the White House South Lawn. According to program officials, Sikorsky plans to have a solution for this performance requirement by November 2020. Mission communications system: The VH-92A program has experienced problems connecting the aircraft's communication system to secure networks, due to changes in network security requirements, presenting a new risk area for the program. The Navy anticipates having a fix by January 2020. These changes are expected to be incorporated into the four production representative helicopters being built under the development contract in time for the program's initial operational test and evaluation. What GAO Recommends GAO is not making any recommendations in this report, but will continue to monitor the potential cost growth and schedule delays as the program responds to challenges meeting capability requirements.
gao_GAO-20-299
gao_GAO-20-299_0
Background Our nation’s critical infrastructure refers to the systems and assets, whether physical or virtual, so vital to the United States that the incapacity or destruction of them would have a debilitating impact on our security, economic stability, public health or safety, or any combination of these factors. Critical infrastructure includes, among other things, banking and financial institutions, telecommunications networks, and energy production and transmission facilities, most of which are owned and operated by the private sector. Threats to the systems supporting our nation’s critical infrastructures are evolving and growing. These systems are susceptible to unintentional and intentional threats, both cyber and physical. Unintentional, or nonadversarial, threat sources include equipment failures, software coding errors, or the accidental actions of employees. They also include natural disasters and the failure of other critical infrastructures, since the sectors are often interdependent. Intentional or adversarial threats can involve targeted and untargeted attacks from a variety of sources, including criminal groups, hackers, and disgruntled employees. Adversaries can leverage common computer software programs to deliver a threat by embedding exploits within software files that can be activated when a user opens a file within its corresponding program. Due to the cyber-based threats to federal systems and critical infrastructure, the persistent nature of information security vulnerabilities, and the associated risks, GAO first designated federal information security as a government-wide high-risk area in our biennial report to Congress in 1997. In 2003, we expanded this high-risk area to include the protection of critical cyber infrastructure and, in 2015, we further expanded this area to include protecting the privacy of personally identifiable information. 
We continue to identify the protection of critical cyber infrastructure as a high-risk area, as shown in our March 2019 high-risk update.

Federal Law and Policy Assign Responsibilities for the Protection of Critical Infrastructure Sectors

Because the private sector owns the majority of the nation’s critical infrastructure, it is vital that the public and private sectors work together to protect these assets and systems. Toward this end, federal law and policy assign roles and responsibilities for agencies to assist the private sector in protecting critical infrastructure, including enhancing cybersecurity. Presidential Policy Directive 21 establishes the SSAs in the public sector as the federal entities responsible for providing institutional knowledge and specialized expertise. The SSAs lead, facilitate, and support the security and resilience programs and associated activities of their designated critical infrastructure sectors. The directive identified 16 critical infrastructure sectors and designated the nine associated SSAs, as shown in figure 1. In addition, the directive required DHS to update the National Infrastructure Protection Plan to address the implementation of the directive. The directive called for the plan to include, among other things, the identification of a risk management framework to be used to strengthen the security and resilience of critical infrastructure and a metrics and analysis process to be used to measure the nation’s ability to manage and reduce risks to critical infrastructure. DHS, in response, updated the National Infrastructure Protection Plan in December 2013 in collaboration with public- and private-sector owners and operators and federal and nonfederal government representatives, including SSAs, from the critical infrastructure community.
According to the 2013 plan, SSAs are to work with their private-sector counterparts to understand cyber risk and they are to develop and use metrics to evaluate the effectiveness of risk management efforts. To work with the government, the SCCs were formed as self-organized, self-governing councils that enable critical infrastructure owners and operators, their trade associations, and other industry representatives to interact on a wide range of sector-specific strategies, policies, and activities. The SSAs and the SCCs coordinate and collaborate in a voluntary fashion on issues pertaining to their respective critical infrastructure sector. In addition to the directive, federal laws and policies have also established roles and responsibilities for federal agencies to work with industry to enhance the cybersecurity of the nation’s critical infrastructures. These include the Cybersecurity Enhancement Act of 2014 and Executive Order 13636. In February 2013, Executive Order 13636 outlined an action plan for improving critical infrastructure cybersecurity. Among other things, the executive order directed NIST to lead the development of a flexible performance-based cybersecurity framework that was to include a set of standards, procedures, and processes. The executive order also directed SSAs, in consultation with DHS and other interested agencies, to coordinate with the SCCs to review the cybersecurity framework and, if necessary, develop implementation guidance or supplemental materials to address sector-specific risks and operating environments. Further, in December 2014, the Cybersecurity Enhancement Act of 2014 established requirements that are consistent with the executive order regarding NIST’s development of a cybersecurity framework. 
According to this law, NIST’s responsibilities in supporting the ongoing development of the cybersecurity framework included, among other things, identifying an approach that is flexible, repeatable, performance-based, and cost-effective. Additionally, the Cybersecurity Act requires NIST to coordinate with federal and nonfederal entities (e.g., SSAs, SCCs, and ISACs) to identify a prioritized, performance-based approach to include information security measures to help entities assess risk. In May 2017, Executive Order 13800 directed federal agency heads to use the framework to manage cybersecurity risks. The executive order also required them to provide a risk management report to DHS and the Office of Management and Budget within 90 days of the date of the executive order. The risk management report calls for agencies to document the risk mitigation and acceptance choices including, for example, describing the agency’s action plan to implement the framework.

NIST Established a Framework for Improving Critical Infrastructure Cybersecurity

In response to Executive Order 13636, NIST published, in February 2014, the Framework for Improving Critical Infrastructure Cybersecurity, a voluntary framework of cybersecurity standards and procedures for industry to adopt. According to NIST, as of February 2019, the framework had been downloaded more than a half million times since its initial publication in 2014. Additionally, it has been translated into Arabic, Japanese, Portuguese, and Spanish, and has been adopted by many foreign governments. The framework is composed of three main components: the framework core, the implementation tiers, and the profiles. The framework core provides a set of activities to achieve specific cybersecurity outcomes and references examples of guidance to achieve those outcomes.
Through the use of the profile, the framework is intended to help organizations align their cybersecurity activities with business requirements, risk tolerances, and resources. The framework core is divided into four elements: functions, categories, subcategories, and informative references. Functions consist of five elements—(1) identify, (2) protect, (3) detect, (4) respond, and (5) recover. When considered together, these functions provide a strategic view of the life cycle of an organization’s management of cybersecurity risk. Categories are the subdivisions of a function into groups of cybersecurity outcomes tied to programmatic needs and particular activities (e.g., asset management). Subcategories further divide a category into specific outcomes of technical and/or management activities (e.g., notifications from detection systems are investigated). Lastly, informative references are specific sections of standards, guidelines, and practices that illustrate a method to achieve the outcomes described and support one or more subcategories (e.g., NIST Special Publication (SP) 800-53A). Implementation tiers characterize an organization’s approach to managing cybersecurity risks over a range of four tiers. The four tiers are partial, risk informed, repeatable, and adaptive. They reflect a progression from informal, reactive responses to approaches that are flexible and risk-informed. Profiles enable organizations to establish a road map for reducing cybersecurity risks that is well aligned with organizational and sector goals, consider legal/regulatory requirements and industry best practices, and reflect risk management priorities. Organizations can use the framework profiles to describe the current state (the cybersecurity outcomes that are currently being achieved) or the desired target state (the outcomes needed to achieve the desired cybersecurity risk management goals) of specific cybersecurity activities.
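The core hierarchy and the profile comparison described above can be sketched in code. This is an illustrative sketch only: the subcategory identifiers (e.g., "ID.AM-1") follow the framework's naming scheme, but the data structures and the profile values are simplified assumptions, not an official NIST tool.

```python
# Simplified data model of the framework core: function -> category -> subcategory.
core = {
    "identify": {
        "Asset Management (ID.AM)": {
            "ID.AM-1": "Physical devices and systems are inventoried",
            "ID.AM-2": "Software platforms and applications are inventoried",
        },
    },
    "protect": {
        "Identity Management and Access Control (PR.AC)": {
            "PR.AC-1": "Identities and credentials are issued and managed",
        },
    },
}

# Profiles record which subcategory outcomes an organization achieves.
# Comparing the current profile to the target profile yields the gaps --
# the road map the framework's profile mechanism is meant to surface.
current_profile = {"ID.AM-1": True, "ID.AM-2": False, "PR.AC-1": False}
target_profile = {"ID.AM-1": True, "ID.AM-2": True, "PR.AC-1": True}

gaps = [
    sub
    for sub, wanted in target_profile.items()
    if wanted and not current_profile.get(sub, False)
]
print(gaps)  # subcategories to prioritize when closing the gap
```

In this hypothetical example, the gap analysis flags ID.AM-2 and PR.AC-1 as the outcomes to address in moving from the current state to the target state.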
GAO Has Previously Reported on the Development, Promotion, and Adoption of the Cybersecurity Framework

In December 2015, we issued our first report on the development and promotion of the framework in response to the 2014 Cybersecurity Act. We reported that the framework met the requirements established in federal law that it be flexible, repeatable, performance-based, and cost-effective. We also reported that SSAs and NIST had promoted and supported adoption of the cybersecurity framework in the critical infrastructure sectors. For example, we reported that DHS had established the Critical Infrastructure Cyber Community Voluntary Program to encourage adoption of the framework and had undertaken multiple efforts as part of this program. These efforts included developing guidance and tools intended to help sector entities that use the framework. However, we noted that DHS had not developed metrics to measure the success of its activities and programs. Accordingly, we concluded that DHS could not determine if its efforts were effective in encouraging adoption of the framework. We recommended that the department develop metrics to assess the effectiveness of its framework promotion efforts. DHS agreed with the recommendation and subsequently took actions to implement it. We also reported in December 2015 that SSAs had promoted the framework in their sectors by, for example, presenting the framework at meetings of sector stakeholders and holding other promotional events. In addition, all of the SSAs, except for DHS and the General Services Administration (GSA), as co-SSAs for the government facilities sector, made decisions, as required by Executive Order 13636, on whether to develop tailored framework implementation guidance for their sectors. However, we noted that DHS and GSA had not set a time frame to determine, as required by Executive Order 13636, whether sector-specific implementation guidance was needed for the government facilities sector.
We concluded that, by not doing so, DHS and GSA could be hindering the adoption of the framework in this sector. As a result, we recommended that DHS and GSA set a time frame to determine whether implementation guidance was needed for the government facilities sector. Both DHS and GSA agreed with our recommendations and subsequently took actions to implement them. More recently, in February 2018, we issued our second report on the adoption of the framework. We reported that most of the 16 critical infrastructure sectors had taken action to facilitate adoption of the framework by entities within their sectors. We also reported that 12 of the 16 critical infrastructure sectors had taken actions to review the framework and, if necessary, develop implementation guidance or supplemental materials that addressed how entities within their respective sectors can adopt the framework. We also reported that none of the SSAs had measured the cybersecurity framework’s implementation by entities within their 16 respective sectors. We noted that the nation’s plan for national critical infrastructure protection efforts stated that federal and nonfederal sector partners (including SSAs) were to measure the effectiveness of risk management goals by identifying high-level outcomes and progress made toward national goals and priorities, including securing critical infrastructure against cyber threats. However, we reported that none of the 16 coordinating councils reported having qualitative or quantitative measures of framework adoption because they generally did not collect specific information from entities about critical infrastructure protection activities.

Most SSAs Have Not Developed Methods to Determine Framework Adoption

As of November 2019, most of the SSAs had not developed methods to determine their level and type of cybersecurity framework adoption, as we previously recommended.
The SSAs and SCCs identified a number of impediments to developing a comprehensive understanding of the use of the framework, including the voluntary nature of the framework. However, most SSAs have taken steps to encourage and facilitate use of the framework. Further, the 12 selected organizations we interviewed reported either fully or partially using the cybersecurity framework.

Most Sector-Specific Agencies Had Not Determined the Level and Type of Framework Adoption

Best practices identified in the National Infrastructure Protection Plan recommend that entities, such as SSAs and SCCs, take steps to evaluate progress toward achieving their goals—in this case, to implement or adopt the cybersecurity framework. As we previously reported, until the SSAs had a more comprehensive understanding of the use of the cybersecurity framework by entities within the critical infrastructure sectors, they would be limited in their ability to understand the success of protection efforts or to determine where to focus limited resources for cyber risk mitigation. As a result, we recommended that the SSAs take steps to consult with respective sector partner(s), such as the SCCs, DHS, and NIST, as appropriate, to develop methods for determining the level and type of framework adoption by the entities across their respective sectors. However, as of November 2019, most of the SSAs had not developed methods to determine the level and type of framework adoption. Specifically, only two of the nine SSAs—the Department of Defense (DOD) in collaboration with the defense industrial base sector and GSA in conjunction with DHS’s Federal Protective Service—had methods to determine the level and type of framework adoption across their respective sectors.
DOD, in coordination with the defense industrial base sector, had developed a process to monitor the level or extent to which all contracts (not including commercial off-the-shelf contracts) were or were not adhering to the cybersecurity requirements in DOD acquisition regulations. The regulations called for organizations to implement the security requirements in NIST SP 800-171, which is mapped to the functional areas of the cybersecurity framework. By doing so, DOD is able to determine the level at which the sector organizations are implementing the framework and the type of framework adoption through mapping to the functional areas. Additionally, the federal departments and agencies that form the government facilities sector had submitted their risk management reports to DHS and OMB that described agencies’ action plans to implement the framework, as required under Executive Order 13800. The risk management assessments are included as part of OMB’s FISMA Annual Report to Congress. As a result, the reports could be used as a resource to inform the level and type of framework adoption. In addition, two other SSAs had begun taking steps to develop methods to determine the level and type of framework adoption in their sectors. Specifically, in October 2019, DHS, in coordination with its information technology (IT) sector partner, administered a survey to all small and midsized IT sector organizations to gather information on, among other things, framework use and plans to report on the results in 2020. Further, officials in the Department of Transportation’s (DOT) Office of Intelligence, Security, and Emergency Response, in coordination with its co-SSA (DHS), told us that they planned to develop and distribute a survey to the transportation systems sector to determine the level and type of framework adoption. 
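DOD's contract-level roll-up described above can be sketched as follows. This is an illustrative sketch of the kind of aggregation such an approach enables: per-contract assessments against NIST SP 800-171 requirement families mapped onto the framework's five functions. The family-to-function mapping and the findings below are simplified assumptions for illustration, not the official NIST SP 800-171 mapping or real assessment data.

```python
from collections import Counter

# Assumed, simplified mapping from SP 800-171 requirement families to
# framework functions (the real mapping is published by NIST).
FAMILY_TO_FUNCTION = {
    "Access Control": "protect",
    "Audit and Accountability": "detect",
    "Incident Response": "respond",
    "Risk Assessment": "identify",
    "Media Protection": "protect",
}

# Hypothetical findings from contract reviews: (requirement family, met?)
findings = [
    ("Access Control", True),
    ("Audit and Accountability", False),
    ("Incident Response", True),
    ("Risk Assessment", True),
    ("Media Protection", False),
]

# Roll the per-requirement results up into framework functional areas.
met = Counter(FAMILY_TO_FUNCTION[family] for family, ok in findings if ok)
unmet = Counter(FAMILY_TO_FUNCTION[family] for family, ok in findings if not ok)
print(dict(met))    # level of adoption, by framework function
print(dict(unmet))  # remaining gaps, by framework function
```

Because every requirement maps to a functional area, tallying compliance by function gives an SSA both the level (how much is met) and the type (which functions) of framework adoption across assessed contracts.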
DOT officials stated that the draft survey was undergoing DHS legal review and that the completion of the review and subsequent OMB review would determine when the survey is approved for distribution. The remaining five SSAs did not have efforts underway to determine the level and type of framework adoption: Department of Agriculture, Department of Energy, Department of Health and Human Services (HHS), Environmental Protection Agency (EPA), and Department of the Treasury. These SSAs identified impediments to determining framework adoption but also noted steps taken to encourage use of the framework within their respective sector. Department of Agriculture’s Office of Homeland Security officials stated that their sector is diverse and includes over 500 sector members that can range from small farms that are family operated to large corporations that deal with selling food wholesale. The officials noted that the diversity makes it difficult to develop a method for determining the level and type of framework adoption across the sector that would apply to all their members. The framework, however, is adaptive to provide a flexible and risk-based implementation. Accordingly, the framework can be used with a broad array of cybersecurity risk management processes. Agriculture officials added that the SCC frequently invites DHS to semi-annual meetings to present on both the threat to cybersecurity and resources available to support the needs of the sector. Department of Energy’s Office of Cybersecurity, Energy Security, and Emergency Response officials stated that the voluntary nature of the framework made it difficult to determine the level and type of framework adoption. However, the department published the Cybersecurity Capability Maturity Model in May 2012, with the most recent update (version 1.1) published in February 2014.
The model focused on the implementation and management of cybersecurity practices, and was intended to be descriptive, rather than prescriptive, guidance that could be used by organizations of various types and sizes to strengthen their cybersecurity capabilities. The model was designed for organizations to use with a self-evaluation methodology and toolkit to measure and improve their cybersecurity programs and serve as an example for how to implement the framework. In February 2020, officials stated that they were in the process of updating the model and would update the framework implementation guidance once the model has been updated. HHS’s Assistant Secretary for Preparedness and Response (ASPR) officials stated that, since the use of the framework by the private sector is voluntary, organizations were free to choose any cybersecurity framework(s) that they believed to be most effective for their particular environment. However, in December 2018, HHS, in collaboration with NIST, DHS, and the Joint Healthcare and Public Health Cybersecurity Working Group, released a cybersecurity publication (Health Industry Cybersecurity Practices: Managing Threats and Protecting Patients) that contained 10 framework-based best practices for the healthcare and public health sector. This publication allowed stakeholders to identify how to use the framework with existing sector resources by raising awareness and providing vetted cybersecurity practices to enable the organizations to mitigate cybersecurity threats to the sector. In addition, officials from HHS’s ASPR stated that the working group discussed the challenges associated with measuring the use and impact of the NIST framework, and approved the establishment of a task group in 2020 to further investigate the issue. ASPR officials added that some of the ideas discussed included the use of surveys and identification of a set of voluntary reporting indicators.
EPA officials told us that the agency will coordinate with its SCC to identify appropriate means to collect and report information, such as a survey, to determine the level and type of framework adoption. They explained that, in the past, the water sector had expressed concerns with sharing sensitive cybersecurity information and in developing metrics to evaluate cybersecurity practices. However, EPA officials stated that they have conducted training, webcasts, and outreach related to cybersecurity, including using the framework and tailoring its efforts to sector needs. According to EPA officials, the agency’s goal in doing so was to ensure that sector organizations understood the importance of the framework. Department of the Treasury officials noted the size of the financial services sector as an impediment to determining framework adoption. Specifically, officials stated that, because of the large number of members, it is difficult to survey all 800,000 organizations to determine framework adoption. However, officials stated that the department, in coordination with the Financial and Banking Information Infrastructure Committee, and in consultation with NIST, developed the Cybersecurity Lexicon in March 2018. The lexicon addressed, among other things, common terminology for cyber terms used in the framework. Additionally, the financial services sector, in consultation with NIST, created the Financial Services Sector Cybersecurity Profile (profile) in October 2018, which mapped the framework core to existing regulations and guidance, such as the Commodity Futures Trading Commission System Safeguards Testing Requirements. Officials stated that these efforts will facilitate the use of the framework. While the five SSAs have ongoing initiatives, implementing our recommendations to gain a more comprehensive understanding of the framework’s use by critical infrastructure sectors is essential to the success of protection efforts.
Most SSAs Have Taken Steps to Facilitate Use of the Framework

Executive Order 13636 directs SSAs, in consultation with DHS and other agencies, to review the cybersecurity framework and, if necessary, develop implementation guidance or supplemental materials to address sector-specific risks and facilitate framework use. Most of the SSAs developed guidance to encourage and facilitate use of the framework. Specifically, SSAs for 13 of the 16 sectors had developed implementation guidance that included mapping the existing sector cybersecurity tools, standards, and approaches to the framework. For example, the implementation guidance for the healthcare and public health sector provides instruction on how to align a host of existing voluntary or required standards (such as those promulgated pursuant to the Health Insurance Portability and Accountability Act of 1996), guidelines, and practices to the framework core functions. Table 1 describes the 13 sectors and the associated cybersecurity framework implementation guidance. The Cybersecurity Capability Maturity Model helps organizations evaluate and potentially improve their cybersecurity practices. Appendix A of the Energy Sector Cybersecurity Framework Implementation Guidance provides a mapping of the model to the framework. The Financial Services Sector Cybersecurity Profile was created for financial institutions of all sizes to use for cyber risk management assessment and a mechanism to comply with various regulatory frameworks and the NIST Cybersecurity Framework. The remaining three sectors (government facilities, food and agriculture, and IT) had not developed implementation guidance. In this regard, DHS’s Federal Protective Service officials stated that, in 2015, the co-SSAs of the government facilities sector (DHS and GSA) decided that implementation guidance was not needed based on a consensus within the government facilities sector.
DHS’s Federal Protective Service officials added that this decision was reevaluated in 2017 and they determined that the guide was still not needed. Department of Agriculture officials from the Office of Homeland Security stated that the co-SSAs (Agriculture and HHS) and the SCC for the sector collectively decided that a single implementation guidance document was not sufficient for addressing the needs of the diverse membership of the food and agriculture sector and that the creation of such a document was a low priority for the sector. These officials added that, due to the complexity of operations and large number of entities within the sector, the coordinating councils determined that it was more appropriate to refer sector members to DHS's Critical Infrastructure Cyber Community Voluntary Program. DHS officials representing the SSA for the IT sector stated that the SSA and SCC jointly determined that creating formal implementation guidance within the sector was not necessary. They added that the IT sector continued to play an active role by participating in framework development and promotion across the sectors, to include the development of a small and midsize business cybersecurity survey that was issued in 2019. In addition to the above efforts, NIST officials stated that they took steps to encourage framework adoption through three main mechanisms for federal and nonfederal entities and organizations that were interested in the framework: (1) conferences and speaking engagements, (2) requests for information to solicit ways in which organizations are using the framework to improve cybersecurity risk management and how best practices are being shared, and (3) industry and agency events, such as webcasts.

Selected Organizations Described Varying Levels of Use of the Framework

The 12 selected organizations reported either fully or partially using the cybersecurity framework.
Specifically, six organizations reported fully using the framework, whereas six others reported partially using the framework. For example, one organization that reported fully using the framework stated that the framework core, profiles, and tiers were implemented across all the components or business units in the organization. In contrast, one organization that reported partially using the framework stated that it used the framework profiles, but did not fully use the framework core and tiers. Two of the other organizations that reported partially using the framework stated that they considered themselves to be using the framework since they use International Organization for Standardization (ISO) 27001, an international standard that has elements that overlap with those in the framework.

Selected Organizations Reported Improvements but SSAs Have Not Collected and Reported Sector-Wide Improvements Resulting from Framework Use

The 12 selected organizations using the framework reported varying levels of improvements. Such improvements included identifying risks and implementing common standards and guidelines. However, the SSAs have not collected and reported sector-wide improvements as a result of framework use. The SSAs, SCCs, ISACs, and the selected organizations identified impediments to collecting and reporting such improvements, including developing precise measurements of improvement, the voluntary nature of the framework, and lack of a centralized information sharing mechanism. NIST and DHS have identified initiatives to help address these impediments.

Selected Organizations Described Varying Levels of Improvements from Using the Framework

The 12 selected organizations reported varying levels of improvements as a result of using the framework. Specifically, four of the 12 reported great improvement, six reported some improvement, and two reported little improvement.
Examples of each category are described below:

Great improvement: One organization stated that the framework allowed it to determine the current state (the cybersecurity outcomes that are currently being achieved) and the desired target state (the outcomes needed to achieve the desired cybersecurity risk management goals). The organization stated that identifying the current and target states enabled the organization to identify risks and implement common policies, standards, and guidelines across the organization. Officials of the organization also stated that the common language provided by the framework made it easier to communicate within the organization when discussing budgets for cybersecurity, which resulted in budget increases.

Some improvement: One organization explained that the framework is accepted across organizations and that modeling its capabilities against the framework provided assurance that it covered the critical aspects of security. However, the organization noted that, if the framework did not exist, it would have used another framework to protect its critical infrastructure and facilitate decision making.

Little improvement: One organization noted that it already had a very robust risk management process through the use of international standards before using the framework. As a result, the organization stated that use of the framework resulted in little improvement. Another organization that reported little improvement stated that use of the framework helped the organization, but there were no specific improvements that it could identify in protecting its critical infrastructure as a result of using the framework.
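The interview tallies reported above (full versus partial use, and great/some/little improvement) amount to a simple aggregation over per-organization responses. The sketch below shows one minimal way such responses could be rolled up; the response records are hypothetical stand-ins, not the actual interview data.

```python
from collections import Counter

# Hypothetical per-organization responses to a use-and-improvement survey.
responses = [
    {"org": "A", "use": "full", "improvement": "great"},
    {"org": "B", "use": "partial", "improvement": "some"},
    {"org": "C", "use": "partial", "improvement": "little"},
    {"org": "D", "use": "full", "improvement": "some"},
]

# Tally level of use and reported improvement across organizations.
use_levels = Counter(r["use"] for r in responses)
improvements = Counter(r["improvement"] for r in responses)
print(dict(use_levels))    # counts of full vs. partial use
print(dict(improvements))  # counts of great/some/little improvement
```

An SSA could apply the same pattern to sector-wide survey results to report both the level of adoption and the distribution of improvement levels without identifying individual respondents.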
Initiatives Available to Help Address Impediments to Collecting and Reporting on Sector-Wide Improvements

NIST Special Publication 800-55 guidance on performance measurement states that agency heads are responsible for actively demonstrating support for developing information security measures and facilitating performance improvements in their information security programs, which is to include a periodic analysis of data to determine lessons learned. Additionally, the National Infrastructure Protection Plan directed SSAs and their federal and nonfederal sector partners (including SCCs) to measure the effectiveness of risk management goals by identifying high-level outcomes to facilitate the evaluation of progress toward national goals and priorities, including securing critical infrastructure from cybersecurity threats. The SSAs are not collecting and reporting on improvements in the protection of critical infrastructure as a result of using the framework across the sectors. The SSAs, SCCs, ISACs, and organizations reported a number of impediments to identifying sector-wide improvements, including developing precise measurements of improvement, the voluntary nature of the framework, difficulty in measuring the direct impact of using the framework, lack of use cases, and lack of a centralized information sharing mechanism. Figure 2 depicts the number of entities and organizations that identified these five impediments, and is followed by a discussion of each challenge. Two SCCs, two ISACs, and two organizations identified the difficulty of having precise measurements of improvements as a result of using the framework. SCC officials from the communications and healthcare and public health sectors stated that authoritative and precise measurements of improvements are difficult to determine in a consistent and non-subjective manner.
For example, the SCC officials for the healthcare and public health sector stated that they were not aware of a direct or precise form of sector-wide measurements to define success in mitigating cybersecurity risk using the framework within the sector. These officials added that future efforts could include methodologies to track sector-wide improvements based on the framework structure or other cybersecurity guidance. However, officials from NIST’s Information Technology Laboratory stated that they were in the early stages of initiating an information security measurement program to facilitate identifying improvements sector-wide. Officials stated that the program aims to provide foundation tools and guidance to support the development of information security measures that are aligned with an individual organization’s objectives. The officials stated that they had not established a time frame for the completion of the measurement program. They added that, once the program is developed, the SSAs are expected to be able to customize the program and work with their respective sector organizations to determine sector-wide improvements based on their unique objectives. Eight SSAs, two SCCs, and four organizations stated that the voluntary nature of using the framework made it difficult to identify sector-wide improvements. Officials stated that private sector framework adoption was voluntary and, therefore, there were no specific reporting requirements to provide information on improvements. For example, DOT officials from the Office of Intelligence, Security, and Emergency Response stated that, while the department and its co-SSA (DHS) intended to develop a survey to determine sector-wide improvements, consolidating voluntarily shared information will not reflect the depth and breadth of sector stakeholders, as organizations that share information will not collectively represent a sector. 
In April 2019, NIST issued the NIST Roadmap for Improving Critical Infrastructure Cybersecurity, version 1.1, which included a self-assessment tool that provided a mechanism for individual organizations to self-assess how effectively they manage cybersecurity risks in the context of broader enterprise risk management activities and identify improvement opportunities. In addition to the road map, NIST’s framework included a section that encouraged organizations to incorporate measurements of their risks, which can be used to identify sector-wide improvements related to using the framework. In addition, as previously mentioned, DHS, in partnership with its IT sector partners, administered a survey to the small and mid-sized IT sector organizations to gather information on, among other things, framework adoption, challenges, and related improvements. While DHS did not plan to report on the results until 2020, the survey was intended to help the department in identifying improvements across the small and mid-sized IT sector organizations. DHS officials stated that any small or mid-sized business across all critical infrastructure sectors could complete the survey and that the department had promoted the survey to all sectors. Moreover, among all 16 sectors, only DOT and its co-SSA (DHS) had considered the applicability of a similar approach for their sector organizations. Specifically, DOT, in conjunction with DHS, plans to distribute a survey intended to cover framework adoption, challenges, and related improvements across the sector. DOT officials stated that the survey completion is contingent upon DHS’s Transportation Security Administration’s coordination of the review and approval process to meet Paperwork Reduction Act compliance requirements.
Three SSAs, four SCCs, one ISAC, and seven organizations stated that identifying sector-wide improvements as a result of using the framework was difficult because organizations struggled to determine the direct impact of framework use. For example, Department of Energy officials from the Office of Cybersecurity, Energy Security, and Emergency Response stated that the sector cannot relate improvements to any one framework or model because the sector organizations are engaged in numerous concurrent public and private cybersecurity initiatives, each of which could impact cybersecurity to varying degrees. In addition, EPA officials from the Office of Groundwater and Drinking Water stated that most organizations will not be able to link improvements directly to the framework because EPA does not exclusively incorporate the framework into the agency’s sector guidance. The officials added that existing industry standards and best practices are also recognized in the development of EPA cybersecurity guidance. Therefore, although an organization might experience improvements from using elements of the framework, it might not be readily apparent that those improvements came directly from the framework. To provide sector organizations with access to various framework resources, NIST updated its website to include sector-specific implementation guidance and case studies, as well as insights from organizations using the framework. Five organizations identified the lack of use cases as an impediment to determining improvements. For example, one organization stated that small and medium organizations struggled to identify improvements from using the framework because of the lack of use cases (examples of how to determine or measure improvements as a result of using the framework).
To address the challenge, the organization stated that it would be helpful if NIST, in collaboration with federal and nonfederal entities, would share use cases or provide direction on common scenarios that small and medium organizations faced and how these could be addressed through the framework. NIST officials stated that they were in the early stages of developing a cybersecurity framework starter profile for small organizations. NIST officials stated that they did not have a time frame for completing the profile. However, they added that the profile will aim to identify common solutions to a specific challenge, such as threat surface or cybersecurity challenges in cloud computing, using a customized adaptation of the framework. In addition, DHS created a small and midsize business road map for all critical infrastructure sectors in 2018. The road map provided a guide for small and mid-sized businesses to use in enhancing their cybersecurity posture. The road map also included DHS’s cybersecurity information sharing and collaboration program and secure information sharing portal. The purpose of the information sharing and collaboration program was to enable actionable, relevant, and timely unclassified information exchange through trusted public-private partnerships across all critical infrastructure sectors. In addition, the secure information sharing portal served as a forum to share cybersecurity strategies and insights with the critical infrastructure sectors. Five organizations identified the lack of a centralized information sharing mechanism as an impediment. For example, one organization stated that there is a challenge in sharing information among all critical infrastructure sectors in an open and non-judgmental way. To address this challenge, the organization stated that it would be helpful to establish a centralized information sharing mechanism to share and exchange information anonymously.
Another organization added that the challenge with determining improvements is that there is no centralized information sharing mechanism to obtain information. The organization added that it would be helpful to see how organizations compare with one another in terms of goals through this type of mechanism. DHS, however, identified its homeland security information network as a tool that was intended to be the primary system used by entities to collaborate to protect critical infrastructure. Officials in DHS’s Stakeholder Engagement and Cyber Infrastructure Resilience division stated that the information in its homeland security information network could be used by all sectors to report on best practices, including sector-wide improvements and lessons learned from using the framework. Although NIST and DHS have identified initiatives to help address the impediments, the SSAs have not reported on sector-wide improvements. Until they do so, the extent to which the 16 critical infrastructure sectors are better protecting their critical infrastructures from threats will be largely unknown.

Conclusions

Most of the SSAs have not determined the level and type of framework adoption, as we previously recommended. Most of the sectors, however, had efforts underway to encourage and facilitate use of the framework. Even with this progress, implementation of our recommendations is essential to the success of protection efforts. While selected organizations reported varying levels of improvements, the SSAs have not collected and reported sector-wide improvements as a result of framework use. The SSAs and organizations identified impediments to collecting and reporting sector-wide improvements, including the lack of precise measurements of improvement, voluntary nature of the framework, and lack of a centralized information sharing mechanism. However, NIST and DHS have initiatives to help address these impediments.
These included an information security measurement program, cybersecurity framework starter profile, information sharing programs, self-assessment tools, and surveys to support SSAs in measuring and quantifying improvements in the protection of critical infrastructure as a result of using the framework. However, NIST has yet to establish time frames for completing the information security measurement program and starter profile. Moreover, the SSAs have yet to report on sector-wide improvements using the initiatives. Until they do so, the critical infrastructure sectors may not fully understand the value of the framework to better protect their critical infrastructures from cyber threats.

Recommendations

We are making the following 10 recommendations to NIST and the nine sector-specific agencies.

The Director of NIST should establish time frames for completing NIST’s initiatives, to include the information security measurement program and the cybersecurity framework starter profile, to enable the identification of sector-wide improvements from using the framework in the protection of critical infrastructure from cyber threats. (Recommendation 1)

The Secretary of Agriculture, in coordination with the Secretary of Health and Human Services, should take steps to consult with respective sector partner(s), such as the SCC, DHS, and NIST, as appropriate, to collect and report sector-wide improvements from use of the framework across its critical infrastructure sector using existing initiatives. (Recommendation 2)

The Secretary of Defense should take steps to consult with respective sector partner(s), such as the SCC, DHS, and NIST, as appropriate, to collect and report sector-wide improvements from use of the framework across its critical infrastructure sector using existing initiatives.
(Recommendation 3)

The Secretary of Energy should take steps to consult with respective sector partner(s), such as the SCC, DHS, and NIST, as appropriate, to collect and report sector-wide improvements from use of the framework across its critical infrastructure sector using existing initiatives. (Recommendation 4)

The Administrator of the Environmental Protection Agency should take steps to consult with respective sector partner(s), such as the SCC, DHS, and NIST, as appropriate, to collect and report sector-wide improvements from use of the framework across its critical infrastructure sector using existing initiatives. (Recommendation 5)

The Administrator of the General Services Administration, in coordination with the Secretary of Homeland Security, should take steps to consult with respective sector partner(s), such as the Coordinating Council and NIST, as appropriate, to collect and report sector-wide improvements from use of the framework across its critical infrastructure sector using existing initiatives. (Recommendation 6)

The Secretary of Health and Human Services, in coordination with the Secretary of Agriculture, should take steps to consult with respective sector partner(s), such as the SCC, DHS, and NIST, as appropriate, to collect and report sector-wide improvements from use of the framework across its critical infrastructure sector using existing initiatives. (Recommendation 7)

The Secretary of Homeland Security should take steps to consult with respective sector partner(s), such as the SCC and NIST, as appropriate, to collect and report sector-wide improvements from use of the framework across its critical infrastructure sectors using existing initiatives.
(Recommendation 8)

The Secretary of Transportation, in coordination with the Secretary of Homeland Security, should take steps to consult with respective sector partner(s), such as the SCC and NIST, as appropriate, to collect and report sector-wide improvements from use of the framework across its critical infrastructure sector using existing initiatives. (Recommendation 9)

The Secretary of the Treasury should take steps to consult with respective sector partner(s), such as the SCC, DHS, and NIST, as appropriate, to collect and report sector-wide improvements from use of the framework across its critical infrastructure sector using existing initiatives. (Recommendation 10)

Agency Comments and Our Evaluation

We received comments on a draft of this report from the ten agencies to which we made recommendations—the Departments of Agriculture, Commerce, Defense, Energy, Health and Human Services, Homeland Security, Transportation, and the Treasury; and the Environmental Protection Agency and the General Services Administration. Among these agencies, eight agreed with the recommendations, one neither agreed nor disagreed with the recommendation, and one partially agreed with the recommendation. In written comments, the Department of Agriculture generally concurred with the recommendation in our report. The department’s comments are reprinted in appendix II. In written comments, the Department of Commerce concurred with the recommendation in our report. The department stated that the National Institute of Standards and Technology expects to document its cybersecurity measurement program scope, objectives, and approach by about June 2020 and publish two cybersecurity starter profiles by about September 2020. The department’s comments are reprinted in appendix III. In written comments, the Department of Defense concurred with the recommendation in our report and described ongoing steps to evaluate defense organizations’ cybersecurity maturity levels.
The department’s comments are reprinted in appendix IV. In written comments, the Department of Energy partially concurred with the recommendation in our report. The department stated that it will coordinate with the energy sector to develop an understanding of sector- wide improvements from use of the framework. The department, however, stated that implementing our recommendation as written prescribes the SCC as a forum for coordination regarding the framework. Our recommendation is not intended to be prescriptive, but rather, to provide suggestions for consideration. Thus, we have revised the wording of the recommendation to emphasize coordination with other entities, as appropriate. The department also stated that the recommendation implies that improvements from the use of the framework could accurately be attributed to a single initiative, which may be misleading. We do not agree. Our report identifies the challenge of determining the direct impact from framework use and notes that NIST’s website provides the sector organizations with access to various framework resources, to include sector-specific implementation guidance and case studies, as well as insights from organizations using the framework. Hence, organizations can report on improvements from use of the framework using multiple initiatives. Further, the department stated that suggesting government collection and reporting of information regarding adoption or improvements erodes the voluntary character of the framework. We do not agree with this statement. Our report recognizes the voluntary character of the framework but also notes that, without collecting and reporting such information, critical infrastructure sectors may not fully understand the benefits and value of the framework to better protect their critical infrastructures from cyber threats. The department’s comments are reprinted in appendix V. 
In written comments, the Department of Health and Human Services concurred with the recommendation in our report and stated that it would work with the appropriate entities to refine and communicate best practices to the sector. The department’s comments are reprinted in appendix VI. In written comments, the Department of Homeland Security concurred with the recommendation in our report. The department stated that, once it receives the results of the survey on framework adoption that it sent to small- and mid-sized IT sector partners, it will determine the feasibility of issuing similar surveys to other sectors. The department’s comments are reprinted in appendix VII. In written comments, the Department of the Treasury neither agreed nor disagreed with the recommendation in our report. The department stated that it will assess using the identified initiatives and their viability for collecting and reporting sector-wide improvements from use of the framework with input from the SCC and financial regulators. The department added, however, that it does not have the authority to compel financial institutions to respond to inquiries regarding the sector’s use of the framework or resulting improvements. We acknowledge the lack of authority but believe that implementing the recommendation to gain a more comprehensive understanding of the framework’s use by the critical infrastructure sector is essential to the success of protection efforts. The department’s comments are reprinted in appendix VIII. In written comments, the Environmental Protection Agency concurred with the recommendation in our report. The agency stated that it will coordinate with its SCC to investigate options to collect and report sector- wide improvements from use of the cybersecurity framework that are consistent with statutory requirements and the sector's willingness to participate. The agency’s comments are reprinted in appendix IX. 
In written comments, the General Services Administration concurred with the recommendation in our report and stated that it is working with the Department of Homeland Security to develop a plan to address the recommendation. The agency’s comments are reprinted in appendix X. In comments sent via e-mail, the Department of Transportation’s Director of Audit Relations and Program Improvement stated that the department concurred with the recommendation in our report. In addition to the aforementioned comments, we received technical comments from officials of the Departments of Agriculture, Energy, Health and Human Services, Homeland Security, Transportation, and Treasury. We also received technical comments on the report from the Environmental Protection Agency and General Services Administration. We incorporated the technical comments in the report, where appropriate. We are sending copies of this report to the appropriate congressional committees; the Secretaries of Agriculture, Commerce, Defense, Energy, Health and Human Services, Homeland Security, Transportation, and Treasury; the Administrators of the Environmental Protection Agency and General Services Administration; and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6240 or at dsouzav@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix XI. 
Appendix I: Objectives, Scope, and Methodology

Our objectives were to determine the extent to which (1) agencies with lead roles in critical infrastructure protection efforts, referred to as sector-specific agencies (SSAs), have determined the level and type of National Institute of Standards and Technology Cybersecurity Framework (framework) adoption and (2) implementation of the framework has led to improvements to the protection of critical infrastructure from cyber threats. To address the first objective, we analyzed documentation and evidence, such as implementation guidance and survey instruments that discussed actions federal and nonfederal entities have taken since our report in 2018 to develop methods to determine the level and type of adoption across their sectors, as we previously recommended. These entities included nine SSAs; 13 of the 16 Sector Coordinating Councils (SCC) representing all 16 critical infrastructure sectors established in federal policy; the National Institute of Standards and Technology (NIST); and Information Sharing and Analysis Centers (ISAC). We also analyzed documentation from the SSAs and SCCs, such as the Department of Energy’s Cybersecurity Capability Maturity Model and the Department of the Treasury’s Financial Services Sector Cybersecurity Profile. We compared these to best practices, such as the National Infrastructure Protection Plan and the Standards for Internal Control in the Federal Government to determine efforts to facilitate framework adoption across the sectors. We supplemented our review by interviewing officials from these entities to determine any actions taken to determine framework adoption. In addition, we selected six critical infrastructure sectors identified in the 2018 National Cyber Strategy of the United States of America as having critical infrastructure with the greatest risk of being compromised.
The six sectors were (1) communications, (2) financial services, (3) energy, (4) healthcare and public health, (5) information technology, and (6) transportation systems. We asked SCCs, trade associations (e.g., the American Petroleum Institute), and ISACs to provide a list of organizations that were users of the framework. We divided up the list of identified organizations by sector, and we randomly selected one large and one small or medium organization from each sector, resulting in a final list of 12 organizations. We then conducted semi-structured interviews with officials from the selected organizations to understand the extent to which these organizations were using the framework. To address the second objective, we collected and reviewed documentation from NIST and the federal and nonfederal entities, such as NIST’s framework and its April 2019 Roadmap for Improving Critical Infrastructure Cybersecurity, the Department of Homeland Security’s Information Technology Sector Small and Midsize Business Cybersecurity Survey and 2018 Cybersecurity Resources Road Map, and other SSA efforts to determine ongoing efforts to enable the identification and measurement of improvements as a result of using the framework. We compared these efforts to the 2014 Cybersecurity Act and best practices, such as NIST Special Publication 800-55 on performance-based measures to determine the measures the SSAs and SCCs had taken to determine improvements from using the framework. In addition, we interviewed officials from the selected organizations to understand the extent to which they realized improvements as a result of framework adoption and the support the organizations received from federal and nonfederal entities. We also interviewed officials from other federal and nonfederal entities, to include NIST, nine SSAs, 13 of the 16 SCCs, and six ISACs on efforts to measure improvements from use of the framework, and any related challenges.
We conducted this performance audit from January 2019 to February 2020 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Comments from the Department of Agriculture

Appendix III: Comments from the Department of Commerce

Appendix IV: Comments from the Department of Defense

Appendix V: Comments from the Department of Energy

Appendix VI: Comments from the Department of Health and Human Services

Appendix VII: Comments from the Department of Homeland Security

Appendix VIII: Comments from the Department of the Treasury

Appendix IX: Comments from the Environmental Protection Agency

Appendix X: Comments from the General Services Administration

Appendix XI: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Neelaxi Lakhmani (assistant director), Kendrick M. Johnson (analyst in charge), Christopher Businsky, Nancy Glover, Douglas Harris, Ceara Lance, Edward Malone, Gabriel Nelson, Harold Podell, and Dana Pon made key contributions to this report.
Why GAO Did This Study

Cyber threats to the nation's critical infrastructure (e.g., financial services and energy sectors) continue to increase and represent a significant national security challenge. To better address such threats, NIST developed, as called for by federal law, a voluntary framework of cybersecurity standards and procedures. The Cybersecurity Enhancement Act of 2014 included provisions for GAO to review aspects of the framework. The objectives of this review were to determine the extent to which (1) SSAs have developed methods to determine framework adoption and (2) implementation of the framework has led to improvements in the protection of critical infrastructure from cyber threats. GAO analyzed documentation, such as implementation guidance, plans, and survey instruments. GAO also conducted semi-structured interviews with 12 organizations, representing six infrastructure sectors, to understand the level of framework use and related improvements and challenges. GAO also interviewed agency and private sector officials.

What GAO Found

Most of the nine agencies with a lead role in protecting the 16 critical infrastructure sectors, as established by federal policy and referred to as sector-specific agencies (SSAs), have not developed methods to determine the level and type of adoption of the National Institute of Standards and Technology's (NIST) Framework for Improving Critical Infrastructure Cybersecurity (framework), as GAO previously recommended. Specifically, two of the nine SSAs had developed methods and two others had begun taking steps to do so. The remaining five SSAs did not yet have methods to determine framework adoption. Most of the sectors (13 of 16), however, noted that they had taken steps to encourage and facilitate use of the framework, such as developing implementation guidance that links existing sector cybersecurity tools, standards, and approaches to the framework.
In addition, all of the 12 selected organizations that GAO interviewed described either fully or partially using the framework. Nevertheless, implementing GAO's recommendations to the SSAs to determine the level and type of adoption remains essential to the success of protection efforts. The 12 selected organizations using the framework reported varying levels of resulting improvements. Such improvements included identifying risks and implementing common standards and guidelines. However, the SSAs have not collected and reported sector-wide improvements. The SSAs and organizations identified impediments to doing so, including the (1) lack of precise measurements of improvement, (2) lack of a centralized information sharing mechanism, and (3) voluntary nature of the framework. NIST and the Department of Homeland Security (DHS) have initiatives to help address these impediments.

Precise measurements: NIST is in the process of developing an information security measurement program that aims to provide the tools and guidance to support the development of information security measures that are aligned with an individual organization's objectives. However, NIST has not established a time frame for the completion of the measurement program.

Centralized sharing: DHS identified its homeland security information network as a tool that was intended to be the primary system that could be used by all sectors to report on best practices, including sector-wide improvements and lessons learned from using the framework.

Voluntary nature: In April 2019, NIST issued its NIST Roadmap for Improving Critical Infrastructure Cybersecurity, version 1.1, which included a tool for organizations to self-assess how effectively they manage cybersecurity risks and identify improvement opportunities.

While these initiatives are encouraging, the SSAs have not yet reported on sector-wide improvements.
Until they do so, the extent to which the 16 critical infrastructure sectors are better protecting their critical infrastructures from threats will be largely unknown.

What GAO Recommends

GAO is making ten recommendations—one to NIST on establishing time frames for completing selected programs—and nine to the SSAs to collect and report on improvements gained from using the framework. Eight agencies agreed with the recommendations, while one neither agreed nor disagreed and one partially agreed. GAO continues to believe that all ten recommendations are warranted.
Background

For over 20 years, Congress has enacted various laws, and federal agencies have issued guidance, that call for agencies to perform workforce planning activities to help ensure the timely and effective acquisition of IT. These laws and guidance focus on the importance of (1) setting the strategic direction for workforce planning, (2) analyzing the workforce to identify skill gaps, (3) developing strategies to address skill gaps, and (4) monitoring and reporting on progress in addressing skill gaps. For example:

The Clinger-Cohen Act of 1996 requires agency chief information officers (CIO) to annually (1) assess the requirements established for agency personnel regarding knowledge and skills in information resource management and the adequacy of such requirements for facilitating the achievement of performance goals; (2) assess the extent to which the positions and personnel at executive and management levels meet those requirements; (3) develop strategies and specific plans for hiring, training, and professional development to address any deficiencies; and (4) report to the head of the agency on the progress made in improving information resources management capability.

The E-Government Act of 2002 requires the Director of OPM, in consultation with the Director of OMB, the CIO Council, and the Administrator of General Services to (1) analyze, on an ongoing basis, the personnel needs of the federal government related to IT and information resource management; and (2) identify where current IT and information resource management training do not satisfy personnel needs. In addition, the law requires the Director of OMB to ensure that agency heads collect and maintain standardized information on their IT and information resources management workforce.

In 2010, OMB issued its 25-point plan for IT reform and outlined several action plans to build workforce capabilities, including capabilities for acquisition and program management.
For example, the plan directed OPM to create a specialized career path for IT program managers. In addition, OMB stated that it would work with OPM to provide agencies with direct hiring authority for program managers. OMB also tasked agencies with identifying program management competency gaps and reporting to OMB on those gaps. Subsequent to the 25-point plan, in July 2011, OMB released guidance for agencies to develop specialized IT acquisition cadres. Among other things, this guidance required agencies to analyze current acquisition staffing challenges; determine if developing or expanding the use of cadres would improve program results; and outline a plan to pilot or expand cadres for an especially high-risk area, if the agency determined that such an effort would improve performance. Further, in November 2011, OPM issued guidance for developing career paths for IT program managers. OPM’s career path guide was to build upon its IT Program Management Competency Model released in July 2011 by serving as a roadmap for individuals interested in pursuing a career in this area. In addition, the roadmap was to provide employees and their supervisors with a single-source reference to determine appropriate training opportunities for career advancement. In December 2014, Congress enacted legislation commonly referred to as FITARA. Among other things, the law aims to ensure timely progress by federal agencies toward developing, strengthening, and deploying IT acquisition cadres consisting of personnel with highly specialized skills in IT acquisition, including program and project managers. Almost all of the 24 CFO Act agencies (other than the Department of Defense (Defense)) are required to update their annual acquisition human capital plans to address how they are meeting their human capital requirements to support timely and effective acquisitions. 
To assist agencies in implementing the provisions of FITARA and to build upon agency responsibilities under the Clinger-Cohen Act of 1996, OMB issued guidance to agencies in June 2015. In doing so, OMB directed agencies (other than Defense) to, among other things, (1) develop a set of competency requirements for staff, including leadership positions; and (2) develop and maintain a current workforce planning processes to ensure that agencies can anticipate and respond to changing mission requirements, maintain workforce skills in a rapidly developing environment, and recruit and retain the talent needed to accomplish their missions. Each agency is to conduct an annual self-assessment of its conformity with these requirements and develop an implementation plan describing the changes it will make. The Federal Cybersecurity Workforce Assessment Act of 2015 required OPM, with support from the National Institute of Standards and Technology, to establish a coding structure to be used in identifying all federal civilian and noncivilian positions that require the performance of IT, cybersecurity, or other cyber-related functions. The act also required agencies, in consultation with OPM, the National Institute of Standards and Technology, and the Department of Homeland Security (DHS), to then utilize this coding structure to annually assess, among other things, the IT, cybersecurity, and other cyber-related work roles of critical need in their workforce. In April 2016, OPM issued an update to agency chief human capital officers stating that it had recently revalidated the need to continue working to close skill gaps in certain government-wide high-risk mission critical occupations, including those in the cybersecurity and the science, technology, engineering and mathematics functional area. OMB released its Federal Cybersecurity Workforce Strategy in July 2016. 
Among other things, the strategy cited the need for agencies to examine specific IT, cybersecurity, and cyber-related work roles, and identify personnel skills gaps, rather than merely examining the number of vacancies by job series. The strategy identified several actions that agencies could take to identify workforce needs, expand the cybersecurity workforce through education and training, recruit and hire highly skilled talent, and retain and develop highly skilled talent.

In July 2016, OMB issued updated policy for the planning, budgeting, governance, acquisition, and management of federal information, personnel, equipment, funds, IT resources, and supporting infrastructure and services. Among other things, OMB’s updated policy requires an agency’s chief human capital officer, CIO, chief acquisition officer, and senior agency official for privacy to develop a set of competency requirements for staff and develop and maintain a current workforce planning process.

Further, in September 2016, OPM updated its guidance regarding the annual submission of agencies’ mission critical occupation resource charts. These charts are to identify current staffing levels, staffing targets, projected attrition, actual attrition, and retirement eligibility in government-wide and selected agency-specific mission critical occupations.

While these laws and guidance focus on IT workforce planning, other broader initiatives have also been undertaken to improve federal human capital management. For example, we and OPM have developed human capital management models that call for implementing workforce planning practices that can facilitate the analysis of gaps between current skills and future needs. In addition, the models call for the development of strategies for filling the gaps, as well as planning for succession.
Further, our Standards for Internal Control in the Federal Government stress that management should consider how best to retain valuable employees, plan for their eventual succession, and ensure continuity of needed skills and abilities.

Based on the aforementioned laws, guidance, and initiatives, in November 2016, GAO issued an evaluation framework to support the assessment of whether selected federal agencies are adequately assessing and addressing gaps in IT knowledge and skills. The framework identifies four workforce planning steps and supporting activities that address (1) setting the strategic direction for IT workforce planning, (2) analyzing the IT workforce to identify competency and staffing gaps, (3) developing and implementing strategies to address the gaps, and (4) monitoring and reporting progress in addressing the gaps.

GAO Previously Reported on Shortfalls in Federal IT Workforce Planning

We have previously reported that effectively addressing mission critical skill gaps in IT requires a multifaceted response from OPM and agencies. Specifically, our high-risk update in February 2013 noted that OPM and agencies would need to use a strategic approach that (1) involves top management, employees, and other stakeholders; (2) identifies the critical skills and competencies that will be needed to achieve current and future programmatic results; (3) develops strategies that are tailored to address skill gaps; (4) builds the internal capability needed to address administrative, training, and other requirements important to support workforce planning strategies; and (5) includes plans to monitor and evaluate progress toward closing skill gaps and meeting other human capital goals using a variety of appropriate metrics.

In January 2015, we reported that the Chief Human Capital Officers Council had identified skill gaps in six government-wide occupations, including IT/cybersecurity and contract specialist/acquisition.
We noted, however, that the effort had shortcomings, and that it would be important for the council to use lessons learned from these initial efforts to inform subsequent ones to identify skill gaps. We also reported that key features of OPM’s efforts to predict emerging skill gaps beyond those already identified were in the early planning stages, and that OPM and selected agencies could improve the manner in which they address skill gaps by strengthening their use of quarterly data-driven reviews.

Further, we reported that individual agencies across the federal government have not always effectively planned for IT workforce challenges. For example, in May 2014, we concluded that the Social Security Administration’s (SSA) IT human capital program had identified skills and competencies to support certain workforce needs, but lacked adequate planning for the future. The agency had developed IT human capital planning documents, such as an Information Resources Management plan, and skills inventory gap reports that identified near-term needs, such as skill sets for the following 2 years. Nevertheless, SSA had not adequately planned for longer-term needs because its human capital planning and analysis were not aligned with long-term goals and objectives, and the agency did not have a current succession plan for its IT efforts. Accordingly, we recommended that SSA identify long-term IT needs in its updated human capital operating plan. The agency agreed with, and subsequently implemented, the recommendation.

In August 2016, we determined that the Department of Veterans Affairs (VA) had performed key steps, such as documenting an IT human capital strategic plan and regularly analyzing workforce data. However, the agency had not tracked and reviewed historical and projected leadership retirements and had not identified gaps in future skill areas.
We recommended that the agency track and review historical workforce data and projections related to leadership retirements, and identify IT skills needed beyond the current fiscal year, to assist in identifying future skills gaps. The agency concurred with our recommendations and has partially implemented them by identifying the IT skills it needed beyond the current fiscal year.

In November 2016, as a part of the review in which we developed the IT workforce planning framework discussed previously, we assessed five agencies—the Departments of Commerce (Commerce), Defense, Transportation (Transportation), the Treasury (Treasury), and Health and Human Services (HHS)—against the eight key workforce planning activities. While all five agencies had demonstrated important progress in either partially or fully implementing key workforce planning activities, each had shortfalls. For example, only one agency (Defense) had implemented a workforce planning process, none had identified IT competency gaps for their entire workforce, and only three (Defense, Transportation, and Treasury) were performing some level of monitoring toward the closure of identified skill gaps. We reported that, until the agencies fully implemented key workforce planning activities, they would have a limited ability to assess and address gaps in knowledge and skills that are critical to the success of major IT acquisitions. As a result, we recommended that the agencies implement the eight IT workforce planning activities to facilitate the analysis of gaps between current skills and future needs, the development of strategies for filling the gaps, and succession planning. Defense partially agreed with our recommendations, and the other four agencies agreed with them. An updated assessment of actions to implement our recommendations is described in our evaluation of agencies’ implementation of key IT workforce planning activities in appendix II.
In May 2018, as part of a review of the National Aeronautics and Space Administration’s (NASA) approach to overseeing and managing IT, we found that the agency had partially implemented five of the eight key IT workforce planning activities and had not implemented three. For example, NASA had not assessed competency and staffing needs regularly or reported progress to agency leadership. We reported that, until the agency implemented the key IT workforce planning activities, it would have difficulty anticipating and responding to changing staffing needs. As a result, we recommended that NASA fully implement the eight key IT workforce planning activities. The agency disagreed with our recommendation, stating that its workforce improvement activities were already underway. Nevertheless, implementing the workforce planning activities discussed in this report could enhance and complement the agency’s ongoing and future efforts.

In a June 2018 report on the progress of agencies’ efforts to implement the requirements of the Federal Cybersecurity Workforce Assessment Act of 2015, we noted that most CFO Act agencies had developed baseline assessments to identify cybersecurity personnel within their agencies that held certifications. However, because agencies had not consistently defined the workforce and the National Initiative for Cybersecurity Education had not developed a list of appropriate certifications, efforts such as conducting the baseline assessment to determine the percentage of cybersecurity personnel that hold appropriate certifications had yielded inconsistent and potentially unreliable results. Further, we reported that, while most CFO Act agencies had developed procedures for assigning cybersecurity codes to positions, several agencies had not addressed activities required by OPM to implement the requirements of the Federal Cybersecurity Workforce Assessment Act.
As a result, we made 30 recommendations to 13 agencies to develop and submit their baseline assessments and to fully address the required activities in OPM’s guidance in their procedures for assigning employment codes to cybersecurity positions. Of the 13 agencies, seven agreed with the recommendations made to them, four did not state whether they agreed or disagreed, one agency agreed with one of the two recommendations made to it, and one did not provide comments on the report. As of July 2019, the agencies had implemented 20 of the recommendations.

In August 2018, as part of a government-wide review of CIO responsibilities, we reported that CIOs are responsible for assessing agency IT workforce needs and developing strategies and plans for meeting those needs. However, we noted that the majority of the agencies minimally addressed or did not address the role of their CIOs in the area of IT workforce and reported major challenges related to their IT workforce. Specifically, 19 agencies’ policies had not addressed their CIOs’ role in conducting annual assessments of IT management and skill requirements, and the remaining five agencies had only partially addressed this responsibility. We noted that the shortcomings in agencies’ policies were attributable, at least in part, to incomplete guidance from OMB. Consequently, we recommended that OMB issue guidance that addresses the IT workforce responsibilities of CIOs that were not included in existing guidance. OMB partially agreed with the recommendation and has not yet implemented it. We also recommended that 24 agencies ensure that their IT management policies address the role of their CIOs in the IT workforce management area. Of the 24 agencies, 14 agreed with the recommendations, five had no comments, five partially agreed, and one disagreed. We are monitoring the status of the agencies’ actions to implement our recommendations.
In March 2019, as part of an update on the status of agencies’ progress in implementing the requirements of the Federal Cybersecurity Workforce Assessment Act, we reported, among other things, that most of the 24 CFO Act agencies had not completely or accurately categorized work roles for IT positions within the 2210 IT management occupational series (IT management). The agencies reported that this was, in part, because they may have assigned the associated codes in error or had not completed validating the accuracy of the assigned codes. We noted that, by assigning work roles that are inconsistent with the IT, cybersecurity, and cyber-related positions, the agencies were diminishing the reliability of the information they needed to improve workforce planning. We made recommendations to 22 agencies to take steps to address the inaccuracies. Of these agencies, 20 agreed with the recommendations, one partially agreed, and one did not agree with one of the two recommendations. As of August 2019, three of the agencies had implemented their recommendation, and two of the agencies had implemented one of their two recommendations. We continue to believe that all of the recommendations are warranted.

Agencies Had Mixed Progress Implementing IT Workforce Planning Activities

As previously noted, GAO issued an IT workforce planning framework that includes eight key activities, based on federal laws, guidance, and best practices. Implementing these activities is critical to adequately assessing and addressing gaps in IT knowledge, skills, and abilities that are needed to execute a range of management functions that support agencies’ missions and goals. The eight key workforce planning activities are identified in table 1.

None of the 24 agencies that we reviewed had fully implemented all eight IT workforce planning activities.
In this regard, nearly all of the agencies had partially implemented, substantially implemented, or fully implemented three of the workforce planning activities (develop competency and staffing requirements, assess competency and staffing needs regularly, and assess gaps in competencies and staffing). However, most agencies had minimally implemented or did not implement the five other workforce planning activities (including efforts to establish a workforce planning process and address staffing gaps). Figure 1 shows the agencies’ overall implementation of each of the eight key IT workforce planning activities, as of May 2019.

Further, some agencies had made more progress than others. Specifically, while five agencies (Defense, Department of State (State), VA, Small Business Administration (SBA), and SSA) fully implemented or substantially implemented three or more activities, 11 agencies did not fully implement any of the activities, and 15 agencies did not implement three or more activities. Figure 2 identifies the extent to which each of the 24 agencies had implemented the eight workforce planning activities. In addition, appendix II provides our assessment of each agency’s implementation of the activities.

Only One Agency Fully Established and Maintained a Workforce Planning Process

To fully implement the establish and maintain an IT workforce planning process activity, an agency should have a documented IT workforce planning process that describes how the agency will implement key IT workforce planning activities, including those identified in the IT workforce planning framework. The process should also define the CIO’s and others’ roles and responsibilities for implementing the activities; align with mission goals and objectives; and address both the agency-level and component-level workforce, including how the agency is to maintain visibility and oversight into component-level workforce planning efforts (as applicable).
In addition, the agency should periodically update the process.

Only one of the 24 CFO Act agencies had fully implemented this activity. Specifically, one agency had fully implemented the activity (Nuclear Regulatory Commission (NRC)); one agency had substantially implemented the activity (Defense); two agencies had partially implemented the activity (Department of Housing and Urban Development (HUD) and SBA); 12 agencies had minimally implemented the activity (U.S. Department of Agriculture (Agriculture), Commerce, Department of Energy (Energy), HHS, DHS, Department of the Interior (Interior), Department of Labor (Labor), State, Transportation, Treasury, VA, and SSA); and eight agencies did not implement the activity (Department of Education (Education), Department of Justice (Justice), Environmental Protection Agency (EPA), General Services Administration (GSA), NASA, National Science Foundation (NSF), OPM, and U.S. Agency for International Development (USAID)).

NRC fully implemented the activity. In February 2016, NRC developed a strategic workforce plan that addressed all key IT workforce planning activities in our framework. In addition, the process was aligned with the agency’s goals and objectives. Further, the process included general roles and responsibilities, including for the Office of the Chief Human Capital Officer, senior management, and its component offices. Moreover, the agency’s Management Directive 9.22 further defined the Chief Information Officer’s roles and responsibilities with regard to IT workforce planning. In addition, NRC has periodically updated the process. For example, the agency updated the process in July 2017 to better integrate its workload projection, skills identification, human capital management, individual development, and workforce management activities.

Defense substantially implemented the activity.
The agency’s June 2018 Human Capital Operating Plan addressed how Defense plans to implement the workforce planning activities for its functional communities, including the IT functional community. In addition, the plan defined the CIO’s roles and responsibilities and was aligned with the agency’s goals and objectives. Further, the plan documented how the agency will maintain oversight of and visibility into functional community planning efforts. However, it called for the functional communities to develop strategic workforce plans to further define their workforce planning process, and the IT functional community has not yet completed its plan or provided a time frame for completion. With respect to maintaining the process, Defense periodically updated its IT workforce process—the June 2018 plan replaced the process identified in the agency’s previous workforce plans.

SBA partially implemented the activity. In April 2018, SBA released its IT Workforce Plan for fiscal years 2018 through 2020, which addressed how the agency intends to implement all of its IT workforce planning activities and was aligned with the agency’s mission goals and objectives. In addition, in April 2018, the agency released its IT Change Management and Communication Plan, which defined the CIO’s IT workforce planning roles and responsibilities and was aligned with the agency’s mission goals and objectives. However, as it is a new process, SBA had not updated it as of May 2019.

Interior minimally implemented the activity. Interior issued a policy in 2016 that directed its bureaus to develop IT workforce plans, which the agency stated it intends to use to develop an agency-wide IT workforce plan. The policy identified efforts that should be addressed in the plans, including most of the IT workforce planning activities. However, as of May 2019, the bureaus’ plans and the agency-wide plan had not been completed.
Officials in the Office of the CIO stated that they expect to finalize all of the plans by the end of fiscal year 2019.

GSA did not implement the activity. Officials in the Human Capital Strategic Planning Division stated that GSA followed the process described in OPM’s IT workforce planning guidance; however, the agency did not document this in policy and had not developed any other documentation to guide its implementation of workforce planning activities.

Most Agencies at Least Partially Developed Competency and Staffing Requirements for Their IT Staff

To fully implement the develop competency and staffing requirements activity, an agency should develop a set of competency requirements for all or most of its IT workforce, including leadership positions. In addition, the agency should develop staffing requirements, which include projections over several years.

Most of the agencies had fully or substantially developed competency and staffing requirements. Specifically, 12 agencies had fully implemented the activity (Defense, Education, HUD, State, Transportation, Treasury, VA, GSA, NASA, SBA, SSA, and USAID), four agencies had substantially implemented the activity (Agriculture, Commerce, HHS, and DHS), and eight agencies had partially implemented the activity (Energy, Interior, Justice, Labor, EPA, NSF, NRC, and OPM).

State fully implemented the activity. State developed competency requirements for its IT workforce, including for both its foreign and civil services. In addition, State developed staffing requirements for its IT staff, including projections over several years. Specifically, it developed staffing requirements for its mission critical occupations, which include IT management, in response to OPM’s requirement to submit this information annually.

DHS substantially implemented the activity. DHS developed competency requirements for two of the agency’s four IT functional groups.
According to officials in the Office of the CIO, the agency expects to finalize competency requirements for the remaining two groups by the end of fiscal year 2019. In addition, DHS developed staffing requirements for its IT staff, including projections over several years. Specifically, it developed staffing requirements for its mission critical occupations, which include IT management, in response to OPM’s requirement to submit this information annually.

OPM partially implemented the activity. OPM did not develop competency requirements. However, the agency developed staffing requirements for its IT staff, including projections over several years. Specifically, it developed staffing requirements for its mission critical occupations, which include IT management.

Most Agencies Periodically Assessed IT Staffing Needs, but Not Competency Needs

To fully implement the assess competency and staffing needs regularly activity, an agency should periodically assess competency needs for all or most of its IT workforce. In addition, the agency should periodically assess staffing needs for all or most of its IT workforce.

Most of the agencies periodically assessed staffing needs, but did not assess competency needs. Specifically, three agencies had fully implemented the activity (Defense, VA, and SSA); 20 agencies had partially implemented the activity by periodically assessing IT staffing needs, but not periodically assessing competency needs (Agriculture, Commerce, Education, Energy, HHS, DHS, HUD, Interior, Justice, Labor, State, Transportation, Treasury, GSA, NASA, NSF, NRC, OPM, SBA, and USAID); and one agency did not implement the activity (EPA).

VA fully implemented the activity. VA assessed competency needs annually as a part of its professional development planning process. For example, the agency performed an assessment in fiscal year 2017, which led it to add project management as a competency for all IT staff.
In addition, in fiscal year 2018, VA’s assessment resulted in adding two new competencies—data analytics and risk management. Further, VA annually assessed staffing needs for its IT staff in response to the annual OPM reporting requirement to do so.

Commerce partially implemented the activity. The agency initially developed its competency requirements in January 2016, but had not updated its needs since. On the other hand, Commerce annually assessed staffing needs for its IT staff in response to the OPM reporting requirement to do so.

EPA did not implement the activity. EPA did not develop competency needs for its IT workforce. In addition, the agency could not provide documentation showing that it had regularly assessed staffing needs for its IT staff.

Most Agencies Took Steps to Assess Competency and Staffing Gaps

To fully implement the assess gaps in competencies and staffing activity, an agency should periodically assess gaps in competencies for all or most of its IT workforce. Further, the assessment should be performed based on the agency’s current competency needs. In addition, the agency should periodically assess gaps in staffing for all or most of its IT workforce.

Most agencies took steps to assess competency and staffing gaps. Specifically, two agencies had fully implemented the activity (VA and SSA); nine agencies had substantially implemented the activity (Agriculture, Defense, DHS, HUD, State, Transportation, GSA, NASA, and SBA); 12 agencies had partially implemented the activity by periodically assessing IT staffing gaps, but not periodically assessing competency gaps (Commerce, Education, Energy, HHS, Interior, Justice, Labor, Treasury, NSF, NRC, OPM, and USAID); and one agency had minimally implemented the activity (EPA).

SSA fully implemented the activity. SSA assessed gaps in competencies for its IT management staff biennially, starting in fiscal year 2014.
In addition, SSA annually assessed staffing gaps for its IT staff in response to the OPM reporting requirement.

HUD substantially implemented the activity. HUD assessed competency gaps for its IT management staff biennially; it began doing so in fiscal year 2014. However, HUD did not assess competency needs regularly; thus, it could not ensure that the gap assessments reflected the agency’s current competency needs. HUD annually assessed staffing gaps for its IT staff in response to the OPM reporting requirement.

Education partially implemented the activity. Education did not assess gaps in competencies for its IT staff. However, the agency annually assessed staffing gaps for its IT staff in response to the OPM reporting requirement.

EPA minimally implemented the activity. EPA did not assess competency gaps because, as previously stated, the agency did not develop competency requirements. In addition, while EPA assessed staffing gaps in 2018, it did not provide documentation showing that it had assessed staffing gaps prior to or since then.

Most Agencies Did Not Develop Strategies and Plans to Address Competency and Staffing Gaps

To fully implement the develop strategies and plans to address gaps in competencies and staffing activity, an agency should develop strategies and plans, including specific actions and milestones, to address identified competency gaps. In addition, the agency should develop strategies and plans, including specific actions and milestones, to address identified staffing gaps.

Most agencies did not develop strategies and plans to address competency and staffing gaps.
Specifically, four agencies had substantially implemented the activity (Defense, State, VA, and SBA), one agency had partially implemented the activity (Agriculture), six agencies had minimally implemented the activity (HUD, Transportation, EPA, GSA, SSA, and USAID), and 13 agencies did not implement the activity (Commerce, Education, Energy, HHS, DHS, Interior, Justice, Labor, Treasury, NASA, NSF, NRC, and OPM).

State substantially implemented the activity. State identified strategies to address high-priority IT competency gaps, including developing additional training, conducting quarterly reviews of IT workforce issues, and improving hiring processes; however, it had not developed plans, including actions and milestones, for how it would carry out the strategies. With respect to staffing gaps, State identified strategies and plans to address them in its Five-Year Workforce and Leadership Success Plan for Fiscal Years 2016 through 2020. For example, State identified using special hiring initiatives, such as its Pathways Programs, to address staffing gaps. In addition, State developed the Foreign Affairs IT Fellowship Program, which is intended to recruit students by offering internships.

Agriculture partially implemented the activity. In 2019, Agriculture developed strategies, which included providing training and developing career paths, to address competency gaps identified for two of 13 IT functional roles; however, the agency did not develop associated plans, including actions and milestones. Further, Agriculture did not develop strategies to address gaps for the other 11 IT functional roles because the agency had not assessed gaps for those roles. With respect to staffing, in 2019, Agriculture identified strategies to address staffing gaps identified for two of its IT functional roles, including collaborating with universities. However, it did not develop plans to carry out the strategies.
In addition, Agriculture did not develop strategies and plans to address gaps in staffing for its other 11 IT functional roles.

HUD minimally implemented the activity. HUD’s Office of the CIO developed a training plan for fiscal years 2017 through 2018, which identified training courses to address specific technical competency gaps. However, HUD had not updated its competency needs regularly to ensure that the plan and underlying gap assessment reflected the agency’s current competency needs. With respect to staffing, HUD did not develop strategies and plans to address gaps.

DHS did not implement the activity. DHS did not develop strategies and plans to address either competency or staffing gaps.

Most Agencies Minimally Implemented Strategies and Plans to Address Specific Gaps

To fully implement the implement activities that address gaps activity, an agency should execute its strategies and plans to address identified gaps in competencies and staffing. In addition, the agency should implement other efforts to assist with addressing competency and staffing needs, including the following efforts identified in FITARA: IT acquisition cadres, cross-functional training of acquisition and program personnel, career paths for program managers, plans to strengthen program management, and the use of special hiring authorities.

Most of the agencies minimally implemented strategies and plans to address competency and staffing gaps.
Specifically, two agencies had substantially implemented this activity (Defense and VA), seven agencies had partially implemented the activity (HHS, DHS, State, Treasury, SBA, SSA, and USAID), and 15 agencies had minimally implemented the activity (Agriculture, Commerce, Education, Energy, HUD, Interior, Justice, Labor, Transportation, EPA, GSA, NASA, NSF, NRC, and OPM). These 15 agencies had implemented workforce efforts identified in FITARA, but had not implemented strategies and plans to address their identified competency and staffing gaps, primarily because they had not developed such strategies and plans.

VA substantially implemented the activity. VA implemented strategies and plans to address gaps in competencies. For example, in its Office of Information and Technology Training Gap Analysis report, VA identified actions taken to address the prior year’s competency gaps. These actions included developing additional training courses, as well as providing on-the-job training activities. However, VA did not provide documentation showing that it had implemented strategies and plans to address identified staffing gaps. With respect to the efforts identified in FITARA that can assist with addressing competency and staffing needs, VA implemented an IT acquisition cadre, developed plans to strengthen program management, developed a career path for program managers, and used special hiring authorities to hire IT staff.

SSA partially implemented the activity. SSA implemented strategies to address gaps in competencies. For example, according to its gap closure report, the agency closed competency gaps by providing training to existing staff, hiring new staff, and hiring contractors with needed skills. However, SSA did not implement strategies and plans to address staffing gaps because it had not yet developed them.
With respect to the efforts identified in FITARA that can assist with addressing competency and staffing needs, SSA used special hiring authorities to hire eight IT specialists in fiscal year 2018. However, SSA did not implement others, including IT acquisition cadres, cross-functional training of acquisition and program personnel, career paths for program managers, and plans to strengthen program management.

GSA minimally implemented the activity. GSA did not develop strategies and plans to address identified gaps in competencies or staffing. With respect to the efforts identified in FITARA that can assist with addressing competency and staffing needs, GSA implemented efforts to provide cross-functional training for acquisition and program personnel and used special hiring authorities to hire IT staff. However, the agency did not implement others, including plans to strengthen program management or career paths for program managers.

Most Agencies Did Not Establish Processes for Monitoring Progress in Addressing Gaps

To fully implement the monitor the agency’s progress in addressing competency and staffing gaps activity, an agency should track progress in implementing strategies and plans to address competency gaps. In addition, the agency should track progress in implementing strategies and plans to address staffing gaps.

Most agencies did not establish processes for monitoring progress in addressing competency and staffing gaps. Specifically, three agencies had partially implemented the activity (Defense, VA, and SBA), five agencies had minimally implemented the activity (HUD, State, Transportation, SSA, and USAID), and 16 agencies did not implement the activity (Agriculture, Commerce, Education, Energy, HHS, DHS, Interior, Justice, Labor, Treasury, EPA, GSA, NASA, NSF, NRC, and OPM).

SBA partially implemented the activity.
SBA established an IT Workforce Steering Committee which monitored progress made in implementing, among other things, strategies and plans to address competency and staffing gaps. However, the agency did not monitor whether the strategies and plans led to a closure in gaps.

State minimally implemented the activity. While State monitored its progress in implementing recommended actions to address competency gaps, the agency did not monitor whether the actions led to closing gaps. With respect to staffing, State did not monitor progress in addressing gaps because it did not develop strategies and plans to close staffing gaps.

GSA did not implement the activity. GSA did not track progress in addressing competency gaps because the agency did not assess competencies to identify such gaps. Further, GSA did not monitor its progress in addressing staffing gaps because it did not develop strategies and plans to close the gaps.

Most Agencies Did Not Establish Processes for Reporting Progress in Addressing Gaps in Competencies and Staffing

To fully implement the report to agency leadership on progress activity, an agency should periodically report to agency leadership on progress in implementing strategies and plans to address gaps in competencies. In addition, the agency should periodically report to leadership on progress in implementing strategies and plans to address gaps in staffing. However, most of the agencies did not establish processes for reporting their progress in addressing competency and staffing gaps. Specifically, three agencies had partially implemented the activity (Defense, VA, and SBA), three had minimally implemented the activity (HUD, SSA, and USAID), and 18 did not implement the activity (Agriculture, Commerce, Education, Energy, HHS, DHS, Interior, Justice, Labor, State, Transportation, Treasury, EPA, GSA, NASA, NSF, NRC, and OPM).

VA partially implemented the activity.
VA reported to agency leadership on its progress in addressing competency gaps, including the closure of gaps, and the actions planned and taken to address the gaps. However, VA did not report on progress in addressing staffing gaps because it did not implement strategies and plans to address such gaps.

HUD minimally implemented the activity. HUD reported to agency leadership on the closure of competency gaps from fiscal year 2014 through fiscal year 2016. However, the agency did not monitor or report on its progress in implementing strategies and plans to address gaps in competencies. With respect to staffing, HUD did not report on its progress in addressing gaps because it did not implement strategies and plans to close staffing gaps.

DHS did not implement the activity. DHS did not periodically report to agency leadership on its progress in addressing competency or staffing gaps. The agency did not do so because it did not develop strategies and plans to address competency and staffing gaps.

Agencies Identified Various Factors That Limited Implementation of Key IT Workforce Planning Activities

Agency officials cited various factors that limited their progress in implementing the key IT workforce planning activities. For example, six agencies, including DHS and NRC, reported that they had not completed key activities because they were reliant on finishing other prerequisite activities.
For example, officials in DHS's Office of the CIO stated that they had not updated their IT competency needs because they had not yet finished identifying competency requirements for all of the agency's role-based groups. Four agencies, including HHS and NASA, reported that they had other workforce-related priorities, including those related to the Cybersecurity Workforce Assessment Act. Three agencies, including GSA and USAID, reported that they lacked resources to perform the activities. Two agencies (OPM and Interior) reported that leadership turnover affected their implementation of workforce planning activities.

Until agencies make it a priority to implement all of the key IT workforce planning activities, they will likely have a limited ability to assess and address gaps in the knowledge and skills that are critical to the success of major acquisitions. As a result, it will be difficult for agencies to anticipate and respond to changing staffing needs and control human capital risks when developing, implementing, and operating critical IT systems.

Conclusions

The majority of the agencies made significant progress implementing three activities—develop competency and staffing requirements, assess competency and staffing needs regularly, and assess gaps in competencies and staffing—and in doing so took important steps towards identifying the workforce they need to help them achieve their mission, and the gaps that need to be addressed. In contrast, most agencies only minimally implemented or did not implement the remaining five activities, increasing the risk that they will not address the gaps. Agencies' limited implementation of the IT workforce planning activities has been due, in part, to not making IT workforce planning a priority, despite the laws and guidance which have called for them to do so for over 20 years.
Until this occurs, agencies will likely not have the staff with the necessary knowledge, skills, and abilities to support the agency's mission and goals.

Recommendations for Executive Action

We are making a total of 18 recommendations to federal agencies—one recommendation to each of 18 agencies.

The Secretary of Agriculture should ensure that the agency fully implements each of the eight key IT workforce planning activities it did not fully implement. (Recommendation 1)

The Secretary of Education should ensure that the agency fully implements each of the seven key IT workforce planning activities it did not fully implement. (Recommendation 2)

The Secretary of Energy should ensure that the agency fully implements each of the eight key IT workforce planning activities it did not fully implement. (Recommendation 3)

The Secretary of Homeland Security should ensure that the agency fully implements each of the eight key IT workforce planning activities it did not fully implement. (Recommendation 4)

The Secretary of Housing and Urban Development should ensure that the agency fully implements each of the seven key IT workforce planning activities it did not fully implement. (Recommendation 5)

The Secretary of the Interior should ensure that the agency fully implements each of the eight key IT workforce planning activities it did not fully implement. (Recommendation 6)

The Attorney General should ensure that the agency fully implements each of the eight key IT workforce planning activities it did not fully implement. (Recommendation 7)

The Secretary of Labor should ensure that the agency fully implements each of the eight key IT workforce planning activities it did not fully implement. (Recommendation 8)

The Secretary of State should ensure that the agency fully implements each of the seven key IT workforce planning activities it did not fully implement.
(Recommendation 9)

The Secretary of Veterans Affairs should ensure that the agency fully implements each of the five key IT workforce planning activities it did not fully implement. (Recommendation 10)

The Administrator of the Environmental Protection Agency should ensure that the agency fully implements each of the eight key IT workforce planning activities it did not fully implement. (Recommendation 11)

The Administrator of the General Services Administration should ensure that the agency fully implements each of the seven key IT workforce planning activities it did not fully implement. (Recommendation 12)

The Director of the National Science Foundation should ensure that the agency fully implements each of the eight key IT workforce planning activities it did not fully implement. (Recommendation 13)

The Chairman of the Nuclear Regulatory Commission should ensure that the agency fully implements each of the seven key IT workforce planning activities it did not fully implement. (Recommendation 14)

The Director of the Office of Personnel Management should ensure that the agency fully implements each of the eight key IT workforce planning activities it did not fully implement. (Recommendation 15)

The Administrator of the Small Business Administration should ensure that the agency fully implements each of the seven key IT workforce planning activities it did not fully implement. (Recommendation 16)

The Commissioner of the Social Security Administration should ensure that the agency fully implements each of the five key IT workforce planning activities it did not fully implement. (Recommendation 17)

The Administrator of the U.S. Agency for International Development should ensure that the agency fully implements each of the seven key IT workforce planning activities it did not fully implement.
(Recommendation 18)

We are not making new recommendations to six agencies—Commerce, Defense, HHS, Transportation, Treasury, and NASA—because we previously made recommendations to these agencies to address the key IT workforce planning activities.

Agency Comments and Our Evaluation

We provided a draft of the report to the 24 CFO Act agencies for their review and comment. Of the 18 agencies to which we made a recommendation in this report, 13 agencies (Energy, DHS, HUD, Interior, Labor, State, VA, GSA, NSF, OPM, SBA, SSA, and USAID) agreed with the recommendation; one agency (Education) partially agreed with the recommendation; three agencies (Agriculture, Justice, and EPA) neither agreed nor disagreed with the recommendation; and one agency (NRC) did not agree with our findings. We also received technical comments from a number of the agencies, which we have incorporated into the report, as appropriate. In addition, of the six agencies to which we did not make recommendations in this report, two (Defense and Treasury) provided comments on the report and the remaining four (Commerce, HHS, Transportation, and NASA) responded that they did not have any comments on the report.

The following 13 agencies agreed with our recommendations:

In written comments (reprinted in appendix III), Energy concurred with our recommendation. The agency stated that it plans to fully implement all of the IT workforce planning activities, and described recently completed and intended efforts to do so. For example, the agency stated that it completed the development of competency and staffing requirements in May 2019. In addition, the agency said it expects to finish developing an IT workforce planning process in December 2020. While the efforts described represent positive steps toward fully implementing the IT workforce planning activities, Energy did not provide supporting documentation for the activities it said were completed.
As a result, we did not change our ratings for these activities.

In its written comments (reprinted in appendix IV), DHS concurred with our recommendation and stated that it remains committed to fully implementing all of the IT workforce planning activities. Further, the agency stated that it had completed developing competency requirements and assessing gaps for its two remaining IT role-based groups. However, the agency did not provide documentation to support its completion of these activities. As a result, we did not change our ratings for the activities. DHS also stated that the Office of the Chief of Staff Workforce Engagement Division, within the Office of the CIO, plans to work with the agency's Chief Information Officer Council and the Office of Chief Human Capital Officer to form an integrated project team by January 30, 2020. According to DHS, this project team will be charged with discussing the agency's IT workforce planning strategy and outlining an action plan to ensure the strategy addresses all of the key IT workforce planning activities. DHS also provided technical comments which we incorporated, as appropriate.

In written comments (reprinted in appendix V), HUD concurred with our recommendation and stated that it plans to fully implement the remaining workforce planning activities.

In its written comments (reprinted in appendix VI), Interior agreed with our recommendation. The agency stated that it has begun taking steps to implement the IT workforce planning activities and plans to fully implement the remaining activities.

In its written comments (reprinted in appendix VII), Labor concurred with our recommendation. The agency stated that it had made significant progress since the completion of our review and had fully implemented seven of the eight IT workforce planning activities. For example, the agency described efforts to review position descriptions, including identifying key IT competency areas.
In addition, the agency stated that it assessed competency and skills needs, and critical IT skill gaps, as part of an IT workforce supply analysis. Further, Labor stated that, in June 2019, it developed hiring approval and prioritization templates, which require a current workforce and competency assessment, and identified IT competencies with each hiring request. The agency added that hiring managers perform a job analysis prior to posting open positions, and that this includes identifying key IT competencies for each position. Moreover, Labor stated that, in June 2019, the Secretary approved the use of direct hire authority for IT Specialists. In addition, the agency said that the Office of the CIO and the Chief Human Capital Officer finalized an action plan in March 2019 that identified strategies to address IT workforce gaps. Further, it stated that progress had been monitored in weekly discussions with and oversight from the Chief Information Officer and Chief Human Capital Officer. However, while the actions described indicate progress toward fully implementing the workforce planning activities, the agency did not provide evidence to support the actions it said it had taken. As a result, we did not change our ratings for the activities.

In written comments (reprinted in appendix VIII), State agreed with our recommendation and described steps it said the agency is taking to implement the IT workforce planning activities. These steps included developing an IT strategic workforce plan that it expected to finalize by the end of fiscal year 2019. Further, the agency stated that it had substantially implemented the report to agency leadership on progress in addressing the competency and staffing gaps activity, which we assessed as not implemented. As evidence, the agency stated that departmental leadership is briefed regularly on efforts made to address IT competency gaps. However, State did not provide supporting documentation for these activities.
As a result, we did not change our ratings for these activities.

In written comments (reprinted in appendix IX), VA concurred with our recommendation. However, the agency said it believed that it had fully implemented each of the five IT workforce planning activities we rated as less than fully implemented. Specifically:

With regard to establishing and maintaining an IT workforce planning process, VA stated that its Office of Information and Technology had fully implemented a workforce planning process, including developing and implementing strategies to address gaps in competencies and staffing. The agency submitted two documents as supporting evidence: the Office of Information and Technology's Human Capital Management Recruitment Strategy, which we reviewed during our engagement and determined did not sufficiently address the criteria; and the Office of Information and Technology's Human Capital Strategic Plan for fiscal years 2014 through 2020, a document that it had not previously provided to us. We reviewed this document but have questions we need to follow up on with VA to determine whether the agency has fully implemented the activity. As a result, we did not change our rating for this activity.

With regard to developing strategies and plans to address gaps in competencies and staffing, VA stated that, for projected staffing gaps, it has developed initial plans for deploying internal employee growth mechanisms. In addition, the agency stated that, because it anticipates no authorized staffing growth for fiscal years 2020 and 2021, the primary focus of its workforce strategies will be on delivering IT services in a growing environment while experiencing no authorized staff growth. Further, the agency stated that, due to its low vacancy rate, its emphasis will change from filling gaps to sustaining services while controlling workforce attrition.
While the actions described may be sufficient to fully implement the activity, VA did not provide documented plans to address projected staffing gaps; as a result, we did not change our rating for this activity.

With regard to implementing activities that address gaps, the agency stated that its Office of Information and Technology Human Capital Management Recruitment Strategy outlines talent acquisition approaches leveraged within the office to address staffing gaps. We analyzed this document during our review and, as noted in our report, found that it identified actions taken to address the prior year's gaps, but it did not provide documentation showing that VA had implemented strategies and plans to address projected staffing gaps. As a result, we did not change our rating for this activity.

With regard to monitoring the agency's progress in addressing competency and staffing gaps, the agency stated that it has fully implemented the activity because it believes it has fully implemented the aforementioned dependent activities. However, as previously stated, we did not change our ratings for the other activities based on information that VA provided. Accordingly, we did not change our rating for this activity.

With regard to reporting to agency leadership on progress in addressing competency and staffing gaps, VA stated that, in June 2019, its Office of Information and Technology briefed the agency's Chief Information Officer and senior leadership on the preliminary results of data collection that is expected to ultimately result in a staffing model which accurately depicts the current array of the office's workforce, requirements to perform the mission, functions, tasks assigned, and the associated staffing gap. However, the agency did not provide documentation supporting this activity. As a result, we did not change our partially implemented rating for the activity.
In written comments (reprinted in appendix X), GSA agreed with our recommendation and stated that it has established a project team to implement the remaining workforce planning activities.

In comments provided via email on September 12, 2019, the Liaison to GAO in NSF's Office of the Director, Office of Integrative Activities, stated that the agency agreed with our recommendation. The liaison added that NSF had recently completed an iteration of an IT workforce plan that is to inform its processes going forward, and address many of the IT workforce planning activities. The liaison also stated that NSF recognizes the importance of IT workforce planning and will continue to implement improvements to its processes in this area.

OPM provided written comments (reprinted in appendix XI) in which the agency stated that it concurred with the recommendation. In addition, the agency stated that, to address its shortcomings, it has partnered with GSA's IT Modernization Center of Excellence to assess the current state of its IT workforce planning activities. The agency stated that this effort is intended to assist with identifying and addressing gaps.

In its written comments (reprinted in appendix XII), SBA agreed with the recommendation. The agency stated that its Office of Human Resource Solutions and the Office of the CIO will continue unified efforts to fully implement the remaining seven key IT workforce planning activities noted in our report. SBA added that it expects to complete the efforts by the end of fiscal year 2021. SBA also provided technical comments which we incorporated, as appropriate.

SSA provided written comments (reprinted in appendix XIII) in which it agreed with the recommendation. The agency stated that it planned to finish developing an IT Workforce Strategy by the end of fiscal year 2019, which is to provide a framework to address its future IT workforce needs.
In addition, the agency stated that, in fiscal year 2020, it expects to begin implementation of activities to address our findings. SSA also provided technical comments which we incorporated, as appropriate.

In written comments (reprinted in appendix XIV), USAID stated that it concurred with the recommendation. The agency said that it was taking actions to fully implement each of the seven IT workforce planning activities that we identified as not fully implemented. USAID added that it expects to complete these actions by the end of the first quarter of fiscal year 2021.

One agency—Education—partially agreed with the recommendation. Specifically, in written comments (reprinted in appendix XV), Education stated that it has taken actions to address the workforce planning activities. For example, with regard to the assess competency and staffing needs regularly activity, the agency stated that, in fiscal years 2018 and 2019, it conducted assessments of competency and staffing needs for employees coded as cybersecurity employees. However, the agency did not provide supporting documentation, including documentation showing that it had assessed or updated competency needs since they were originally developed. As a result, we did not change our rating for the activity.

For the assess gaps in competencies and staffing activity, Education stated that it conducted a two-part competency assessment of all employees with cybersecurity responsibilities in March 2019. However, the agency did not provide documentation of the assessment. As a result, we did not change our rating for the activity.

With regard to developing strategies and plans to address gaps in competencies and staffing, Education stated that, in April 2019, it submitted to OPM its action plan to address competency and staffing gaps identified in its Cybersecurity Work Roles of Critical Need report. However, the agency did not provide documentation of the plan.
As a result, we did not change our rating for the activity. In addition, the agency described its planned efforts to fully implement the remaining IT workforce planning activities, including developing an IT workforce planning process and monitoring and reporting on progress in addressing competency and staffing gaps.

Three agencies commented on our findings but did not state whether they agreed or disagreed with our recommendations:

In comments provided via email on September 6, 2019, the Director of Strategic Planning, Policy, E-government and Audits in Agriculture's Office of the CIO stated that the agency concurred with our findings. In addition, the agency provided technical comments, which we have incorporated in the report as appropriate.

In comments provided via email on August 26, 2019, an official from Justice's Office of the CIO stated that the agency concurred with our findings.

In comments provided via email on September 5, 2019, the GAO liaison coordinator for EPA's Office of Mission Support provided comments on the findings. The agency stated that, in April 2019, it submitted two action plans to address Cybersecurity Work Roles of Critical Need to OPM which it believes address the eight IT workforce planning activities. For example, with regard to the establish and maintain a workforce planning process activity, the agency stated that the workforce action plans present a model on how the agency plans to fill critical needs related to IT and application project management, and information systems security. While the action plans describe efforts to be performed to address gaps for specific work roles of critical need, they do not describe an overall IT workforce planning process for the agency, to include how the agency will continue to develop its competency and staffing requirements, assess for gaps, and develop strategies and plans to address the gaps. As a result, we did not change our rating for the activity.
Further, with regard to the remaining workforce planning activities, the agency stated that the action plans, which it had not previously provided during the course of our review, include actions and milestones focusing on evaluating skill gaps and assessing current training and development opportunities. However, the agency did not provide documentation of the underlying IT competency requirements or competency gap assessments used to identify the gaps. As noted in our report, if an agency has not developed competency requirements, it is not able to implement the subsequent activities relating to competencies. On the other hand, the agency has developed staffing requirements, and as a result we have updated our rating for the staffing evaluation criterion within the develop strategies and plans to address gaps in competencies and staffing activity. However, EPA did not provide documentation showing that it had implemented the strategies and plans to address staffing gaps, or monitored and reported on progress in addressing staffing gaps. As a result, we did not change our ratings for these activities.

One agency did not agree with our findings. Specifically, in its written comments (reprinted in appendix XVI), NRC stated that it did not agree with the findings that it had not developed an IT workforce planning process or IT competency requirements. With regard to the IT workforce planning process, we noted in our report that NRC had developed a workforce planning process that addressed all the key IT workforce planning activities; however, we stated that the process did not define the Chief Information Officer's roles and responsibilities for implementing the activities or how the plan aligns with mission goals and objectives.
In its response, the agency stated that its Management Directive 9.22, which was not provided to us during our review, defines the Chief Information Officer's roles and responsibilities for implementing activities, including workforce planning. According to NRC, those responsibilities include developing and maintaining the agency's IT/Information Management Strategic Plan and enterprise IT/Information Management roadmap in alignment with the NRC Strategic Plan, and reviewing all positions with IT responsibilities requested in the budget request to ensure the positions meet the ongoing requirements of the agency. We reviewed the directive and determined that it addresses the Chief Information Officer's roles and responsibilities. In addition, NRC identified parts of its workforce planning process that it believes address alignment with mission and goals. We reviewed these parts and agree with NRC that the plan addresses alignment with mission and goals. We have incorporated the change into this report, including changing the rating from partially implemented to fully implemented for this activity. As a result, we modified the recommendation from fully implementing the eight activities NRC did not implement to fully implementing the seven activities it did not fully implement.

With regard to developing competency requirements, the agency stated that it specifies competencies for all IT positions in its position descriptions. However, NRC did not provide documentation of the position descriptions or the related competencies. As a result, we are not changing our not implemented rating for this activity. NRC also noted that it has joined other federal agencies to develop career paths and competency models for 64 IT security roles across the federal government, and that this effort is scheduled to be completed in October, at which time the agency will decide which of the models to adopt.

In addition, the following two agencies to which we made recommendations in prior reports provided comments.
In its written comments (reprinted in appendix XVII), Defense stated that it concurred with the overall contents of the report.

In comments provided via email on September 5, 2019, an official from Treasury's Office of the CIO stated that the agency agreed with all but two of our findings in this report, associated with three of the activities. First, the agency disagreed with our finding that it minimally implemented the establish and maintain a workforce planning process activity, stating that it has a department-wide workforce planning process that includes the IT workforce. However, while the agency issued a policy in 2013, which we reviewed during our engagement, that directs bureaus to annually conduct workforce planning, it did not define a process for doing so. In addition, as we further note, in 2018 the agency issued guidance addressing workforce planning issues for bureaus to consider in developing their own processes. However, this guidance does not constitute an IT workforce planning process. Since Treasury did not provide any additional evidence of an IT workforce planning process, we are not changing our rating for this activity. Second, Treasury disagreed with our finding that it did not implement the activities associated with monitoring and reporting on its progress in addressing competency and staffing gaps. Specifically, the agency stated that it has designed and begun implementing a new governance structure for workforce management that reinforces the monitoring and reporting of workforce-related issues to agency leadership during quarterly performance reviews. However, as we note in our report, the monitoring and reporting activities are dependent on the developing strategies and plans to address competency and staffing gaps activity, which Treasury has yet to implement. Until Treasury develops such strategies and plans, it cannot monitor and report on their progress.
We are sending copies of this report to interested congressional committees, the Director of the Office of Management and Budget, the secretaries and agency heads of the departments and agencies addressed in this report, and other interested parties. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov.

Should you or your staffs have any questions on information discussed in this report, please contact me at (202) 512-4456 or HarrisCC@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix XVIII.

Appendix I: Objective, Scope, and Methodology

Our objective was to examine the extent to which federal agencies are effectively implementing information technology (IT) workforce planning activities. To address this objective, we relied on practices from GAO's IT workforce planning framework as criteria. The framework identifies eight key IT workforce planning activities that, when effectively implemented, can facilitate the success of major acquisitions. These activities are listed in table 2. To ensure consistent understanding and application of the activities in our evaluations, we reviewed the supporting laws, policy, and guidance for each activity and identified specific evaluation criteria. The criteria are listed in table 3.

We reviewed IT workforce planning policies and other workforce planning documentation for each of the 24 Chief Financial Officers Act of 1990 agencies, including workforce planning processes; competency requirements; annual mission critical occupation resource charts required by the Office of Personnel Management (OPM), which document staffing requirements and gap assessments; strategies and plans to address gaps; and reports on progress in addressing gaps.
For the six agencies for which we previously performed IT workforce planning assessments, we reviewed the previously reported information and obtained and analyzed updates, as appropriate. We compared the information obtained to our evaluation criteria and identified gaps and their causes. We also interviewed cognizant officials from each of the 24 agencies to discuss their implementation of the IT workforce planning activities and causes for any gaps. Our review focused on each agency's IT workforce planning efforts at the agency level, including the extent to which the agency maintained visibility and oversight into component-level IT workforce planning.

Based on our assessment of the documentation and discussions with agency officials, we assessed each agency's implementation of our evaluation criteria as:

fully implemented—the agency provided evidence which showed that it fully or largely addressed the elements of the criteria.

partially implemented—the agency provided evidence that showed it had addressed at least part of the criteria.

not implemented—the agency did not provide evidence that it had addressed any part of the criteria.

To determine an overall rating for each of the eight key workforce planning activities, we summarized the results of our assessments of the evaluation criteria. Specifically, we assessed each activity as:

fully implemented—the agency fully implemented both of an activity's evaluation criteria.

substantially implemented—the agency fully implemented one of an activity's evaluation criteria and partially implemented the other evaluation criterion.

partially implemented—the agency fully implemented one of an activity's evaluation criteria and did not implement the other criterion, or partially implemented both of an activity's evaluation criteria.

minimally implemented—the agency partially implemented one of an activity's evaluation criteria and did not implement the other evaluation criterion.
not implemented—the agency did not implement either of an activity's evaluation criteria.

We assessed the staffing evaluation criteria for the develop competency and staffing requirements, assess competency and staffing needs regularly, and assess gaps in competencies and staffing activities as fully implemented if agencies provided evidence of a complete mission critical occupation resource chart meeting OPM reporting requirements and were able to demonstrate that the mission critical staff represented most or all of their IT workforce. In addition, we assessed the competency evaluation criteria for these activities as fully implemented if agencies provided evidence that they performed them for most or all of their IT workforce. For the implement activities that address gaps activity, we assessed agencies as having fully implemented the evaluation criterion on other efforts if they provided evidence of having implemented at least four of the efforts identified in the Federal Information Technology Acquisition Reform Act (FITARA). We rated this criterion as partially implemented if agencies provided evidence of having implemented fewer than four of the efforts.

Finally, in making our assessments, we also considered the extent to which an agency had implemented prerequisite activities. For example, to implement the competency evaluation criterion for the develop strategies and plans to address gaps activity, the agency needed to have also implemented the competency evaluation criterion for the assess gaps in competencies and staffing activity. We did not assess any activity higher than the prerequisite activity. We also determined whether a common factor led to the rating for a particular activity. For example, we noted whether most agencies partially implemented an activity because they had fully implemented one of the evaluation criteria but had not implemented the other.
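The combination rules above amount to a simple mapping from a pair of criterion-level assessments to an activity-level rating. A minimal sketch, assuming each criterion is assessed as "fully," "partially," or "not" implemented (the function name and rating strings are our own shorthand, not GAO terminology beyond the ratings themselves):

```python
# Illustrative sketch of the activity-level rating rules: each key IT
# workforce planning activity has two evaluation criteria, each assessed
# as "fully", "partially", or "not" implemented.

def rate_activity(criterion_a: str, criterion_b: str) -> str:
    """Combine two criterion-level assessments into an activity rating."""
    pair = sorted([criterion_a, criterion_b])  # order of the criteria does not matter
    if pair == ["fully", "fully"]:
        return "fully implemented"
    if pair == ["fully", "partially"]:
        return "substantially implemented"
    if pair in (["fully", "not"], ["partially", "partially"]):
        return "partially implemented"
    if pair == ["not", "partially"]:
        return "minimally implemented"
    return "not implemented"  # pair == ["not", "not"]
```

Note that the prerequisite rule described above acts as an additional cap on this mapping: an activity is never rated higher than its prerequisite activity.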
To determine the reliability of staffing data in the mission critical occupation resource charts, we reviewed the charts for obvious errors and for completeness and obtained clarification from agencies on identified errors. We determined that the data were sufficiently reliable for the purpose of this report, which was to determine the extent to which agencies had implemented the key activities.

We conducted this performance audit from January 2018 to October 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Agencies' Implementation of Key IT Workforce Planning Activities

This appendix contains assessments of the extent to which the 24 Chief Financial Officers Act of 1990 agencies implemented each of the eight key IT workforce planning activities identified in GAO's information technology (IT) workforce planning framework.
Appendix III: Comments from the Department of Energy

Appendix IV: Comments from the Department of Housing and Urban Development

Appendix V: Comments from the Department of Homeland Security

Appendix VI: Comments from the Department of the Interior

Appendix VII: Comments from the Department of Labor

Appendix VIII: Comments from the Department of State

Appendix IX: Comments from the Department of Veterans Affairs

Appendix X: Comments from the General Services Administration

Appendix XI: Comments from the Office of Personnel Management

Appendix XII: Comments from the Small Business Administration

Appendix XIII: Comments from the Social Security Administration

Appendix XIV: Comments from the United States Agency for International Development

Appendix XV: Comments from the Department of Education

Appendix XVI: Comments from the Nuclear Regulatory Commission

Appendix XVII: Comments from the Department of Defense

Appendix XVIII: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the individual named above, the following staff made key contributions to this report: Sabine Paul (Assistant Director), Scott Borre (Analyst in Charge), Rebecca Eyler, Cassaundra Pham, Thomas B. Rackliff, and Marshall Williams, Jr.
Why GAO Did This Study

The federal government annually spends over $90 billion on IT. Despite this large investment, projects too frequently fail or incur cost overruns and schedule slippages while contributing little to mission-related outcomes. Effectively implementing workforce planning activities can facilitate the success of major acquisitions.

GAO was asked to conduct a government-wide review of IT workforce planning. The objective was to determine the extent to which federal agencies effectively implemented IT workforce planning practices. To do so, GAO compared IT workforce policies and related documentation from each of the 24 Chief Financial Officers Act of 1990 agencies to activities from an IT workforce planning framework GAO issued. GAO rated each agency's implementation of each activity as fully, substantially, partially, minimally, or not implemented. GAO supplemented its reviews of agency documentation by interviewing agency officials.

What GAO Found

Federal agencies varied widely in their efforts to implement key information technology (IT) workforce planning activities that are critical to ensuring that agencies have the staff they need to support their missions. Specifically, at least 23 of the 24 agencies GAO reviewed partially implemented, substantially implemented, or fully implemented three activities, including assessing gaps in competencies and staffing. However, most agencies minimally implemented or did not implement five other workforce planning activities (see figure). Agencies provided various reasons for their limited progress in implementing workforce planning activities, including competing priorities (six agencies) and limited resources (three agencies). Until agencies make it a priority to fully implement all key IT workforce planning activities, they will likely have difficulty anticipating and responding to changing staffing needs and controlling human capital risks when developing, implementing, and operating critical IT systems.
What GAO Recommends

GAO is making recommendations to 18 of the 24 federal agencies to fully implement the eight key IT workforce planning activities. Of the 18 agencies, 13 agreed with the recommendations, one partially agreed, three neither agreed nor disagreed, and one disagreed with the findings and provided evidence that led to a modification of its recommendation, as discussed in this report. GAO continues to believe that all of the remaining recommendations are warranted.
Background

NNSA's Organization and Its Process to Oversee Its Professional SSCs

NNSA uses professional SSCs in its program offices, headquarters offices, and field offices. Program offices plan and oversee NNSA's numerous programs and projects and are generally responsible for integrating activities across the agency. NNSA's program offices are: Defense Programs; Safety, Infrastructure, and Operations; Defense Nuclear Security; Counterterrorism and Counterproliferation; and Naval Reactors. Headquarters offices generally provide leadership, develop policy and budgets, or provide other functional support across NNSA. The headquarters offices include the offices of: Acquisition and Project Management; Cost Estimating and Program Evaluation; Information Management and Chief Information Officer; Management and Budget; and Policy.

NNSA also has seven field offices across the country. The field offices are responsible for overseeing NNSA's management and operating (M&O) contractors, including ensuring compliance with federal contracts. To provide oversight of the M&Os, each field office employs subject matter experts in areas such as emergency management, physical security, cybersecurity, safety, nuclear facility operations, environmental protection and stewardship, radioactive waste management, quality assurance, business and contract administration, public affairs, and project management. NNSA's field offices are generally located at the sites they oversee. NNSA's field offices are: the Kansas City Field Office in Missouri; the Livermore Field Office in California; the Los Alamos Field Office in New Mexico; the Nevada Field Office; the NNSA Production Office in Tennessee and Texas; the Sandia Field Office in New Mexico; and the Savannah River Field Office in South Carolina.
After an office determines that it has an unmet work need, officials are to consult an agency document that outlines the procedures to determine whether to hire a federal employee or use another hiring option, such as an SSC, to meet the office’s need. If, upon consulting the document, officials determine that an SSC is appropriate for their needs, they are then required to contact a representative from NNSA’s Office of Acquisition and Project Management to begin the procurement process. This office is responsible for acquisition support and contracting oversight for the agency throughout the acquisition lifecycle. NNSA’s Office of Management and Budget also has responsibilities for SSCs through, among other things, assisting offices in determining the appropriate funding source for contracts and providing advice on the development of performance work statements. Performance work statements provide a clear description of the activities that the contractor is expected to undertake and how the contractor’s performance will be assessed. NNSA guidance describes the performance work statement as the most important document in a procurement package, as the performance work statement is considered to be the binding agreement under which the contractor must perform. In addition, officials must submit a selection justification form to NNSA’s Office of Management and Budget for approval. In 2012, NNSA implemented the use of a form specific to SSCs—referred to as a determination form—to help mitigate the risk of awarding SSCs for activities that must be performed by federal employees. The form includes a series of questions to help officials from the office that plans to use the SSC and contracting officers to identify inherently governmental functions when reviewing a performance work statement. 
According to the determination form, if functions contemplated are closely associated with inherently governmental functions, an official must determine that NNSA has sufficient capacity to give special management attention to the contractor's performance to preclude unauthorized personal services. If the support needed includes inherently governmental functions, the agency would not procure the service by contract. After officials confirm the services to be procured do not include work that must be performed by federal employees, officials sign the determination form to indicate that they have sufficient capacity and capability to, among other things, give special management attention to contractor performance, and include the completed form in the contract file.

Once an SSC is awarded, NNSA relies on certain key personnel in various offices to oversee the contractor's performance and ensure that the contractors comply with the terms of a contract. These include:

Contracting officers. Contracting officers work within NNSA's Office of Acquisition and Project Management and have the authority to enter into, administer, and terminate contracts. Contracting officers, along with program office officials, are responsible for determining the level of risk associated with a contract. Further, as part of the acquisition process, the office that identified the need for the SSC works with a contracting official to develop the performance work statement.

Contracting officer's representatives (COR). CORs are nominated by the program office and approved by the contracting officer. CORs are authorized representatives of the contracting officer and have the primary responsibility of overseeing the contractor, assessing performance, accepting deliverables, and reviewing invoices.

Task monitors. Normally assigned by a program office, task monitors assist the COR with oversight of contractor performance.
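The determination-form screening described above is, in essence, a short decision procedure. A minimal sketch, assuming a contemplated function has already been classified; the function and category names are our own shorthand, and NNSA's actual form is a series of questions applied to a performance work statement rather than code:

```python
# Illustrative sketch of the determination-form screening: inherently
# governmental work may never be contracted; closely associated work may
# be contracted only with special management attention; other work may
# be procured under an SSC.

def screen_services(classification: str, has_oversight_capacity: bool) -> str:
    """Decide whether NNSA may procure the contemplated services under an SSC."""
    if classification == "inherently governmental":
        return "do not contract: federal employees must perform the work"
    if classification == "closely associated":
        if has_oversight_capacity:
            return "may contract, with special management attention"
        return "do not contract: insufficient oversight capacity"
    return "may contract"
```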
During the life of a contract, contracting officers and CORs regularly monitor contractors' performance to ensure the contractors are complying with the terms of the contract. This monitoring varies across contracts and can include, for example, reviewing the contractor's monthly invoices or reports and conducting formal annual evaluations. The monitoring activities can also vary based on the types of tasks included in the contract. For example, a contract requiring advanced technical analysis may warrant monitoring that is different from a contract that requires office administrative support. This difference is because the former is a more complex task that may include the review and approval of technical reports or other deliverables. Contracts for office support may not generate such deliverables.

NNSA's Funding Sources for SSCs

NNSA uses three appropriations accounts—or funding sources—to fund its SSCs. The first is NNSA's Federal Salaries and Expenses appropriations account. Funding from this account is also referred to as program direction funding in NNSA's annual budget justification materials. This account is generally used to pay for costs associated with NNSA's federal employees, such as salaries, benefits, travel costs, and training, regardless of whether those federal employees work in headquarters, program, or field offices. The annual congressional budget justification materials define the Federal Salaries and Expenses account as used mostly to support the federal workforce. NNSA also uses this account to fund SSC personnel who provide advice and assistance to a federal employee or work in lieu of a federal employee. Because Federal Salaries and Expenses is the appropriations account used for most costs associated with federal employees, the amount of appropriations for this account helps determine the size of NNSA's federal workforce. In addition, NNSA is subject to a statutory FTE cap on the total number of NNSA employees for each fiscal year.
Congress and the President established a statutory cap in fiscal year 2013 that limited the total number of NNSA employees to up to 1,825 by October 1, 2014, and decreased that number in fiscal year 2015 to up to 1,690, where the number remains. NNSA can exceed the number of FTEs in the cap by submitting to the congressional defense committees a report justifying such excess. The other two sources NNSA uses to fund its SSCs are NNSA's Weapons Activities and Defense Nuclear Nonproliferation appropriations accounts. Funding from these two accounts is referred to in NNSA's annual congressional budget justification materials as program funding.

Weapons Activities account. NNSA uses the Weapons Activities appropriation account to fund programs that provide for: (1) the maintenance and refurbishment of nuclear weapons to continue sustained confidence in their safety, reliability, and performance; (2) the investment in scientific, engineering, and manufacturing capabilities for certification of the enduring nuclear-weapons stockpile; and (3) the manufacture of nuclear weapon components. This account is also used to fund program offices other than the Office of Defense Nuclear Nonproliferation and Naval Reactors.

Defense Nuclear Nonproliferation account. NNSA uses the Defense Nuclear Nonproliferation appropriation account to fund programs: (1) that provide, for example, policy and technical leadership to prevent or limit the spread of materials, technology, and expertise related to weapons of mass destruction; (2) that develop technologies to detect nuclear proliferation; (3) that secure or eliminate inventories of nuclear weapons-related materials and infrastructure; and (4) that ensure a technically trained emergency-management response is available both domestically and worldwide for nuclear and radiological incidents.

Table 1 provides information on the three funding sources and the types of SSCs funded with each source.
Government-Wide Reviews and Internal Studies of SSCs In recent years, we have reported concerns with federal agencies’ use of SSCs. In December 2011, we found that while agencies increasingly relied on contractors to provide professional and management support services, agencies generally did not consider and mitigate risks of acquiring such services, including the risk that contractors may inappropriately influence the government’s authority, control, and accountability for inherently governmental decisions. Additionally, in September 2018, we found that contracts requiring increased management attention, such as contracts for professional and management support services, have posed contractor oversight challenges for federal agencies. In that report, we found that there was an increased risk that contractors may perform tasks reserved for the government under contracts like those for management support services. We also found that the Office of Management and Budget (OMB) had taken steps to help agencies reduce some of the risks associated with contracts warranting increased management attention. For example, in September 2011, OMB’s Office of Federal Procurement Policy issued a policy letter to executive agencies to provide guidance on managing the performance of contractors performing work that is closely associated with inherently governmental and critical functions. The letter directed agencies to employ and train a sufficient number of qualified government personnel to provide active and informed management and oversight of contractors’ performance where contracts have been awarded for functions closely associated with the performance of inherently governmental functions. The September 2011 policy letter also provided guidance intended to clarify when governmental outsourcing for services is and is not appropriate. 
The letter identifies the need to increase management attention to using federal employees when functions that generally are not considered to be inherently governmental approach being in that category because of the nature of the function and the risk that performance may impinge on a federal official's performance of an inherently governmental function. In addition, the policy letter calls for agencies to ensure that they have sufficient internal capability to control their missions and operations for managing critical functions.

In 2013, NNSA's Office of Defense Programs conducted an internal review of its use of nonfederal personnel to accomplish its missions. The study resulted in nine recommendations related to SSCs, including: developing policy on when to use each of the funding sources for SSCs, and policy and guidelines on roles and responsibilities for federal employees; providing training for all NNSA employees on the proper use and management of SSCs; and evaluating current practices for the appearance of inherently governmental functions and terminating any inappropriate practices. As of July 2019, NNSA officials said the agency was working to finalize a policy on when to use each of the funding sources for SSCs. To address the recommendations on the latter two issues, NNSA developed training and guidance documents intended to assist staff in managing and working alongside contractors' staff. Specifically, with regard to training, NNSA developed training for all NNSA's federal employees to ensure that those employees understand the role of SSCs in the offices. This training covers, among other things, appropriate behavior and activities for federal staff who work alongside contractor personnel. With regard to guidance, NNSA developed documents that explain appropriate interactions with contractor personnel.
For example, NNSA prepared a tip sheet for all staff to assist with maintaining proper relationships with SSC personnel; among other things, the tip sheet advises staff to respect the relationship between a contractor and its employees. NNSA also developed a contracting guide in 2014 that provides information on requirements, policies, and procedures, and that covers contracts for different purposes, including SSCs. The guide also includes descriptions of inherently governmental functions. In addition, NNSA's Office of Management and Budget prepared a memorandum in 2014 for NNSA's program offices to clarify the process for approving the funding source for SSCs.

A July 2015 DOE Inspector General review of NNSA's use of SSCs also found potential issues with the management of SSCs, particularly related to contractors' performance of inherently governmental functions. For example, the review found that half of the 20 contracts in its sample included contracted services that approached being inherently governmental. The Inspector General's review reiterated the recommendations in the Office of Defense Programs' study and recommended that NNSA track the corrective actions responding to those recommendations through to completion. According to agency officials and documentation, NNSA has been tracking progress on these recommendations.

In 2018, NNSA completed two workforce studies related to its use of SSCs. A joint workload and organizational analysis by NNSA and the Office of Personnel Management reviewed all program offices' current workloads and federal staffing levels to assess the workforce needs to execute NNSA's missions. The analysis concluded that NNSA did not have enough federal personnel to meet its mission requirements and called for an increase in the number of federal government employee FTEs by 238 over the agency's current statutory cap of up to 1,690, for a total of 1,928.
The analysis also concluded that the need for additional federal FTEs was driven, in part, by new mission requirements. NNSA's Office of Cost Estimating and Program Evaluation also conducted an assessment of the number of federal personnel and contractors' FTE personnel working on SSCs within each of NNSA's program offices, as well as the appropriate workforce balance between federal and contractor FTEs, among other things. This assessment concluded that NNSA should rebalance its workforce by increasing the number of federal personnel to meet current and future missions. NNSA's fiscal year 2020 budget justification materials request 1,753 federal FTEs, an increase of 63 FTEs over the current cap, in order to meet its mission requirements. In our March 2019 High-Risk Update, we stated that Congress should consider working with NNSA to ensure that the statutory cap on staffing is re-examined and is consistent with NNSA's human capital needs, as evaluated in these two studies.

NNSA Increasingly Used SSCs in Fiscal Years 2010 through 2018 Primarily Because of Increased Appropriations and Workload and a Decrease in Authorized Federal Staff

NNSA Increasingly Used SSCs in Fiscal Years 2010 through 2018 for a Variety of Functions

NNSA increasingly used professional SSCs for a variety of functions in fiscal years 2010 through 2018. Specifically, based on our analysis of data from FPDS-NG, NNSA's obligations for SSCs increased from about $139 million in fiscal year 2010 to about $193 million in fiscal year 2018 (see fig. 1). This is an increase of $54 million, or nearly 40 percent, in current dollars. The largest increase in NNSA's obligations for SSCs occurred from fiscal year 2013 to 2014, when obligations for SSCs increased by about $26 million in current dollars—or about 16 percent, when adjusted for inflation to constant 2018 dollars.
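The current- versus constant-dollar comparisons above follow standard deflator arithmetic. A minimal sketch, in which the price-index arguments are hypothetical placeholders rather than the actual indexes GAO used for its constant-2018-dollar figures:

```python
# Illustrative sketch of the report's dollar comparisons. The index
# arguments below are hypothetical placeholders, not the actual price
# indexes GAO used for inflation adjustment.

def percent_change(old: float, new: float) -> float:
    """Percent change between two amounts (current or constant dollars)."""
    return (new - old) / old * 100.0

def to_constant_2018_dollars(amount: float, year_index: float,
                             index_2018: float) -> float:
    """Rescale a current-dollar amount to constant 2018 dollars."""
    return amount * (index_2018 / year_index)

# Current-dollar SSC obligations from the report: about $139 million in
# fiscal year 2010 to about $193 million in fiscal year 2018.
print(round(percent_change(139, 193), 1))  # 38.8 -- i.e., "nearly 40 percent"
```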
As discussed previously, in fiscal year 2013, Congress established a cap on the number of NNSA’s federal FTEs of up to 1,825 by October 1, 2014. After declining from a high of nearly 200 contracts in fiscal year 2010 to 160 in fiscal year 2011, the number of contracts did not fluctuate as much from fiscal years 2011 through 2018. NNSA used SSCs in nearly all of its offices in recent years (see table 2). The Offices of Defense Programs, Acquisition and Project Management, and Defense Nuclear Security together accounted for more than half of the FTE contractor personnel funded through professional SSCs in fiscal years 2015 through 2018. To understand how NNSA used these SSCs, we analyzed the product service codes associated with each of the SSCs. NNSA categorizes each of its SSCs using product service codes that provide some information on the types of tasks to be performed under the contract. NNSA identified 77 codes that define its professional SSCs when it started reporting information on SSCs in its congressional budget justification materials. These codes are arranged in five broad categories: (1) information technology and telecommunications support; (2) environmental consulting and legal support; (3) professional support; (4) administrative support; and (5) management support. Within each category, there are codes for specific activities, as well as a code for “other” support. For example, within the administrative support category, there are specific codes for word processing/typing, paper shredding, and transcription, and there is a separate code for “other” administrative services that is for tasks that do not fit within the other codes. According to several contracting officers and CORs we interviewed, officials try to select the code that best addresses all of the tasks included in the contract; however, most SSCs encompass a variety of tasks, so contracting officers often select the “other” category. 
Further, according to officials, if NNSA awards a task order under an existing contract, the task order has the same product service code as was assigned to the existing contract. As shown in figure 2, based on our analysis of FPDS-NG data, NNSA used three of the 77 product service codes—other professional services, engineering and technical services, and other administrative services—for more than 80 percent of its obligations to SSCs in fiscal year 2018. Because the product service codes encompass a wide range of activities, we reviewed the performance work statements for the 12 contracts in our sample to gain a greater understanding of the types of activities these codes may represent. The 12 contracts in our sample used five product service codes. Within those five product service codes, activities in the performance work statements for the 12 contracts in our sample include:

Other professional services. Budgeting and evaluation analyses, technical support in training emergency response personnel, technical assessments and reviews, and policy analysis. One performance work statement included managing and maintaining databases, statistical analysis of budgetary data for decision makers, and programmatic assessments of data management systems for various programs.

Engineering and technical services. Feasibility studies, acquisition planning, analysis of technical alternatives, project planning, risk analysis, general design support, and document preparation. One performance work statement included providing technical training support to the training program manager in a field office.

Other administrative services. Analyzing the economic aspects of foreign nuclear programs, analyzing and producing reports on nuclear security issues in one region, processing correspondence, and data entry.
One performance work statement included providing administrative and clerical support for functions such as responding to Freedom of Information Act inquiries and providing support for training procurement, development, and evaluation.

Other management support services. Providing technical coordination and document-editing services and reviewing, assessing, and linking government requirements to project documents. One performance work statement included support for maintaining an effective security program, including revising both federal and contractor sites' requirements and procedures for two facilities and the field office.

Program management and support services. Providing technical and advisory assistance in the design, construction, and operation of NNSA facilities for a certain program, technical evaluations, and technical and analytical support. One performance work statement included expert technical and advisory assistance related to the design, construction, and operation of facilities related to a certain program, including working with M&O contractors in engineering, equipment fabrication, construction, and tests.

NNSA Officials Attributed Increased Use of SSCs to Increases in Available Appropriations and Workload and a Decrease in Authorized Federal Staff

According to NNSA officials, NNSA increased its use of SSCs in fiscal years 2010 through 2018 due to: (1) increases in appropriations under the Weapons Activities appropriations account for additional work and (2) a decrease in the number of authorized federal employee FTEs due to a decrease in the statutory cap from fiscal years 2014 to 2015. First, as shown in figure 3, NNSA's total appropriations increased from about $9.9 billion in fiscal year 2010 to $14.7 billion in fiscal year 2018 in current dollars.
The increase in NNSA’s appropriations occurred mainly in the Weapons Activities appropriations account, which increased from $6.4 billion in fiscal year 2010 to $10.6 billion in fiscal year 2018 in current dollars. During the same period, NNSA’s appropriations for Defense Nuclear Nonproliferation generally remained around $2 billion per fiscal year in current dollars, and appropriations for Federal Salaries and Expenses—which covers the costs of all federal employees, including those working on Weapons Activities and Defense Nuclear Nonproliferation programs—remained around $400 million per fiscal year in current dollars. The increases in appropriations for the Weapons Activities account generally reflect the increasing workload to modernize the nuclear weapons stockpile and its associated infrastructure, as described in the 2010 and 2018 Nuclear Posture Reviews. According to an official in the Office of Defense Programs, that office has increased its use of SSCs because of the increase in refurbishment activities in the nuclear stockpile. Similarly, the internal review by NNSA’s Cost Estimating and Program Evaluation office attributed the increase in NNSA’s use of SSCs since 2012 to an increase in appropriations through the Weapons Activities account. According to an official from that office, the increased appropriations were for additional work related to weapons refurbishment and infrastructure modernization. Second, according to several NNSA officials, offices have increasingly used SSCs because of a decline in federal FTEs. As figure 4 shows, the number of NNSA’s federal FTEs funded through the Federal Salaries and Expenses account decreased from 1,897 in fiscal year 2010 to 1,608 in fiscal year 2018, a decrease of 15 percent. According to an NNSA official, this decline in federal FTEs is due, in part, to the annual statutory cap on federal FTEs that was to be implemented by October 1, 2014. 
An official explained that, by using SSCs, program offices have been able to accomplish the agency’s missions while remaining under the cap. Although the number of NNSA’s federal FTEs has generally decreased since fiscal year 2010, the change in federal FTEs has differed across program offices. From fiscal years 2013 through 2018, the number of federal FTEs in offices that support programs funded through the Defense Nuclear Nonproliferation appropriations account decreased, while those that support programs funded through the Weapons Activities appropriations account increased. For example, as shown in table 3, federal FTEs in the Office of Defense Nuclear Nonproliferation decreased by 22 percent from fiscal years 2013 through 2018. In contrast, the number of federal FTEs in the Office of Defense Programs increased 4 percent during the time period. In general, the number of federal FTEs supporting Defense Nuclear Nonproliferation activities has decreased, while appropriations for that office’s activities have remained consistent. In contrast, appropriations for Weapons Activities account have increased substantially, while the number of federal FTEs supporting those activities has increased by about 1.5 percent. According to some NNSA officials, SSCs provide the agency with flexibility to address new work needs that are episodic or specialized. This has led NNSA offices to use SSCs more frequently with the increased available appropriations and workload for Weapons Activities while remaining within the statutory FTE cap. 
Information on SSCs in NNSA’s Budget Justification Materials Is Not Complete, and Some Information Is Not Fully Useful to Support Congressional Decision-making

NNSA Reported Information on SSCs in Its Annual Congressional Budget Justification Materials

Starting in fiscal year 2017, NNSA reported information on SSCs in its annual congressional budget justification materials, but the information was not complete because NNSA did not include data on (1) all of its professional SSCs or (2) the number of FTE contractor personnel who worked under an SSC for more than 2 years, as required by the fiscal year 2016 NDAA. Additionally, some of the information NNSA reported was not fully useful to support congressional decision-making because it did not present the cost of SSCs in terms of obligations for 1 fiscal year and did not identify the specific appropriations accounts used to fund SSCs. The NDAAs for fiscal years 2016 and 2017 require NNSA to report annually certain information on its use of SSCs in its congressional budget justification materials. NNSA reported information on its SSCs in its annual congressional budget justification materials for fiscal years 2017 through 2020, its most recent justification. Figure 5 shows an excerpt of the SSC information NNSA reported in its fiscal year 2020 annual congressional budget justification materials. NNSA obtained data for the first six columns of the information on SSCs reported in the fiscal year 2020 congressional budget justification materials from its accounting and contracting systems, called the Standard Accounting and Reporting System (STARS) and Strategic Integrated Procurement Enterprise System (STRIPES), respectively. The vendor name column identifies the name of the contractor performing the work. The contract number and order number columns provide the unique identifier that NNSA uses for the contract.
If an SSC is a task order pursuant to an indefinite delivery/indefinite quantity contract, an order number is listed; otherwise the information is listed as “Unavailable.” The fund description column identifies the funding source for the contract—either (1) “Program” funding or (2) “FSE,” the latter of which represents SSCs funded through the Federal Salaries and Expenses appropriations account, which is also referred to as program-direction funding. In a few instances, the budget justification identifies the funding source as “both”—meaning both program funding and Federal Salaries and Expenses funding was combined to fund the contract. The “obligations to date” column provides the amount that NNSA has obligated on the contract since it was awarded. The “maximum value” column provides the total amount that could be obligated on the contract through the contract term and any options. NNSA collected the data on the number of FTE contractor personnel under each SSC—presented in the last column of figure 5—manually. Each year, the Office of Acquisition and Project Management requests information from contracting officers—in collaboration with program offices, CORs, and contractor staff, if needed—on the number of FTE contractor personnel working under contracts for professional SSCs. The information that the Office of Acquisition and Project Management provided to contracting officers states that each FTE represents 2,080 hours, each full-time employee is 1 FTE, and those who are less than full-time should be a portion of an FTE. According to NNSA officials, the agency uses this methodology for reporting FTE contractor personnel because the contracts do not require vendors to use a specific number of personnel to complete the work. Rather, the contractors determine the amount of labor needed to complete the scope of work under the contract.
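The counting convention described above (2,080 hours equals one FTE, with part-time staff counted fractionally) amounts to a simple calculation. The sketch below illustrates it; the helper function name and the sample staffing figures are illustrative assumptions, not NNSA's actual procedure:

```python
HOURS_PER_FTE = 2_080  # one full-time employee for one year, per NNSA's instructions

def fte_count(annual_hours_per_person):
    """Sum each person's annual hours as a fraction of a full work year."""
    return sum(hours / HOURS_PER_FTE for hours in annual_hours_per_person)

# Hypothetical contract staffing: two full-time staff and one half-time analyst
# yield 2.5 FTE contractor personnel for the contract.
staffing = [2_080, 2_080, 1_040]
total_ftes = fte_count(staffing)  # 2.5
```

Because vendors, not NNSA, decide how many people perform the work, the hours underlying such a count would have to come from the contractor.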
Information on SSCs in NNSA’s Budget Justification Materials Is Not Complete

The information that NNSA reported on its professional SSCs in its annual congressional budget justification materials was not complete because NNSA did not report information on all of its professional SSCs or on the number of FTE contractor employees who worked on the contract for more than 2 years, as required by the fiscal year 2016 NDAA. Reporting this information could provide some insight into how NNSA is using its SSCs and whether any of these contracts present increased risk for performance of personal services.

Budget Justification Materials Do Not Include Information on All Professional SSCs

Among other information, the NDAA for fiscal year 2016 required NNSA to include annually in its congressional budget justification materials a report on the number of its SSCs, as of the date of the report. Rather than report the number of SSCs, NNSA reported the names of vendors in its budget justifications. In its fiscal year 2017 congressional budget justification materials, NNSA reported the names of vendors but did not list the number of contracts it awarded to each vendor. In its congressional budget justification materials for fiscal years 2018 through 2020, NNSA reported the names of vendors and the contract number for each contract with a vendor. A count of the contracts included in NNSA’s annual congressional budget justification materials for this period showed NNSA used from 127 to 152 SSCs in fiscal years 2017 through 2020. NNSA officials involved with preparing the information included in the annual congressional budget justification materials said they made decisions on which SSCs to include and which to exclude based on the statutory language.
According to these officials, because the requirements in the NDAA specified that NNSA was to report the data on the number of SSCs “as of the date of the report,” the officials interpreted that to mean they should only include contracts that were active on the date they queried their accounting and contracting databases. The officials said they excluded SSCs for which the contracts expired before NNSA officials prepared the information for the annual congressional budget justification materials. To prepare the information, the officials said that they obtained data on all contracts that were active on the day they queried the database, which was in mid- to late-October. The officials said that if a contract’s performance period ended prior to that date, they did not include the contract in the annual congressional budget justification materials, even if NNSA obligated funds to the contract in that year. For example, if a professional SSC reached the end of its 5-year term on September 15, 2018, that contract would not be included in NNSA’s reporting on SSCs for fiscal year 2018. However, according to the officials, information on the contract would have been included in the annual congressional budget justification materials in the 4 prior fiscal years. Although NNSA reported on SSCs that were active as of the date the officials queried the database in its congressional budget justification materials, this information is not complete because NNSA did not report information on all of the professional SSCs to which it obligated funds in those years. According to our analysis of data from FPDS-NG, NNSA excluded from 31 to 42 contracts each year from its annual congressional budget justification materials for fiscal years 2017 through 2020. These unreported contracts accounted for from about $10 million to $31 million in obligations for SSCs each year, as shown in table 4. 
The SSCs NNSA reported in the annual congressional budget justification materials align with the reporting requirements in the NDAAs for fiscal years 2016 and 2017. However, this information does not provide complete information on the number of SSCs that NNSA used or for which the agency obligated funds at some point during the fiscal year and does not disclose which contracts were excluded. For each SSC that NNSA excludes from its annual congressional budget justification materials, Congress does not have information, such as the amount obligated, number of FTE contractor personnel, or funding source—information that could assist congressional decision-making about NNSA’s workforce and annual appropriations levels. By reporting information on all professional SSCs to which funds were obligated during the fiscal year in its annual congressional budget justification materials, NNSA could provide more complete information on the number of SSCs used to meet mission requirements, assisting Congress in making better informed decisions about NNSA’s annual appropriations levels.

Budget Justification Materials Do Not Include the Number of Contractor FTE Employees Working under Each Contract for More Than 2 Years

The NDAA for Fiscal Year 2016 requires NNSA to report annually in its congressional budget justification materials on the number of FTE contractor personnel who have been working under each SSC for more than 2 years. NNSA did not provide this information in its annual congressional budget justification materials for fiscal years 2017 through 2020 because, according to the budget justification materials, NNSA does not collect information on individual contractor personnel from vendors.
Specifically, NNSA included statements in its annual congressional budget justification materials for fiscal years 2017 through 2020 that the agency does not have information to address this requirement and that it is the responsibility of each individual contractor to determine who will perform the scope of work required by the terms and conditions of each contract. According to NNSA’s Office of the General Counsel, NNSA does not collect information on an individual contractor’s personnel because the vendor—not NNSA—is the employer of the contractor’s employees, and NNSA does not want to appear as if the agency is also their employer. Additionally, NNSA officials said that the agency does not have access to the personnel systems of its vendors and would not have information on whether contractor personnel worked on a contract for more than 2 years available to include in the annual congressional budget justification materials. NNSA officials also stated that they do not want to collect the names of individual contractors, although the NDAAs for fiscal years 2016 and 2017 do not require NNSA to collect or report the names of individual contractor personnel working on contracts for more than 2 years. NNSA officials currently have access to information, such as employee badge records and office organizational charts, that can be used to develop notional, or approximate, information on the number of FTE contractor personnel who have worked on an SSC for more than 2 years. For example, we reviewed current organizational charts for several NNSA organizations that included the names of SSC personnel. Additionally, NNSA officials said that they could require vendors to track and report data on FTE contractor personnel assigned to an SSC for more than 2 years to NNSA on an annual basis.
However, in addition to raising concerns about the perception of being a co-employer of the contractor personnel, the officials said that this additional requirement could increase contract costs and could be an administrative burden for NNSA and the contractors. Further, NNSA officials said it would be difficult to obtain the FTE data from vendors because, among other things, vendors’ methods for calculating FTE contractor personnel may vary from contract to contract and contractor personnel may work on a contract for only part of the year. The officials said the information would therefore need to be caveated significantly and may not be reliable. We understand the challenges in collecting the information; however, Congress has not modified or eliminated this reporting requirement in statute. In addition, the FAR identifies a service that can reasonably be expected to last more than 1 year as one element that may indicate a personal services contract. In a July 2015 report, DOE’s Inspector General identified 14 contracts out of its sample of 20 that exhibited one or more characteristics of a personal services contract. According to the report, this situation could lead observers to question NNSA’s management of its SSCs, although the report did not find any clear violations. The report also stated that the Office of Defense Programs’ self-assessment found that many contractor employees appeared to be assigned to particular organizations for multiple years. However, NNSA cannot know the number of FTE contractor personnel who have been working under each SSC for more than 2 years because it does not collect this information. By collecting the information as required by law, NNSA could provide Congress—as well as its own decision makers—with greater insight into how NNSA is using its SSCs, including whether these SSCs display any of the characteristics of personal services contracts.
Some Information on SSCs in NNSA’s Congressional Budget Justification Materials Is Not Fully Useful to Support Congressional Decision-making

NNSA reported information on obligations and funding sources used for SSCs in its annual congressional budget justification materials for fiscal years 2018 through 2020. However, some of the information is not fully useful to support congressional decision-making because it presents obligations for SSCs over multiple fiscal years, instead of presenting such obligations annually, and does not identify the specific program appropriations accounts, such as Weapons Activities and Defense Nuclear Nonproliferation, used to fund the contracts, as required by the fiscal year 2017 NDAA.

Congressional Budget Justification Materials Present Obligations over Multiple Fiscal Years

The NDAA for fiscal year 2017 directs NNSA to report annually in its congressional budget justification materials on the cost of each SSC, as of the date of the report. According to NNSA officials who prepared the information, in the absence of specific guidance from Congress on the information to report, NNSA reported the obligations to date and the maximum value for each contract in its annual congressional budget justification materials for fiscal years 2018 through 2020 (see fig. 5). According to NNSA officials, the obligations-to-date column in the annual congressional budget justification materials represents the cumulative obligations on each contract from when it was awarded through the October prior to the submission of the materials, and the maximum value column represents the maximum amount that NNSA can obligate on the contract over the contract’s base term and any options. NNSA officials told us they reported the obligations to date and maximum value of the contracts because they determined that these measures met the definition for reporting information on the cost of the contracts, as required by the NDAA.
According to the officials, they determined that obligations by fiscal year did not provide the total cost of an SSC because NNSA obligates funds on SSCs over multiple years, but the officials could provide obligations data by fiscal year if directed by Congress to do so. Additionally, NNSA officials said that the NDAA did not prescribe how the information was supposed to be reported, and they made a professional judgment on how best to report it. According to DOE’s information quality guidelines, the quality of information is measured, in part, by its utility, which the guidelines defined as the usefulness of the information to intended users. Because the information on the costs of SSCs is required to be included in NNSA’s report in its annual congressional budget justification materials, the intended users of the SSC information are the congressional appropriations and authorizing committees. However, staff from the Senate and House Armed Services Committees told us that the information on the cost of SSCs in the annual congressional budget justification materials was not fully useful because NNSA reported the amounts obligated over multiple fiscal years. By reporting information in this way, the cost data are not consistent across contracts and are not consistent with other information presented in the budget justification. Specifically:

Cost data are not consistent across contracts. For fiscal years 2018 through 2020, NNSA presented the data on obligations to date and maximum value of the contract without identifying the period of time included for each individual contract. This period of time, particularly for the obligations-to-date data, could vary significantly and could represent a period of a few months if the contract was awarded late in the year or multiple years if a contract was reaching the end of its term and option periods.
For example, NNSA reported obligating about $3.5 million on one SSC in its fiscal year 2019 annual congressional budget justification materials. Based on our analysis, NNSA obligated this amount over 4 years in amounts ranging from about $170,000 to about $1.2 million per year.

Cost data are not consistent with other information in the budget justification. Other information in the annual congressional budget justification materials—which is used to support annual appropriations decisions or the budget request for the coming year—is subject to requirements in OMB’s Circular A-11, which states that agencies should generally present financial information in terms of budgetary resources by year in the annual congressional budget justification materials. As presented, users of the annual congressional budget justification materials could be unintentionally misled by the information that NNSA reported on its SSCs. For example, NNSA reported in its annual congressional budget justification materials for fiscal year 2020 that the maximum contract value for its SSCs in fiscal year 2018 totaled about $824 million and included 884 FTE contractor personnel, as shown in figure 5. Although the columns are labeled appropriately, users of the annual congressional budget justification materials could misinterpret the information to include obligations over a single year, and the user could—incorrectly—assume that NNSA spent an average of about $930,000 per contractor FTE.

Budget Justification Materials Do Not Identify Specific Appropriations Accounts Used to Fund SSCs

The NDAA for Fiscal Year 2016 directs NNSA to report annually in its congressional budget justification materials whether program or program-direction funds supported each SSC as of the date of the report.
NNSA identified whether it funded each SSC through “program” or “Federal Salaries and Expenses” (which is program direction) accounts in its congressional budget justification materials for fiscal years 2017 through 2020 and totaled the cost data—which, as discussed earlier, represent multiple fiscal years of contract obligations—included in the table across all reported contracts (see fig. 5). As previously discussed, according to DOE’s information quality guidelines, quality information is measured by the usefulness of the information to the intended users. Staff from the Senate and House Armed Services Committees told us that the information on the funding source reported in the annual congressional budget justification materials was not fully useful because the budget justifications did not specify which program appropriation account—“Weapons Activities” or “Defense Nuclear Nonproliferation”—NNSA used to fund the SSCs and did not total the obligations by funding source. According to NNSA officials, they reported what was required by law. The NDAA directs NNSA to identify the funding source—either program or program direction accounts—for each SSC but does not specify that NNSA must report on the specific appropriations account or total the amount obligated by account. Based on our analysis of FPDS-NG data, NNSA’s obligations to SSCs varied significantly across the three appropriations accounts. For example, in fiscal year 2018, 84 percent of NNSA’s obligations for SSCs (about $162 million of the $194 million obligated for SSCs in that year) were from program appropriations and 15 percent (over $29 million) were from the Federal Salaries and Expenses account (see fig. 6). Of the amounts obligated for SSCs from program accounts in fiscal year 2018, 65 percent were from the Weapons Activities account, with the remaining 35 percent from the Defense Nuclear Nonproliferation account. 
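The fiscal year 2018 shares cited above can be reproduced from the dollar figures in the report (a sketch; the obligations amounts, the 65/35 split, and the appropriations totals of $10.6 billion and roughly $2 billion are the report's rounded figures, and the arithmetic and rounding are mine):

```python
# Fiscal year 2018 obligations for SSCs, rounded as reported.
total_obligations = 194e6  # total obligated for SSCs that year
program = 162e6            # from program appropriations
fse = 29e6                 # from the Federal Salaries and Expenses account

pct_program = round(program / total_obligations * 100)  # 84 percent
pct_fse = round(fse / total_obligations * 100)          # 15 percent

# Program obligations split 65/35 between Weapons Activities and
# Defense Nuclear Nonproliferation (DNN), per the report.
weapons = 0.65 * program  # about $105 million
dnn = 0.35 * program      # about $57 million

# Cross-check against each account's total appropriations cited earlier:
# $10.6 billion for Weapons Activities, roughly $2 billion for DNN.
pct_of_weapons_approps = round(weapons / 10.6e9 * 100)  # about 1 percent
pct_of_dnn_approps = round(dnn / 2e9 * 100)             # about 3 percent
```

The cross-check matches the "about 1 percent" and "about 3 percent" figures the report states for the two accounts; the small residual between $194 million and the $162 million plus $29 million reflects rounding and contracts funded from both sources.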
These amounts represent about 1 percent of the total appropriations for Weapons Activities and about 3 percent for Defense Nuclear Nonproliferation. NNSA is reporting whether program or program direction funds support the contracts, as required. As previously discussed, NNSA guidance states that offices should use program funding for SSCs that produce deliverables and short-term, specific program-related technical support. However, by reporting in NNSA’s annual congressional budget justification materials the specific program appropriations account—Weapons Activities or Defense Nuclear Nonproliferation—used to fund each SSC and totaling the amounts obligated by appropriations account, NNSA would have more reasonable assurance that Congress had insight into which programs the SSCs supported. This reporting could facilitate congressional oversight of NNSA’s use of funds for SSCs by account and could assist NNSA in workforce planning should Congress reevaluate its FTE cap.

NNSA May Not Be Effectively Managing Potential Risks of Contractors Performing Inherently Governmental Functions

NNSA identifies SSCs that are more likely to have the potential of including inherently governmental functions in its input to DOE’s annual service contract inventory analysis and its determination forms, but the agency may not be effectively managing the potential risks of SSCs that it determines may include such functions. The Consolidated Appropriations Act, 2010, requires civilian agencies to submit to OMB annual inventories of their service contracts. According to OMB guidance, the service contract inventory is a tool to assist an agency in better understanding how contracted services are being used to support mission and operations. NNSA’s input to DOE’s annual service contract inventory for fiscal years 2015 through 2017 identified a significant number of SSCs that included functions that approached being inherently governmental.
For example, NNSA’s 2017 inventory analysis reported that contract specialists identified 621 of 775 contract actions, totaling over $170 million in obligations in that year, as more likely to have the potential to include inherently governmental functions. The analysis identified 194 contract actions as closely associated with inherently governmental functions, 10 as critical functions, and 51 as both closely associated with inherently governmental functions and related to critical functions. Based on our analysis of data in FPDS-NG for fiscal year 2018, NNSA identified 37 of its 166 professional SSCs as closely associated with inherently governmental functions and 4 contracts as related to critical functions. Additionally, as discussed previously, prior to awarding an SSC, officials in the office for which the SSC will provide services and the contracting officer fill out a determination form that includes questions about whether the draft performance work statement includes tasks related to the parts of the FAR that identify inherently governmental functions and functions that can approach being inherently governmental. Tasks included in the performance work statements for SSCs vary widely and could present unique risks for including inherently governmental functions. The purpose of the determination form is to mitigate the risk of awarding an SSC that includes inherently governmental functions. The determination forms include a statement that, among other things, the agency has sufficient capacity and capability to give special management attention to contractor performance, limit or guide the contractor’s exercise of discretion, and avoid or mitigate conflicts of interest.
To better understand how NNSA manages the risks of SSCs including inherently governmental functions, we reviewed the performance work statements for SSCs in our sample and, for contracts that had the potential to include inherently governmental functions, discussed how the contracting officials oversee contracts. For one contract we reviewed, the performance work statement called for the contractor to award contracts on behalf of NNSA with foreign organizations and review deliverables and technical performance. The FAR lists awarding contracts and administering contracts as two examples of functions considered to be inherently governmental. The contracting officials overseeing this contract said they do not typically see such a task in a performance work statement but noted that the contract was originally awarded in 2012, prior to those officials’ oversight of the contract. Contract oversight can change throughout the life of a contract—which can extend to 5 years and beyond—and the contracting officials assigned to manage an SSC can change throughout the contract. The contracting officials also told us that they were not concerned that the contract could include inherently governmental functions, as the program office supported by this contract was heavily involved in the activity. The FAR, however, states that awarding contracts and administering contracts are considered to be inherently governmental functions. In another contract we reviewed, the performance work statement included activities that, among other things, involved contractors conducting annual visits to a foreign country to meet and confer with military and governmental officials to develop opportunities for greater access by NNSA to foreign officials. The FAR lists the conduct of foreign relations and the determination of foreign policy among functions considered to be inherently governmental. 
The contracting officials for the contract said that the program office reviews information to be presented during the visits in advance of the meetings and that federal officials attend some of the meetings, allowing NNSA to ensure that the functions performed by the contractor do not become inherently governmental. In 2011, the Office of Federal Procurement Policy issued a policy letter that states agencies should review, on an ongoing basis, the functions being performed by their contractors, paying particular attention to the way in which contractors are performing, and agency personnel are managing, contracts involving functions that are closely associated with inherently governmental functions and contracts involving critical functions. According to the policy letter, these reviews should be conducted in connection with the development and analysis of inventories of service contracts. The policy letter also calls for agencies to ensure that they have sufficient internal capability to control their missions and operations. Additionally, according to the Consolidated Appropriations Act, 2010, after submitting the service contract inventories, the agency must review the contracts and information in the inventory and ensure that, among other things:

- the agency is giving special management attention to functions that are closely associated with inherently governmental functions;
- the agency is not using contractor employees to perform inherently governmental functions;
- the agency has specific safeguards and monitoring systems in place to ensure that work that contractors are performing has not changed or expanded during performance to become an inherently governmental function;
- the agency is not using contractor employees to perform critical functions in such a way that could affect the agency’s ability to maintain control of its mission and operations; and
- there are sufficient internal agency resources to manage and oversee contracts effectively.
DOE’s service contract inventory analysis for fiscal year 2017 stated that NNSA offers training on inherently governmental contracts on a periodic basis and also uses the determination form, which is completed before the contract is awarded, to ensure that all contracts with inherently governmental potential receive proper attention. However, these steps may not allow NNSA to effectively manage the potential risks of contractors performing inherently governmental functions throughout the life of the contract. First, officials complete the required determination forms prior to awarding an SSC, and NNSA does not take steps to ensure that contracting officers document the steps that they plan to take to oversee specific SSCs, including those the agency determined carry a risk for the performance of inherently governmental functions. This is, in part, because the determination form does not require the contracting officers to include such information on the form. By documenting on the determination form specific steps that the contracting officer plans to take to address the risks of the particular contract, NNSA can better ensure that the functions contractors are performing and the way they perform them do not evolve into inherently governmental functions. Second, NNSA has no process, whether in connection with the development and assessment of the service contract inventory or otherwise, to verify that contracting officers are performing planned oversight. Under federal internal control standards, management should design control activities to achieve objectives and respond to risks, such as by comparing actual performance to planned or expected results and analyzing significant differences.
By developing a process to verify that the contracting officer has implemented the planned oversight steps for SSCs that have a high risk of including inherently governmental functions throughout the term of the contract, NNSA would have better assurance that planned oversight was being carried out. Taking these actions could also help NNSA better ensure that planned oversight steps continue, even if the contracting officer or other oversight official changes during the term of the contract.

Conclusions

Since 2010, NNSA has increasingly used professional SSCs across the agency to meet the demands of its increasing workload at a time when the size of its federal workforce has decreased. However, the use of SSCs can also prove challenging, as many of the services categorized as professional and management may be closely aligned with inherently governmental functions, increasing the risk that contractors may inappropriately influence the government’s authority, control, and accountability for decisions. We identified four ways NNSA could improve the completeness and usefulness of its reporting on its SSCs in its annual congressional budget justification materials. Such efforts could assist with congressional decision-making. First, NNSA did not include data on all professional SSCs to which funds were obligated during the fiscal year. By including such data, NNSA could provide more complete information on the number of SSCs used to meet mission requirements, assisting Congress in making better informed decisions about NNSA’s annual appropriations levels. Second, NNSA did not report information on the number of FTE contractor personnel working under the same contract for more than 2 years. NNSA officials identified difficulties in collecting the information. Collecting the information, as required by law, could provide Congress and NNSA’s own decision-makers with greater insight into how NNSA is using its SSCs.
Third, NNSA did not present the cost of its SSCs in terms of obligations for 1 fiscal year. By reporting annual obligations data for each SSC, NNSA could more accurately represent its annual budgetary needs for the support needed to perform its missions. Fourth, NNSA did not identify the specific appropriations accounts used to fund SSCs. By identifying such accounts, NNSA would have more reasonable assurance that Congress had insight into which programs the SSCs supported, facilitating congressional oversight of NNSA’s use of funds for SSCs by account and assisting NNSA in workforce planning should Congress reevaluate NNSA’s FTE cap. Additionally, we identified two ways that NNSA could better manage the potential risks of contractors performing inherently governmental functions over the life of a contract. First, NNSA has not taken steps to ensure that contracting officers document the steps that they plan to take to oversee SSCs identified as at high risk of including inherently governmental functions on the determination forms. Second, NNSA does not have a process to verify that contracting officers are performing planned oversight for contracts that NNSA has identified as more likely to have the potential of including inherently governmental functions. By taking steps to document and verify that contracting officers have implemented the planned oversight steps for SSCs that may include inherently governmental functions throughout the term of the contract, NNSA would have better assurance that planned oversight was being carried out.

Recommendations for Executive Action

We are making the following six recommendations to NNSA: The Associate Administrator for Acquisition and Project Management should report information on all professional SSCs to which funds were obligated during the fiscal year in its annual congressional budget justification materials.
(Recommendation 1) The Associate Administrator for Acquisition and Project Management should collect and report all required data regarding the number of FTE contractor personnel employed under an SSC for more than 2 years. (Recommendation 2) The Associate Administrator for Acquisition and Project Management, in coordination with NNSA’s Office of Management and Budget, as appropriate, should report annual obligations data by fiscal year, as part of its reporting on SSCs in annual congressional budget justification materials. (Recommendation 3) The Associate Administrator for Acquisition and Project Management should report in NNSA’s annual congressional budget justification materials the program appropriations account—Weapons Activities or Defense Nuclear Nonproliferation—used to fund each SSC and total the amounts obligated by appropriations account. (Recommendation 4) The Associate Administrator for Acquisition and Project Management should take steps to ensure that contracting officers document—in the required determination form or elsewhere in the contract file—information on the steps that the contracting officers plan to take to oversee SSCs that NNSA has determined to be at high risk of including inherently governmental functions. (Recommendation 5) The Associate Administrator for Acquisition and Project Management should develop a process to verify that contracting officers are carrying out the steps identified to oversee contracts at risk of including inherently governmental functions throughout the term of the contract. (Recommendation 6) Agency Comments and Our Evaluation We provided a draft of this report to NNSA for review and comment. In its written comments, which are reproduced in full in appendix II, NNSA generally agreed with the report’s six recommendations and described actions that it intends to take in response to them. 
With regard to the second recommendation to collect and report required data on the number of full-time equivalent contractor personnel employed under an SSC for more than 2 years, we recognize the difficulties in collecting this information and appreciate that the agency intends to meet with congressional staff to discuss ways to address this issue. We continue to believe that collecting this information will provide NNSA and congressional decision-makers with greater insight into how NNSA uses its SSCs, including whether these SSCs display any of the characteristics of personal services contracts. With regard to the fifth recommendation to take steps to ensure that contracting officers document information on the steps the contracting officers plan to take to oversee SSCs that are determined to be at high risk of including inherently governmental functions, NNSA stated that it considers the recommendation closed based on processes already in place as well as the complementary activities discussed in response to our sixth recommendation. We continue to believe that documenting planned oversight activities in the contract files is important to ensure that planned oversight is consistent throughout the duration of the contract, particularly in light of OMB’s call for agencies’ ongoing review of the functions performed by its contractors and the potential for contracting officers to change over the life of the contract. The agency also provided technical comments, which we incorporated into our report, as appropriate. We are sending copies of this report to appropriate congressional committees, the Administrator of NNSA, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or bawdena@gao.gov. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Appendix I: Scope and Methodology This report examines the extent to which: (1) the National Nuclear Security Administration (NNSA) used professional support service contracts (SSC) in fiscal years 2010 through 2018, (2) the information about SSCs in NNSA's annual congressional budget justification materials for fiscal years 2017 through 2020 is complete and useful to support congressional decision-making, and (3) NNSA manages the potential risks of SSCs that it determines are at high risk for providing inherently governmental functions. Overall, our review focused on NNSA's use of professional SSCs. For the purposes of this report, we define professional SSCs to include contracts for activities such as program management support, administrative assistance, technical assistance, and engineering and technical services, consistent with NNSA's definition of professional SSCs used to report the required information in its annual congressional budget justification materials. We excluded NNSA's Office of Naval Reactors from our review because it is managed as a separate entity within NNSA. To address the first objective, we obtained and analyzed data on NNSA's professional SSCs for fiscal years 2010 through 2018 from the Federal Procurement Data System-Next Generation (FPDS-NG), including the contract number, the amounts obligated to the contract in the fiscal year, the funding source, and the product service code assigned to the contract. We performed electronic testing of the data to identify missing data, obvious errors, or outliers; reviewed documentation; and determined the data were sufficiently reliable to summarize the number of SSCs, amounts obligated, funding sources, and product service codes for NNSA's SSCs in fiscal years 2010 through 2018. 
Unless otherwise specified, we report dollar figures as current dollars. In selected places, we also report inflation-adjusted dollars that are in constant 2018 dollars and were computed using a gross domestic product price deflator. To determine the kinds of tasks for which NNSA used its SSCs, we reviewed performance work statements for a nongeneralizable sample of 12 contracts. We selected contracts from the 407 SSCs NNSA reported in its annual congressional budget justification materials for fiscal years 2017 through 2019. We selected contracts that ranged in award amounts and represented work performed for different NNSA offices. In addition, to understand changes in NNSA's use of SSCs, we analyzed data on NNSA's appropriations and the number of federal full-time equivalent (FTE) employees for fiscal years 2010 through 2018. NNSA provided data on FTEs as of the last day of the last pay period of each fiscal year. We did not include federal FTE data by program office prior to fiscal year 2013 because NNSA restructured the organization, and the organizational structure prior to 2013 was not comparable to the current organizational structure. We reviewed the data for obvious errors or outliers; compared the federal FTE data to other sources; discussed the data with officials; and determined the data were sufficiently reliable to show changes in the size of NNSA's workforce over the time period. We also obtained and analyzed data by program office on the number of FTE contractor personnel from fiscal years 2015 through 2018. According to an NNSA official, NNSA did not collect data on FTE contractor personnel prior to fiscal year 2015. We reviewed the data for obvious errors or outliers and interviewed NNSA officials knowledgeable about the process to collect the data and NNSA officials who completed an internal study that, among other things, independently collected and verified the number of FTE contractor personnel by program office. 
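The constant-dollar conversion mentioned at the start of this methodology discussion can be sketched as follows. This is an illustrative example only: the deflator index values below are placeholders, not actual Bureau of Economic Analysis figures, and the function name is an assumption.

```python
# Hypothetical sketch of converting current-dollar obligations to
# constant 2018 dollars using a GDP price deflator, as described in
# the methodology. Index values are illustrative placeholders only.

GDP_DEFLATOR = {
    2010: 96.1,   # illustrative index value, not actual BEA data
    2014: 104.8,
    2018: 110.4,
}

def to_constant_2018_dollars(amount, fiscal_year):
    """Rescale a current-dollar amount to constant 2018 dollars."""
    return amount * GDP_DEFLATOR[2018] / GDP_DEFLATOR[fiscal_year]

# Example: $100 million obligated in fiscal year 2010, restated in
# constant 2018 dollars.
print(round(to_constant_2018_dollars(100.0, 2010), 1))
```

The key point is that amounts from different fiscal years become comparable only after each is rescaled by the ratio of the base-year index to that year's index.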
Although we identified that NNSA did not report data on all of its SSCs, we determined the data were sufficiently reliable to illustrate changes in the number of FTE contractor personnel by program office for fiscal years 2015 through 2018. Further, to determine how NNSA uses its SSCs, we also reviewed two NNSA workforce studies and interviewed agency officials in program offices that used SSCs in fiscal years 2015 through 2018. To address the second objective, we compared the information on SSCs in NNSA’s annual congressional budget justification materials for fiscal years 2017 through 2020 with the requirements in the NDAA for fiscal years 2016 and 2017. We also reviewed documentation and interviewed NNSA officials from the Office of Acquisition and Project Management to determine how they prepared the information included in the annual congressional budget justification materials. We compared NNSA’s process for reporting information on SSCs to DOE’s information quality guidelines, particularly the sections related to completeness and usefulness of the information. Additionally, we compared the data on SSCs included in NNSA’s annual congressional budget justification materials to data in FPDS-NG to determine whether NNSA included all of its SSCs in the budget justification. To perform this analysis, we obtained data from FPDS-NG on all of NNSA’s active SSCs for fiscal years 2015 through 2018. We assessed the reliability for these data as described previously. For each fiscal year, we included only the SSCs that met NNSA’s definition of professional SSCs using the 77 product service codes. We also removed from the data any contracts listed that had $0 obligations or negative obligations for the fiscal year. For the remaining contracts, we compared the task order or contract numbers included in the FPDS-NG data to the task order or contract numbers that NNSA reported in its annual congressional budget justification materials. 
For those contracts where there was not a match between the annual congressional budget justification materials data and the FPDS-NG data on the task order or contract number, we reviewed the data manually to ensure there was not an error in the formula used or an error in the data that was easily identifiable, such as a transposed or missing digit in the task order or contract number. We discussed the list of contracts that were not included in NNSA's annual congressional budget justification materials with officials responsible for the reporting to determine why the contracts were excluded. To address the third objective, we reviewed documents, such as applicable Federal Acquisition Regulation (FAR) provisions and NNSA policy documents, and interviewed officials from NNSA's Office of Acquisition and Project Management, Office of Management and Budget, and Office of General Counsel to determine how NNSA oversees its SSCs. We also reviewed performance work statements for the nongeneralizable sample of 12 contracts discussed above to identify oversight activities and determine whether they included examples of tasks that could have characteristics of inherently governmental functions. We reviewed determination forms for eight of the 12 SSCs in our sample for which NNSA could provide the forms. We also interviewed NNSA's contracting officers or contracting officer's representatives and representatives from 11 of the 12 contractors in our sample to learn how NNSA and the contractors manage the contracts. When referring to the findings from these interviews, we use "some" to refer to 3 to 4 interviews, "several" to refer to 5 to 6 interviews, "many" to refer to 7 to 9 interviews, and "most" to refer to 10 to 11 interviews. In addition, we reviewed NNSA's service contract inventory analysis reports from fiscal years 2015 through 2017 to obtain information on contracts that NNSA had identified as having the potential to include inherently governmental functions. 
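The data comparison described above — filtering FPDS-NG records to professional SSCs with positive obligations, then flagging contract numbers absent from the budget justification data — can be sketched as follows. The field names, sample records, and the small stand-in set of product service codes are illustrative assumptions, not actual NNSA data.

```python
# Hypothetical sketch of the comparison methodology: keep only FPDS-NG
# records with a professional-services product service code (PSC) and
# positive obligations, then list contract numbers not found in the
# budget justification data for manual review.

fpdsng_records = [
    {"contract": "DE-NA0001111", "psc": "R408", "obligations": 250000.0},
    {"contract": "DE-NA0002222", "psc": "R425", "obligations": 0.0},      # dropped: $0
    {"contract": "DE-NA0003333", "psc": "R499", "obligations": -5000.0},  # dropped: negative
    {"contract": "DE-NA0004444", "psc": "R408", "obligations": 90000.0},
]
professional_psc_codes = {"R408", "R425", "R499"}   # stand-in for the 77 codes
budget_justification_contracts = {"DE-NA0001111"}   # reported in the materials

filtered = [
    r for r in fpdsng_records
    if r["psc"] in professional_psc_codes and r["obligations"] > 0
]
unreported = sorted(
    r["contract"] for r in filtered
    if r["contract"] not in budget_justification_contracts
)
# Contracts remaining here would be checked manually for data errors
# (e.g., a transposed digit) before discussing exclusions with officials.
print(unreported)
```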
We conducted this performance audit from October 2017 to September 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Comments from the National Nuclear Security Administration Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Hilary Benedict (Assistant Director); Bridget Grimes (Analyst in Charge); Ellen Fried; Cindy Gilbert; Elizabeth Jimenez; Julia Kennon; Dan C. Royer; Sylvia Schatz; and Tatiana Winger made key contributions to this report.
Why GAO Did This Study The Department of Energy's NNSA relies on federal employees and contractor personnel to carry out its mission. SSCs fill essential needs, and their use requires special diligence to ensure applicable statutes, regulations, and management practices are followed. The House report on the National Defense Authorization Act for Fiscal Year 2018 included a provision for GAO to report on NNSA's use of SSCs. This report examines the extent to which: (1) NNSA used SSCs for professional support in fiscal years 2010 through 2018; (2) the information about SSCs in NNSA's annual congressional budget justification materials for fiscal years 2017 through 2020 is complete and useful to support congressional decision-making; and (3) NNSA manages the potential risks of SSCs that it determines are at high risk for providing inherently governmental functions. GAO analyzed agency data; reviewed documentation; and interviewed federal and contractor officials representing a non-generalizable sample of 12 SSCs out of 407, selected to represent a range of years and contract obligations. What GAO Found The National Nuclear Security Administration (NNSA) obligated about $193 million in fiscal year 2018 for support service contracts (SSC), an increase of nearly 40 percent since 2010. These contracts provide a variety of professional support services, such as program management support. Officials attribute the increased use of SSCs to growth in appropriations and workload for the modernization of nuclear weapons and related infrastructure, coupled with a decrease in the number of authorized federal staff after the statutory cap was lowered from fiscal year 2014 to fiscal year 2015. Information on SSCs in NNSA's congressional budget justification materials is not complete or fully useful for congressional decision-making because, among other things, NNSA did not include information on all of its professional SSCs. 
NNSA is required to report annually certain information about SSCs, including the number and cost of SSCs, in its materials. NNSA reported information on its SSCs in its materials for fiscal years 2017 through 2020. However, NNSA's reporting was not complete because NNSA excluded information on 31 to 42 contracts each year (see fig. for fiscal year 2020). According to officials, they excluded contracts that expired during the fiscal year. By reporting information on all professional SSCs to which funds were obligated during the fiscal year, NNSA could provide more complete information to Congress that it could use to make better informed decisions about NNSA's annual appropriations levels. NNSA may not be effectively managing the potential risks of contractors performing inherently governmental functions—those that must be performed by a government employee—for contracts NNSA identifies as having the potential for providing such functions. NNSA identifies such SSCs through required assessments. However, contracting officers are not required to document planned steps to oversee these contracts, and the agency does not verify that planned oversight is performed. Contracting officers who oversee SSCs can change during the life of a contract. By documenting steps that contracting officers plan to take to oversee contracts with a high risk of including inherently governmental functions—and verifying that the planned oversight occurs—NNSA can better ensure over the life of the contract that the functions contractors are performing do not evolve into inherently governmental functions and that planned oversight is completed. What GAO Recommends GAO is making six recommendations to NNSA, including that NNSA: (1) report information on all professional SSCs to which funds were obligated during the fiscal year; (2) document plans to oversee SSCs that have a high risk of including inherently governmental functions, and (3) verify that the planned oversight occurs. 
NNSA generally agreed with the recommendations.
Background NMFS and the eight regional fishery management councils are responsible for managing approximately 460 fish stocks in federal waters, as shown in figure 1. NMFS has overall responsibility for collecting data on fish stocks and ocean conditions and for generating scientific information for the conservation, management, and use of marine resources. NMFS carries out this responsibility primarily through its five regional offices and six regional fisheries science centers, which are responsible for collecting and analyzing data to conduct stock assessments. Stock assessments consider information about the past and current status of a managed fish stock, including information on fish biology, abundance, and distribution that can be used to inform management decisions. To the extent possible, stock assessments also predict future trends of stock abundance. NMFS provides the results of its stock assessments and other analyses, as appropriate, to the councils for use in implementing their respective fisheries management responsibilities. In the South Atlantic and Gulf of Mexico regions, NMFS provides support to the councils’ management efforts through its Southeast Regional Office and the Southeast Fisheries Science Center. Under the Magnuson-Stevens Act, the councils are responsible for managing the fisheries in their region. This includes developing fishery management plans, subject to NMFS approval, based on the best scientific information available and through collaboration with a range of stakeholders. The councils convene committees and advisory panels to assist them in developing research priorities and selecting fishery management options, in addition to conducting public meetings. The councils are to comprise members from federal and state agencies, as well as the commercial and recreational fishing sectors (see fig. 2). 
The councils—supported by council staff such as biologists, economists, and social scientists—are responsible for preparing proposed fishery management plans or plan amendments for NMFS review. These plans or amendments are to identify, among other things, conservation and management measures to be used to manage a fishery, including determining the maximum size of a fish stock’s allowable harvest. This is generally done by developing annual catch limits for each fish stock, that is, the amount of fish that can be harvested in the year. Fishery management plans or amendments also include establishing or revising any allocations between the commercial and recreational sectors for mixed-use fish stocks where the councils determine it may be warranted. For example, councils may allocate a percentage of a fish stock’s annual catch limit between the recreational and commercial fishing sectors. See figure 3 for an overview of the federal fisheries management process. Council staff facilitate the fisheries management process by organizing council meetings, preparing and providing analyses for those meetings, and facilitating input from stakeholders and the public on fisheries management issues, among other things. Stakeholders include participants in the commercial and recreational fishing sectors and related industries, such as fishing associations, seafood dealers and processors, food and travel industry representatives, and conservation groups. Once the councils complete proposed fishery management plans or plan amendments, they are to provide them to NMFS for review. NMFS is responsible for determining if the plans or amendments are consistent with the Magnuson-Stevens Act and other applicable laws, and for issuing and enforcing final regulations to implement approved plans. Tables 1 and 2 highlight the mixed-use fish stocks the South Atlantic and Gulf of Mexico councils manage, respectively. 
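The allocation approach described above — splitting a fish stock's annual catch limit between the commercial and recreational sectors, most often in proportion to each sector's historical landings — can be sketched with simple arithmetic. All figures below are illustrative, not actual council data.

```python
# Hypothetical sketch of a proportional allocation: each sector's share
# of the annual catch limit is set by its share of historical landings.
# The landings figures and catch limit are illustrative only.

historical_landings = {"commercial": 600.0, "recreational": 400.0}  # e.g., tons
annual_catch_limit = 1000.0  # tons available for the fishing year

total_landings = sum(historical_landings.values())
allocation = {
    sector: round(landings / total_landings * annual_catch_limit, 1)
    for sector, landings in historical_landings.items()
}
print(allocation)  # each sector's portion of the annual catch limit
```

In this sketch, a 60/40 split of historical landings yields a 60/40 split of the catch limit; revising an allocation amounts to changing the percentages (or the underlying data source) used in this calculation.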
Fisheries Allocations Under the Magnuson-Stevens Act’s national standards for fishery management plans, allocations are to be fair and equitable to all U.S. fishermen; reasonably calculated to promote conservation; and carried out in such manner that no particular individual, corporation, or other entity acquires an excessive share. NMFS guidelines for the national standards further indicate that in making allocations, councils should consider certain factors relevant to the fishery management plan’s objectives. These factors include economic and social consequences of the allocations, food production, consumer interest, dependence on the fishery by present participants and coastal communities, efficiency of various types of gear used in the fishery, transferability of effort to and impact on other fisheries, opportunity for new participants to enter the fishery, and enhancement of opportunities for recreational fishing. In reviewing and approving fishery management plans and amendments, NMFS is responsible for ensuring that the councils’ allocation decisions comply with the Magnuson-Stevens Act’s national standards. In this report, the terms “established” and “revised” allocations refer to allocations established or revised by the councils and subsequently approved by NMFS, unless otherwise stated. Historically, mixed-use fisheries allocations have been based predominantly on data estimating each fishing sector’s past use of the resource, according to NOAA. To collect commercial and recreational data, NMFS works with partners such as coastal states and interstate marine fisheries commissions. In particular, for the commercial fishing sector, NMFS collects data on landings, which include the weight and value of fish stocks sold to seafood dealers using a network of cooperative agreements with states. 
For recreational fishing, NMFS uses data from its Marine Recreational Information Program, which the agency began implementing in 2008 in place of the Marine Recreational Fisheries Statistics Survey. The Marine Recreational Information Program collects data on private anglers’ fishing effort and catch rates and uses these to estimate total recreational fishing catch. NMFS officials said that the program also collects information to estimate recreational landings. The program collects these data through such methods as mail surveys and shore-side interviews of anglers at public access fishing sites. Recognizing the difficulty in making allocation decisions—in part because allocations may be perceived as unfair by some stakeholders—NMFS commissioned a nationwide study in 2012 to examine allocation issues and gain stakeholders’ perspectives from commercial and recreational fishing sectors. The results of the study showed widespread dissatisfaction with how past allocation decisions were made. The study found little consensus on how to address concerns with allocations. For example, some stakeholders said that some allocations were outdated and that changes over time in human population, seafood demand, and recreational fishing warranted a comprehensive examination of allocations. Other stakeholders expressed concern that a uniform approach to allocation policy could harm fishing sectors, while others noted that it is important for the councils to have the flexibility to make regionally-focused decisions. The study concluded that many stakeholders may continue to view allocations as unbalanced or unfair unless the outcomes align with the positions they seek. The study recommended that NMFS take a number of steps to address allocation issues, including increasing stakeholder engagement in allocation decisions, periodically reviewing allocations, and creating a list of factors to guide allocation decisions. 
In response to the 2012 study, NMFS issued a fisheries allocation review policy in 2016 and two guidance documents to the councils, intended to help the councils and NMFS review and update allocations. The objective of the NMFS policy was to describe the fisheries allocation review process, which called for using an adaptive management approach. NMFS policy defined fisheries allocation review as the evaluation that leads to the decision of whether or not the development and evaluation of allocation options is warranted, but the allocation review is not, in and of itself, an implicit trigger to consider alternative allocations. Through its policy, NMFS established a multi-step process for reviewing and potentially revising fisheries allocations. Specifically, once an allocation review trigger has been met (as described below), the councils are to complete an allocation review. For this review, NMFS policy does not call for in-depth analyses but calls for a clear articulation of how objectives are or are not being met and a clear rationale and documentation on relevant factors considered. Based on the allocation review, the councils may decide to maintain existing allocations, or proceed to evaluate allocation options for a fishery management plan amendment. When proceeding with this next step, the councils are to undertake formal analyses and follow the fishery management plan amendment process to ultimately recommend that an existing allocation either be retained or revised. To supplement its fisheries allocation review policy, NMFS also issued two guidance documents, as follows: Criteria for initiating fisheries allocation reviews. NMFS guidance recommended that the councils establish criteria for initiating allocation reviews—or allocation review triggers—within 3 years, or as soon as practicable, for all fisheries that have allocations between sectors. 
The guidance identified three types of potential criteria for allocation review triggers: (1) time-based, which include provisions for periodic allocation reviews at specific time intervals on a regular basis; (2) public interest-based, which provide an opportunity for the public to express interest in allocation reviews; and (3) indicator-based, such as triggers based upon economic or other metrics. Factors to consider when reviewing and making allocation decisions. NMFS guidance outlined four categories of factors for the councils to consider when making allocation decisions, and noted that there may also be other appropriate factors to consider. These factors are not intended to prescribe particular outcomes with respect to allocations, but rather are intended to provide a framework for analysis, according to the guidance. The four categories of factors include: Fishery performance and change factors, to assess the current conditions of a fishery and any changes in those conditions that may indicate a need for updated allocations. Such factors could include historical or current trends in catch or landings, the status of the fish stock (for example, whether it is subject to overfishing, is overfished, or is rebuilding), or changes in the distribution of species within the fishery. Economic factors, to consider the monetary consequences of an allocation, such as by analyzing (1) whether the existing or recommended allocation is the most economically efficient, and (2) the economic impacts of the allocation. Social factors, to assess the consequences of an allocation on individuals and communities, such as whether an allocation may have disproportionate adverse effects on low income or minority groups or could lead to fishing despite unsafe conditions if access to the fishery is restricted to a limited number of days. 
Ecological factors, to consider the potential ecological impacts of allocations, such as impacts on the habitat or predator-prey dynamics of the fishery or of other fisheries within the ecosystem. South Atlantic and Gulf of Mexico Councils Have Established and Revised Allocations to Varying Degrees Since the Magnuson-Stevens Act was passed in 1976, the South Atlantic and Gulf of Mexico councils have established and revised allocations to varying degrees for the mixed-use fish stocks they manage in their regions. The South Atlantic council has established allocations for almost all of its mixed-use fish stocks and the Gulf of Mexico council has done so for certain stocks. South Atlantic Council Has Established Allocations for Almost All Mixed-Use Fish Stocks and Revised Most of those Allocations in 2012 Based on documents from the South Atlantic council, we found that the council has established allocations for 50 of the region’s 51 mixed-use fish stocks. The council first established an allocation for one fish stock—king mackerel—in 1985. From 1987 through 2010, the council set allocations for eight fish stocks. The council then established most allocations, encompassing 40 of its mixed-use fish stocks, in 2011, with allocations generally based on estimates of each fishing sector’s historical landings. The council’s most recently established allocation—for a cobia stock—was in 2014, according to council documents. Appendix I provides additional information on the allocations for the mixed-use fisheries in the South Atlantic council region and the years in which the council established and revised allocations. According to South Atlantic council staff, the council’s approach to revising allocations has been to rely on stakeholder input to inform them of allocations that may need revision but to otherwise leave established allocations in place. 
For example, council staff noted that the allocation for king mackerel—which distributes a percentage of the annual catch limit to each fishing sector—has not changed since 1985 because it is still effective for both the commercial and recreational fishing sectors. Council staff explained that because neither sector has typically caught the amount of king mackerel it has been allocated, the council has not needed to revise the allocation. As of December 2019, the South Atlantic council had revised allocations for most of its mixed-use fish stocks once, according to council documents, as shown in table 3. The council revised allocations for 30 fish stocks in 2012, based on changes to the source of recreational catch data the council was using in its formulas for calculating allocation percentages. The South Atlantic council has revised few allocations more than once. Specifically, it revised allocations for two fish stocks twice and for one, dolphin, three times. For example, the council first established an allocation for dolphin (also known as mahimahi, dolphinfish, and dorado) in 2003. It established the allocation to maintain the fishery as predominantly recreational and based the allocation on historical landings, according to the council's fishery management plan (see fig. 4). According to council documents, the council then revised the dolphin allocation three times: in 2011, when initially setting annual catch limits for dolphin; in 2013, based on changes to the source of recreational catch data used to calculate allocation percentages; and in 2015, because the recreational sector had not been catching the amount of fish it was allocated, and the council was concerned that the commercial sector could exceed its allocation in the future. The extent to which the South Atlantic council may have considered other revisions to allocations is unclear. 
For example, South Atlantic council staff said that their council had deliberated on revising allocations for some fish stocks at council meetings, but they do not have records of the deliberations because the council decided not to make revisions and did not initiate related fishery management plan amendments. South Atlantic council staff explained that they document all allocation revisions through fishery management plan amendments, but they have not otherwise formally documented reviews that did not result in revisions. Council staff said they recognize the need to better document such reviews in the future; however, the council did not identify how it plans to do so, as discussed later in this report. Gulf of Mexico Council Has Established Allocations for Certain Mixed-Use Fish Stocks and Revised Three of Those Allocations in 2008 The Gulf of Mexico council established commercial and recreational allocations for nine of the region's 23 mixed-use fish stocks, according to documents from the council (see app. I for allocations for the mixed-use fisheries in the Gulf of Mexico council region). Council staff said most of the council's allocations were made based on estimates of each sector's historical landings. The council has not established allocations for most mixed-use fish stocks in the region because allocations for these stocks have not been warranted, according to council staff. Council staff said the council generally considers establishing allocations when stakeholders identify issues, or if new information such as a stock assessment becomes available and indicates that allocations may be needed to help manage a fish stock. In the absence of such information, the Gulf of Mexico council manages the fish stocks with other methods, for example, seasonal closures or trip or bag limits, which establish the number of fish that can be legally taken in a specified period. 
As of December 2019, the Gulf of Mexico council had revised allocations for three mixed-use fish stocks, as shown in table 4. For example, the council revised the allocation for red grouper in 2008 to increase the recreational sector’s allocation after a stock assessment indicated the fishery had recovered from overfishing, according to a council document. In 2008, the council also revised the gag grouper allocation to increase the commercial sector’s allocation. In addition, the Gulf of Mexico council completed a fishery management plan amendment in 2015 that revised the red snapper allocation by increasing the recreational sector’s percentage. However, after the Secretary of Commerce approved the amendment in 2016, a U.S. District Court vacated the amendment in 2017, and the council returned to the initial allocation established for red snapper. Gulf of Mexico council staff said the council has not identified a need to revise allocations for the other mixed-use fish stocks in the region with allocations. For instance, for the deep water grouper and tilefish complexes, council staff said there has been limited competition between the recreational and commercial fishing sectors and the council has not needed to revise the allocations initially established for those fish stocks in 2011. When the Gulf of Mexico council has considered revising allocations, it has done so through fishery management plan amendments, according to council staff. For example, in a 2016 fishery management plan amendment, the council considered revising the allocation for king mackerel because estimates indicated that the recreational sector had not been landing the amount of fish it was allocated. However, the council decided not to revise the allocation, citing the potential for increased recreational fishing for king mackerel in the future. 
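The allocation mechanics described in this section (each sector receives a fixed percentage of a stock's annual catch limit, with percentages typically derived from each sector's historical landings) can be illustrated with a minimal sketch. The landings figures and function names below are hypothetical illustrations, not actual council data or an NMFS tool.

```python
def allocation_from_history(landings_by_sector):
    """Derive allocation percentages from historical landings totals."""
    total = sum(landings_by_sector.values())
    return {sector: lbs / total for sector, lbs in landings_by_sector.items()}

def sector_quotas(annual_catch_limit, allocation):
    """Apply allocation percentages to an annual catch limit (in pounds)."""
    return {sector: annual_catch_limit * share for sector, share in allocation.items()}

# Hypothetical historical landings (pounds) for an illustrative stock.
history = {"commercial": 600_000, "recreational": 400_000}
allocation = allocation_from_history(history)  # commercial 60%, recreational 40%
quotas = sector_quotas(1_000_000, allocation)  # each sector's share of the limit
```

Under this scheme, a revision like the ones above amounts to recomputing the percentages from a different time series or data source, which is why the 2012 and 2013 changes to recreational catch data flowed directly into revised allocations.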
Various Sources of Information May Be Available to Help NMFS and the Councils Conduct Allocation Reviews

Through our review of agency documents and interviews with NMFS and South Atlantic and Gulf of Mexico council staff, we found that various sources of information may be available to help NMFS and the councils review allocations, but each source presents some challenges to councils for supporting allocation decisions. Councils can use these sources of information to consider the factors NMFS' 2016 guidance calls for—including fishery performance and change, economic, social, and ecological factors—when reviewing allocations. Five key sources of information that NMFS and the councils identified are trends in catch and landings, stock assessments, economic analyses, social indicators, and ecosystem models. NMFS officials said that the councils would like to incorporate these key sources into their allocation reviews, and use such information in supporting future allocation decisions. However, they said the availability, specificity, or quality of information can present challenges to using some of the information. In particular, they noted that available information other than landings is often sparse and uncertain for many fish stocks. As a result, the officials said it may be difficult for the councils to use such information as the basis for allocation decisions. NMFS is taking some steps to improve the information available, as discussed below.

Trends in Catch and Landings

NMFS' 2016 guidance states that changes in the performance or conditions of a fishery may indicate the need for updated allocations. Fishery performance and change factors include trends in catch or landings. Data on historical and current catch and landings can provide the councils with important information about demand, according to NMFS guidance, including whether a fishing sector may be catching above or below its allocation.
Generally, NMFS collects landings data for commercial fisheries from state fisheries agencies, which obtain landings data from monthly reports submitted by seafood dealers on the weight and value of fish sold at the dock. NMFS collects data to estimate recreational catch and landings using survey and interview methods under its Marine Recreational Information Program. However, recreational catch estimates present some limitations. A 2017 National Academies study noted that obtaining reliable data on recreational catch can be challenging because of several attributes of the recreational fishing sector. For example, the greater number of recreational anglers compared with the number of participants in the commercial fishing sector, and the greater number of access and landing points available to recreational anglers, make it difficult to obtain reliable data on the extent of recreational fishing, according to the study. In 2018, the Marine Recreational Information Program updated how NMFS estimates recreational catch based on a change in the survey methodology used to collect data from anglers on the Atlantic and Gulf of Mexico coasts. According to NMFS documents, updated recreational catch estimates for many fish stocks are several times higher than previous estimates because of the change in methodology. However, any implications these updated estimates may have for allocations in the South Atlantic and Gulf of Mexico may not be fully understood until NMFS incorporates the estimates into stock assessments, which were scheduled for completion between 2019 and 2021, according to NMFS documents. Further, in the Gulf of Mexico, states collect recreational catch data through their own programs, which supplement NMFS' Marine Recreational Information Program data. The states' programs use different methodologies, however, which Gulf of Mexico council staff said make it difficult to reconcile the states' recreational fisheries data with NMFS' data on catch estimates.
According to an NMFS document, some of the different methodologies the states use to design surveys have produced different estimates in years when two or more surveys were conducted side by side, making it difficult to determine the best estimates of recreational catch in the Gulf of Mexico. NMFS is taking steps to improve its recreational catch estimates. For instance, in September 2019 NMFS issued procedural guidance to help ensure that survey estimates from the Marine Recreational Information Program are based upon the best scientific information available and to promote nationwide consistency in collecting data and estimating recreational catch. NMFS is also working with Gulf of Mexico states to evaluate the critical assumptions made by each state's data collection program and to help ensure that the states' recreational catch estimates are comparable across years and with other states. As part of this effort, NMFS is calibrating recreational catch estimates from Gulf of Mexico states with data from the Marine Recreational Information Program. According to an agency official, NMFS anticipates completing this effort in May 2020.

Stock Assessments

Stock assessments are a key source of information the councils can use to review allocations given the information they provide on the status of fish stocks, according to NMFS documents. Stock assessments can range in complexity from a simple description of historical trends in catch and landings to complex assessment models that incorporate spatial and seasonal analyses in addition to ecosystem or multispecies considerations. Stock assessments are not available for all fish stocks with allocations, however. In the South Atlantic, 32 of the 50 mixed-use fish stocks with allocations do not have stock assessments, according to council staff. Of these fish stocks, NMFS plans to complete stock assessments for three—gray triggerfish, scamp, and white grunt—by 2024, according to South Atlantic council staff.
In the Gulf of Mexico, stock assessments are available for the mixed-use fish stocks with allocations, with the exception of the shallow and deep water grouper aggregate complexes. Stock assessments can provide maps of the spatial distributions of fish stocks and may show changes in those distributions over time, according to NMFS officials. Changes in a fish stock’s distribution may lead to allocation disputes, and basing allocations on historical catch may not be appropriate in such situations, according to an NMFS document. NMFS’ 2016 guidance states that the councils may need to update allocations if the distributions of fish stocks change over time for reasons such as climate change or natural fluctuations in abundance. However, NMFS officials noted that few stock assessments incorporate spatial models that would allow forecasts of future spatial distributions. To help improve the availability of such information, NMFS is conducting evaluations that will, among other things, assess changes in the distribution of fish stocks in the Gulf of Mexico and South Atlantic in response to regional climate change impacts. NMFS officials said they anticipate completion of these evaluations in 2020, which will help them forecast future spatial distributions for some fish stocks going forward. In addition, stock assessments are one source of information that the councils can use to assess each fishing sector’s expected ecological impacts, according to NMFS officials. For example, NMFS officials said that stock assessments commonly provide information on each sector’s discards—fish intentionally thrown back. Discards may be caught as bycatch—that is, incidentally to the harvest of the primary fish stock targeted. NMFS’ 2016 guidance states that councils can consider the expected impacts of each fishing sector’s allocation on bycatch and bycatch mortality. However, the availability and certainty of bycatch and discard information can vary, according to NMFS officials. 
NMFS is taking steps to improve information on bycatch and discards. For instance, beginning in 2020, the for-hire component of the recreational fishing sector is to use an electronic system to report its bycatch and discards in the South Atlantic and Gulf of Mexico, according to NMFS officials. The officials said that the commercial fishing sector will begin using this system by 2023. NMFS officials said that the agency is also developing a model that will, among other things, estimate the number of released fish caught by the recreational fishing sector in the South Atlantic and Gulf of Mexico. The officials said that the first version of the model is focused on gag grouper in the Gulf of Mexico, but that the model could be customized to any fish stock with the necessary data available. As of December 2019, NMFS officials anticipated completion of the model by late 2020 and estimated that the model would be ready to incorporate into stock assessments in fiscal year 2021 or later.

Economic Analyses

Economic analyses can provide information on the economic consequences of allocations, according to NMFS documents. NMFS' 2016 guidance notes that councils should consider if the current or preferred allocation results in the most economically efficient use of the fishery resource. According to the guidance and NMFS officials, economic efficiency refers to how well scarce resources are used in production and consumption, and is achieved when all resources are allocated to their most valuable productive use. In principle, an allocation is most economically efficient when the net economic benefits to the commercial and recreational fishing sectors in total are maximized. If net economic benefits are not maximized, then modifying the allocation may increase economic efficiency and economic benefits to the nation. NMFS officials said the agency focuses on conducting economic efficiency analyses to help guide allocation reviews.
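A toy numerical comparison may make the efficiency criterion concrete: if each sector's net benefit per pound could be estimated, candidate allocations could be ranked by total net benefit. All figures below are hypothetical, and the constant per-pound values are a deliberate oversimplification of the marginal analysis NMFS describes.

```python
def total_net_benefit(catch_limit_lbs, commercial_share, value_commercial, value_recreational):
    """Total net benefit of an allocation, assuming constant per-pound net values.

    A linear model like this is an oversimplification: real analyses use
    marginal values that change with harvest levels.
    """
    commercial_lbs = catch_limit_lbs * commercial_share
    recreational_lbs = catch_limit_lbs - commercial_lbs
    return commercial_lbs * value_commercial + recreational_lbs * value_recreational

# Hypothetical per-pound net benefits: $3/lb commercial, $5/lb recreational.
candidates = [0.7, 0.6, 0.5]  # candidate commercial shares under review
best = max(candidates, key=lambda s: total_net_benefit(1_000_000, s, 3.0, 5.0))
```

With constant values the comparison is trivial (the sector with the higher per-pound value should get more), which is precisely why the estimation challenges discussed next, rather than the arithmetic, dominate these analyses in practice.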
Economic efficiency analyses can help NMFS and the councils analyze whether a proposed change in an allocation would generate greater net economic benefits for society (that is, improve economic efficiency), compared with the current allocation, according to NMFS officials. We found the councils face challenges in using economic efficiency analyses in allocation decisions. According to NMFS officials and the agency’s published research, reliable data for estimating economic values associated with recreational fishing may not be readily available. This is because no market prices for fish caught by private anglers are available and thus, non-market valuation techniques must be used to estimate the marginal value of fish to recreational anglers. For example, a 2014 NMFS study on the economic efficiency of allocations for gag, red, and black grouper found that there are insufficient data on the recreational harvest by grouper species to generate statistically reliable estimates of economic value for each fish stock. In addition, it is difficult to estimate the economic value associated with one fish stock because recreational anglers may be willing to catch other species of fish if fishery managers limit anglers’ access to a particular stock, according to members of both councils’ socioeconomic panels. This transfer of effort from one fish stock to another makes it difficult to determine which fish stock drives the economic value that anglers associate with fishing. Further, a 2014 NMFS study on the economic efficiency of red snapper allocations indicated that a relevant market price that could be used as a benchmark for the recreational estimates is unavailable. The study found that in prior work the agency attempted to use charter fishing trip prices to address this concern, but no current data on charter prices existed to update that analysis. 
As a result, the study cautioned against comparing estimates of recreational value to that in the commercial sector, which is a key aspect of determining an economically efficient allocation. Moreover, two 2014 NMFS studies found that there are also methodological and data challenges related to obtaining economic information from the commercial fishing sector. For example, the studies raised questions about the quality of some of the price data that were used in developing estimates of economic values for the commercial sector. In addition, the studies’ estimates of the economic value of commercial fishing did not include the potential net value derived from other components of the commercial seafood supply chain, such as the processing, distribution, and sale of the fish to the end consumers, according to the NMFS studies and agency officials (see fig. 5). These NMFS studies noted that data for estimating the values from these other components are not readily available. Council staff and members, socioeconomic panel members, and fishery stakeholders we interviewed noted the importance of including the value of fish to the end consumers when considering the economic value of commercial fishing. To estimate the values of these other components of the commercial seafood supply chain, NMFS would need information about the consumer demand for fish as a function of domestic and international production, as well as information on changes in the price of the fish as they move from the dockside to retail markets, according to a separate NMFS study. NMFS officials said they are taking some steps related to improving economic analyses that the councils could consider in allocation reviews. For example, the agency is developing a manual of best practices for NMFS and council staff responsible for conducting economic analyses. NMFS officials said that they anticipate completing the manual by the end of fiscal year 2020. 
According to NMFS officials, the manual is intended to help (1) achieve consistency in analyses across the councils and regions, (2) establish an understanding of why economic analyses of allocations are important to fisheries management decisions, as well as their role in complying with various legal requirements and NMFS' policy, and (3) establish an understanding of the basic concepts and tools used in these analyses and how they are expected to be applied in practice. In addition, NMFS conducted a study on the economics of the for-hire fishing sector in federal waters of the South Atlantic and Gulf of Mexico and completed a report on the study at the end of 2019. Among other things, agency officials said the study provides data sufficient to estimate producer surplus for the for-hire sector. This information could help inform future allocation decisions, according to NMFS officials.

Social Indicators

NMFS has developed social indicators to characterize community well-being for coastal communities engaged in fishing activities, which the councils could consider in reviewing allocations, according to NMFS officials. NMFS' 2016 guidance states that the councils could consider individual, local, and regional fishing dependence and engagement, and that such analyses should include potential impacts on commercial, for-hire, private angler, and subsistence fishing, as well as fishing-related industries if data are available. NMFS' social indicators are numerical measures that describe the well-being of fishing communities in coastal counties across the United States and their level of dependence on commercial and recreational fishing. For example, one indicator describes the vulnerability of fishing communities to disruptive events, such as a change to a fishing sector's access to a fishery. Communities that are dependent on commercial fishing can be more socially vulnerable than other communities to changes, according to an NMFS document.
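Indicators of this kind are typically built by normalizing a few observable county-level measures and combining them into a single score per community. The sketch below is a generic illustration of that approach, not NMFS' actual methodology; the measure names, values, and weights are all hypothetical.

```python
def normalize(values):
    """Min-max normalize a list of raw county-level measures to [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

def engagement_index(measures, weights):
    """Weighted composite of normalized measures across communities.

    `measures` maps a measure name to raw values per community (same order);
    `weights` maps the same names to relative weights summing to 1.
    """
    normalized = {name: normalize(vals) for name, vals in measures.items()}
    n = len(next(iter(measures.values())))
    return [sum(weights[name] * normalized[name][i] for name in measures) for i in range(n)]

# Hypothetical measures for three coastal communities.
measures = {
    "dealer_revenue": [2.0e6, 0.5e6, 1.0e6],  # commercial landings revenue ($)
    "permits": [120, 30, 60],                 # fishing permits held
}
weights = {"dealer_revenue": 0.5, "permits": 0.5}
scores = engagement_index(measures, weights)  # higher score = more engaged
```

A composite built this way characterizes a whole community's dependence on fishing generally, which illustrates the limitation discussed next: nothing in such a score ties the community to any particular fish stock.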
However, NMFS' social indicators on communities' reliance on and engagement in commercial and recreational fishing are not specific to particular fish stocks. NMFS officials said this makes it challenging for councils to incorporate the information into their allocation reviews for specific fish stocks. The officials said that given current resource limitations and limited data available, it would be difficult to generate social indicators that are specific to fish stocks. In some instances, NMFS has some stock-specific information at the community level for the commercial fishing sector. But NMFS officials said that comparable information is not available for the recreational sector at the community level, making it difficult to develop fish stock-specific social indicators. NMFS officials said that the agency continues to work to update and improve social indicators relevant to recreational and commercial fisheries and is exploring other sources to provide better social data for fisheries management decisions. However, NMFS officials did not identify specific steps they plan to take to improve social indicators—such as developing information specific to particular fish stocks—so that the councils could more easily incorporate such information into their allocation reviews.

Ecosystem Models

NMFS' 2016 guidance calls for the councils to consider the potential ecological impacts of allocation alternatives in determining the allocation between different sectors or groups. However, NMFS officials said there are few ecosystem models that incorporate ecological information that could be considered in reviewing allocations, in part because limited quantifiable ecological information is available. They said that it will be difficult to use ecosystem models in allocation decisions until such models are more fully developed. NMFS officials said they are taking some steps to enhance the use of ecological and ecosystem-based information.
For instance, they noted that in 2016, NMFS released a policy to, among other things, establish a framework of guiding principles to enhance and accelerate the implementation of ecosystem-based fisheries management. Ecosystem-based fisheries management is a systematic approach to fisheries management in a geographically specified area that: contributes to the resilience and sustainability of the ecosystem; recognizes the physical, biological, economic, and social interactions among the affected fishery-related components of the ecosystem, including humans; and seeks to optimize benefits among a diverse set of societal goals, according to the policy. Among other things, this approach can help communicate the potential consequences of management decisions—including allocations—across fish stocks and improve the understanding of the potential benefits and effectiveness of management decisions, according to the policy. In 2019, NMFS issued plans for implementing ecosystem-based fisheries management in the South Atlantic and Gulf of Mexico.

South Atlantic and Gulf of Mexico Councils Developed Criteria for Initiating Allocation Reviews, but Not Processes for Conducting or Documenting Them

The South Atlantic and Gulf of Mexico councils each established criteria for initiating allocation reviews in response to NMFS' 2016 guidance, but neither council has developed processes to guide how they will conduct or document their allocation reviews. The Gulf of Mexico council has taken initial steps to develop a process for how it will review allocations, and staff from both councils said they are waiting for our report to inform their next steps on developing processes for conducting allocation reviews in the future.
Both Councils Established Criteria for Initiating Allocation Reviews

The North Pacific council plans to review allocations when the council reviews a fishery performance report. The four councils also identified public input as a potential allocation review trigger, but they did not specify what threshold of public interest would trigger a review. The remaining two councils—the Western Pacific and Caribbean—do not have allocations subject to National Marine Fisheries Service (NMFS) policy requiring councils to establish allocation review criteria, according to NMFS officials.

The South Atlantic council's policy also established time-based triggers as secondary criteria for initiating allocation reviews. Its policy states that the council will review allocations not less than every 7 years if one of the conditions identified in the policy has not already triggered a review. The policy also states that once a review occurs, the next one will be automatically scheduled for 7 years later. In contrast, the Gulf of Mexico council's April 2019 policy established time-based triggers as its primary criteria for initiating allocation reviews. Specifically, its policy indicates time intervals of 4 to 7 years for reviewing allocations, depending on the particular fish stock, and identifies the planned month and year for beginning each review. The council's policy also identified public interest as a secondary allocation review trigger but did not specify thresholds for the level or type of public input that would trigger an allocation review. According to the policy, the council is to consider relevant social, economic, and ecological conditions as an intermediate step before determining whether public interest will trigger a review. According to NMFS' 2016 guidance, periodic review of allocations on a set schedule is in several respects the most simple and straightforward criterion for such a review—it is unambiguous and less vulnerable to political and council dynamics.
The guidance also states that time-based triggers for initiating allocation reviews might be most suitable for fisheries where the conflict among sectors or stakeholder groups makes the decision to simply initiate a review so contentious that use of alternative criteria is infeasible. In such a situation, a fixed schedule ensures that periodic reviews occur regardless of political dynamics or specific fishery outcomes, according to the guidance. However, the guidance also indicates that, compared with alternative approaches, adherence to a fixed schedule may be less sensitive to other council priorities and the availability of time and resources to conduct such reviews, which could potentially lead to significant expenditures. Therefore, given the inflexible nature of time-based triggers, the guidance recommends that they be used only in those situations where the benefit of certainty outweighs the costs of inflexibility. The South Atlantic and Gulf of Mexico councils’ policies laid out planned schedules for their respective allocation reviews, which both councils adjusted after issuing their policies. Table 5 shows both councils’ plans for allocation reviews as of December 2019. For example, the Gulf of Mexico council’s policy states that it plans to review the red grouper allocation in 2026. However, in response to the completion of an updated stock assessment for red grouper in July 2019, the council directed its staff in October 2019 to begin work on a fishery management plan amendment to update the red grouper allocation, according to a council document. The stock assessment for red grouper included the Marine Recreational Information Program’s updated estimates for recreational landings. The updated estimates approximately doubled previous estimates of recreational landings, according to a council newsletter. 
Council staff said that applying these updated estimates to the time series the council had used to establish the red grouper allocation could result in a percentage shift of the allocation to the recreational fishing sector. As a result, the council decided to begin review of the red grouper allocation sooner than the policy’s scheduled 2026 time frame, according to the staff. In addition, we found that the councils’ planned allocation review schedules may affect their workload and other priorities, but it is not clear to what extent. NMFS’ 2016 allocation guidance states that the councils’ allocation review processes should include consideration of current council priorities, other actions under deliberation, and available resources. NMFS officials and council staff expressed concern that the councils’ planned schedules—as identified in their April and July 2019 policies—may negatively affect the workloads and other priorities of NMFS’ social scientists, economists, and data analysts and council staff. For instance, staff from both councils said the planned allocation review schedules will increase their workloads and, depending on the nature and substance of how those reviews are conducted, could take resources away from other council activities and lead them to reprioritize or delay those activities. One council’s staff also noted that the council members have a difficult time keeping up with existing workloads. NMFS officials and council staff said that factors that may affect these types of costs include the complexity of the analyses, the number of NMFS or council staff involved in the process, and the degree of public interest. Fishery management plan amendments that establish or revise allocations can be controversial, and will likely have more public hearings and opportunity for public comment than other types of amendments, according to NMFS officials and council staff. 
NMFS officials and South Atlantic and Gulf of Mexico council staff said they have not tracked costs of establishing, reviewing, or revising allocations. The councils often make allocation decisions concurrently with other management actions, making it difficult to isolate costs. Further, NMFS officials stated the councils' accelerated schedules as of December 2019, as shown in table 5, will exacerbate these concerns. These schedules include starting reviews for 50 allocations in the South Atlantic between 2019 and 2026, assuming no conditions trigger earlier reviews, and reviews for 10 allocations in the Gulf of Mexico between 2019 and 2026. One NMFS official said that any additional workload for economists and social scientists in the Southeast Fisheries Science Center is difficult to anticipate because it will depend on the type of information the councils would like to use for the reviews and whether additional studies may be needed or data collected. Another NMFS official stated that the regional office will shift priorities from less important tasks and gain efficiencies where possible to accommodate the planned allocation reviews.

Neither Council Has Developed a Process for How to Conduct or Document Allocation Reviews, Although the Gulf of Mexico Council Began Taking Steps to Develop One

The South Atlantic and Gulf of Mexico councils have not developed processes for how they will conduct or document their allocation reviews to implement NMFS' 2016 policy and related guidance, although the Gulf of Mexico council has begun taking steps to do so. As noted, NMFS policy calls for a multi-step process for reviewing and potentially revising fisheries allocations. Specifically, once an allocation review trigger has been met, NMFS policy calls for an allocation review, after which the councils may maintain existing allocations or evaluate allocation options through a fishery management plan amendment.
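The multi-step sequence NMFS policy calls for (check whether a trigger has been met, conduct a review, then either maintain the allocation or evaluate options through a plan amendment) can be sketched as a simple decision flow. The 7-year default interval mirrors the South Atlantic policy described above, but the functions themselves are an illustrative abstraction, not an NMFS or council tool.

```python
from datetime import date

def review_due(last_review: date, today: date, interval_years: int = 7,
               condition_triggered: bool = False, public_interest: bool = False) -> bool:
    """Return True if an allocation review should be initiated for a stock."""
    # Time-based trigger: at least `interval_years` since the last review.
    time_based = today >= last_review.replace(year=last_review.year + interval_years)
    return time_based or condition_triggered or public_interest

def next_step(review_due_now: bool, review_finds_change_needed: bool) -> str:
    """Map the review outcome to the policy's two possible results."""
    if not review_due_now:
        return "no action"
    if review_finds_change_needed:
        return "initiate fishery management plan amendment"
    return "maintain existing allocation (document the review)"
```

The third branch is the one this report highlights: when a review concludes without a revision, no amendment record is generated, so documenting the review is a step the councils would have to build into their own processes.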
NMFS guidance states that the councils should develop a structured and transparent process for conducting allocation reviews, including consideration of current council priorities, other actions under deliberation, and available resources. In April 2019, the Gulf of Mexico council began taking steps to develop an allocation review process, according to council documents. Specifically, the Gulf of Mexico council convened an allocation review workgroup consisting of staff from the council and from NMFS’ Southeast Regional Office and Southeast Fisheries Science Center. The council expects the workgroup to propose draft allocation review procedures, including identifying data sources that would be needed to conduct allocation reviews, according to a council document. The workgroup met in June and July 2019 and discussed these topics and other potential proposals, such as establishing a tiered system for allocation reviews that would involve different levels of analysis for different tiers of reviews, according to council documents. Council staff said the workgroup plans to next meet after the issuance of our report to finalize a proposal for developing an allocation review process for the council to consider. However, the council has not indicated what actions it will take, if any, regarding the workgroup’s proposal; instead, the council will determine its course of action after reviewing this report, according to council staff. The South Atlantic council postponed discussion of defining or documenting its allocation review process until March 2020, according to council staff and members, to review our report before deciding any next steps. At the council’s June 2019 meeting, the council chair questioned the need for developing an allocation review process through policy. For instance, the chair cited concerns that the council may be continuously developing exceptions to such a policy to accommodate fishery-specific issues or other unique circumstances. 
The chair also stated that aside from establishing criteria for initiating allocation reviews, NMFS’ guidance does not require the councils to take other actions related to developing allocation review processes. NMFS officials said that the agency’s 2016 guidance recommending that the councils develop a structured and transparent process was not intended to require the councils to develop a separate policy or documented process for conducting allocation reviews. NMFS officials said that the agency’s operational guidelines for processes under the Magnuson-Stevens Act and associated regional operating agreements with the councils lay out the key requirements and processes guiding development, review, and implementation of fishery management plans and plan amendments, which would include actions related to allocations. The officials further explained that in developing the 2016 allocation policy, they intended that allocation reviews be conducted through the processes identified in the agency’s operational guidelines and regional operating agreements with the councils, which allow the councils flexibility to factor in their own needs. However, the operational guidelines and regional operating agreements for the South Atlantic and Gulf of Mexico councils apply to the fishery management plan and amendment process overall, and they do not specifically address allocations. The goals of the operational guidelines include promoting a timely, effective, and transparent public process for development and implementation of fishery management measures, and the guidelines note that the regional operating agreements are meant to make council procedures and processes transparent. The guidelines and agreements, however, do not lay out processes the councils are to follow in reviewing allocations apart from developing fishery management plans or plan amendments. 
As noted in NMFS’ 2016 policy and guidance, the councils may conduct allocation reviews separate from the fishery management plan amendment process. Moreover, the regional operating agreements are not intended to limit or prevent the councils’ use of additional processes in response to specific management needs, according to these documents and the operational guidelines, and the Gulf of Mexico council has taken initial steps in developing an allocation review process as previously described. Based on the framework for internal controls established by the Committee of Sponsoring Organizations of the Treadway Commission, documented policies and processes can be more difficult to circumvent, can be less costly to an organization if there is turnover in personnel, and can increase accountability. The framework also states that when subject to external party review, policies and processes would be expected to be formally documented. Among other things, documented processes—according to the framework—promote consistency; assist in communicating the who, what, when, where, and why of internal control execution; enable proper monitoring; and provide a means to retain organizational knowledge and mitigate the risk of having the knowledge within the minds of a limited number of individuals. The 2012 report commissioned by NMFS to review fisheries allocation issues found that allocation reviews had not been done in a regular, consistent manner and stated that this makes it harder for stakeholders to understand the reviews as well as the process for conducting them. Similarly, stakeholders we interviewed indicated that a clear process for conducting allocation reviews is needed and would increase their confidence in or understanding of the councils’ decisions, regardless of specific outcomes. Other stakeholders stressed the need for predictability and certainty to be able to plan critical business decisions, such as securing loans from local banks or other lenders. 
Such uncertainty may cause participants in the commercial sector to leave the fishery because they cannot secure loans or meet other business requirements, according to one stakeholder, or it may create instability that could affect the market price of fish, according to another stakeholder. By working with the councils to develop documented allocation review processes, NMFS would have better assurance that the councils carry out their upcoming allocation reviews in a structured and transparent manner, consistent with the agency’s 2016 guidance. Further, it is unclear whether or how the councils plan to document each allocation review, such as the basis for their allocation decisions, whether fishery management plan objectives are being met, and what factors were considered in each review. NMFS’ operational guidelines state that fishery management decisions must be supported by a record providing the basis for the decision. In addition, NMFS’ 2016 policy and guidance call for the councils to clearly articulate in their allocation reviews how fishery management plan objectives are or are not being met, as well as to document their rationale for determining whether any factors are unimportant or not applicable in making an allocation decision. NMFS officials and council staff said that any allocation revisions would be documented through fishery management plan amendments. However, the councils may conduct allocation reviews separate from the fishery management plan amendment process, and it is not clear whether or how the councils will document those reviews. For example, as previously noted, in the past the South Atlantic council has not formally documented the results of allocation reviews that did not lead to fishery management plan amendments that revised the allocations. 
By working with the councils to specify how they plan to document their allocation reviews, NMFS could help ensure that the councils provide a clear record of the basis for their decisions, whether fishery management plan objectives are being met, and applicable factors considered. Clear records could also help increase transparency and stakeholder understanding of the councils’ decisions, particularly in those instances when reviews are separate from the fishery management plan amendment process. Conclusions Making allocation decisions between the commercial and recreational fishing sectors can be complex and difficult, and the outcomes of those decisions may have important economic and social implications for stakeholders in each of the sectors. The South Atlantic and Gulf of Mexico councils have taken an important step in developing policies outlining criteria for initiating allocation reviews, in accordance with NMFS guidance. The Gulf of Mexico council has also taken initial steps to define how it will conduct its allocation reviews. However, neither council has developed a process for how it will conduct such reviews. By working with the councils to develop documented allocation review processes, NMFS would have better assurance that the councils carry out their upcoming allocation reviews in a structured and transparent manner, consistent with the agency’s 2016 guidance. Moreover, by working with the councils to also specify how they plan to document their allocation reviews, NMFS could help ensure that the councils provide a clear record of the basis for their decisions, whether fishery management plan objectives are being met, and applicable factors considered. 
Recommendations for Executive Action We are making the following two recommendations to the NMFS Assistant Administrator for Fisheries: The NMFS Assistant Administrator for Fisheries should work with the South Atlantic and Gulf of Mexico councils, and other councils as appropriate, to develop documented processes for conducting allocation reviews. (Recommendation 1) The NMFS Assistant Administrator for Fisheries should work with the South Atlantic and Gulf of Mexico councils, and other councils as appropriate, to specify how the councils will document their allocation reviews, including the basis for their allocation decisions, whether fishery management plan objectives are being met, and what factors were considered in the reviews. (Recommendation 2) Agency Comments and Our Evaluation We provided a draft of this report to the Department of Commerce for review and comment. In written comments (reproduced in app. II), Commerce and NOAA agreed with our recommendations and stated that NOAA’s NMFS will work to implement them to the extent possible. NOAA stated that the report accurately describes the extent to which the councils established and revised allocations for mixed-use fisheries, the key sources of information that may be available for reviewing allocations, and the extent to which the councils have developed processes to help guide such reviews. NOAA also highlighted the delicate balance that councils seek to achieve in deciding what fishery management approaches to implement to comply with the Magnuson-Stevens Act and its 10 national standards. In addition, Commerce and NOAA stated that NMFS does not have the legal authority to direct the councils to take the actions included in our two recommendations, stating that such actions are outside of legal requirements that guide council fishery management actions. 
In response, we revised the wording of our two recommendations to state that the NMFS Assistant Administrator for Fisheries should “work with,” rather than “direct,” the councils to take the recommended actions. In response to our first recommendation, NOAA stated that it would build on the recommendations in its allocation policy by working with the South Atlantic and Gulf of Mexico councils, and other councils as appropriate, to develop documented processes for conducting allocation reviews. In response to our second recommendation on specifying how the councils will document their allocation reviews, NOAA stated that it will work with the councils on consistent documentation of allocation reviews. NOAA noted that transparency in the allocation process improves with a documented process for conducting allocation reviews, and that consistent documentation of those reviews will create further transparency in the allocation process and could improve stakeholders’ understanding of the councils’ decisions. NOAA also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Commerce, and other interested parties. In addition, the report is available at no charge on the GAO website at https://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or fennella@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. 
Appendix I: Mixed-Use Fisheries Allocations in the South Atlantic and Gulf of Mexico Fishery Management Council Regions Tables 6 and 7 provide information on mixed-use fisheries allocations—privileges for catching fish between the commercial and recreational fishing sectors—in the South Atlantic and Gulf of Mexico Fishery Management Council (council) regions, respectively. Not all mixed-use fish stocks in these regions have allocations. In the South Atlantic council region, spiny lobster does not have an allocation. In the Gulf of Mexico council region, 14 of 23 mixed-use fish stocks do not have allocations. Appendix II: Comments from the Department of Commerce Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Anne-Marie Fennell, (202) 512-3841 or fennella@gao.gov. Staff Acknowledgments In addition to the contact named above, Alyssa M. Hundrup (Assistant Director), Krista Breen Anderson (Analyst in Charge), Leo Acosta, Mark Braza, Tim Guinane, Paul Kazemersky, Patricia Moye, Cynthia Norris, Dan C. Royer, Rebecca Sandulli, Kiki Theodoropoulos, and Khristi Wilkins made key contributions to this report.
Why GAO Did This Study Commercial and recreational marine fisheries—including those in the South Atlantic and Gulf of Mexico—are critical to the nation's economy, contributing approximately $99.5 billion to the U.S. gross domestic product in 2016, according to the Department of Commerce. NMFS and the councils may allocate fishing privileges for mixed-use fisheries in federal waters, but establishing and revising such allocations can be complex, in part because of concerns about equity. The Modernizing Recreational Fisheries Management Act of 2018 includes a provision for GAO to review mixed-use fisheries allocations in the South Atlantic and Gulf of Mexico. For these regions, this report examines (1) the extent to which the councils established or revised mixed-use fisheries allocations, (2) key sources of information that may be available for reviewing allocations, and (3) the extent to which the councils have developed processes to help guide such reviews. GAO reviewed NMFS and council policies and other council documents; analyzed information on allocations established and revised; compared council processes to agency guidance and internal control standards; and interviewed NMFS officials, council members and staff, and 46 stakeholders that reflected various interests. Views from these stakeholders are not generalizable. What GAO Found The South Atlantic and Gulf of Mexico regional fishery management councils, with approval from the Department of Commerce's National Marine Fisheries Service (NMFS), established and revised allocations to varying degrees for mixed-use fish stocks—fisheries with a combination of commercial and recreational fishing. Regional councils were created by statute to help manage fisheries in federal waters, including allocating—or distributing—fishing privileges, when warranted. 
Starting in 1985, the South Atlantic council established allocations, generally a percentage of allowable harvest, for 50 of its 51 mixed-use fish stocks and revised most of those at least once. The Gulf of Mexico council established allocations for nine of its 23 mixed-use fish stocks, revising three of those once. Historically, allocations have been largely based on estimates of the commercial and recreational fishing sectors' past use of the resource, according to NMFS. Key sources of information that may be available to help NMFS and the councils review allocations include trends in catch and landings (the amount of fish caught or brought to shore); fish stock assessments; and economic analyses. Each source presents some challenges in supporting allocation decisions, however. For example, NMFS works with states to estimate recreational catch, which provides information about demand, but faces difficulties generating reliable estimates. This is in part because of attributes of the recreational fishing sector, including the greater number of recreational anglers compared with commercial fishing participants. NMFS issued guidance in 2019 to promote consistency in estimating recreational catch data to help improve the quality of the information. The South Atlantic and Gulf of Mexico councils developed processes for when to initiate fish stock allocation reviews, but not for how to conduct those reviews. A 2012 report for NMFS found that reviews had been done inconsistently, and stakeholders were dissatisfied with allocation decision-making. In response, NMFS developed guidance calling for structured and transparent allocation review processes. Both councils established criteria for initiating reviews, such as time-based triggers, and as of December 2019 they had several reviews underway (see figure). 
In April 2019, the Gulf of Mexico council began convening a workgroup to propose a draft allocation review process, but has not indicated what actions it will take, if any, in response to a proposal. The South Atlantic council postponed any discussions until March 2020. As of December 2019, neither council had a documented process. Documented processes for conducting allocation reviews would provide NMFS with better assurance that the councils carry out upcoming reviews in a structured and transparent manner. What GAO Recommends GAO is making two recommendations, including that NMFS work with the councils to develop documented processes for conducting allocation reviews. The agency agreed with GAO's recommendations.
Background U.S. Missions in Afghanistan The United States currently has two primary missions in Afghanistan: the U.S.-led counterterrorism mission and the NATO-led Resolute Support mission to train, advise, and assist the ANDSF. For U.S. purposes, both of these missions are a part of Operation Freedom’s Sentinel, commanded by U.S. Forces-Afghanistan. Combined Security Transition Command-Afghanistan is the command under NATO’s Resolute Support mission that conducts the train, advise, and assist mission in Afghanistan. These efforts are carried out via the regional Train Advise Assist Commands (TAACs) that collectively cover all of Afghanistan. Specifically, Train Advise Assist Command–Air (TAAC-Air) focuses on developing and advising the Afghan Air Force. The Afghanistan Security Forces Fund The ASFF is generally a 2-year appropriation that is used to provide assistance, with the concurrence of the Secretary of State, to the security forces of Afghanistan, including the provision of equipment, supplies, services, training, facility and infrastructure repair, renovation, construction, and funding. The ASFF presently comprises four budget activity groups: Afghan National Army, Afghan National Police, Afghan Air Force, and Afghan Special Security Forces. Each budget activity group includes four sub-activity groups: sustainment, infrastructure, equipment and transportation, and training and operations. According to officials, the training and operations sub-activity group encompasses most of CSTC-A’s efforts to train the ANDSF, including the Afghan National Army. DOD Processes for Identifying Afghan National Army Training Needs and Associated Funding Requirements CSTC-A has established processes to identify capability gaps within the ANDSF, develop and select training needed to address those gaps, and identify associated funding requirements. 
To do so, CSTC-A works with various requiring activities—partner organizations, such as the Train Advise Assist Commands—to identify ANDSF training needs. CSTC-A then incorporates these needs and associated funding requirements into the ASFF budget request, typically a year or more before the training is initiated. CSTC-A Works with Its Partner Organizations to Identify ANDSF Capability Gaps and Training Needs CSTC-A has established processes to identify capability gaps within the ANDSF, develop and select training needed to address those gaps, and identify associated funding requirements for inclusion in ASFF budget justification documentation. To help execute these processes, CSTC-A has developed standard operating procedures and other guidance for planning, resourcing, and executing the ASFF. These procedures and other guidance include information on processes to validate training requirements and associated resources. CSTC-A works with various partner organizations—referred to as “requiring activities”—to identify capability gaps and training needs for the ANDSF. Requiring activities are the organizations that request the resourcing of ANDSF capability needs through ASFF. They include CSTC-A, the TAACs, and other U.S. or NATO organizations partnered with the ANDSF. According to DOD officials, a partner organization can identify capability gaps in a number of ways. For example, Train Advise Assist Command–Air, which develops and advises the Afghan Air Force, works with subject matter experts from the relevant U.S. military services and other organizations to identify potential Afghan Air Force capability gaps. Additionally, according to DOD officials, in 2015 DOD tasked the MITRE Corporation to conduct a study of Afghan Air Force capabilities. According to DOD officials, MITRE’s November 2015 study highlighted capability gaps within the cadre of Afghan Air Force fixed- and rotary-wing pilots and maintenance personnel. 
Further, officials stated that the study concluded that the training of additional pilots constituted a critical need for the Afghan Air Force. Once a capability gap has been identified, the requiring activity develops potential courses of action to address it, such as proposals to train the ANDSF to develop needed capabilities. Through CSTC-A’s procedures these proposals are validated, along with associated resources. The validation process is intended to ensure that a transparent and accountable process is followed when allocating ASFF resources to emerging requirements. For example, as part of the fiscal year 2018 budget process, TAAC-Air identified a capability gap within the Afghan Air Force and then worked with various subject matter experts to develop courses of action to address the gap. Specifically, TAAC-Air worked with personnel from the Program Executive Office for Simulation, Training, and Instrumentation (PEO-STRI), which provides simulation, training, and testing solutions for the Army and joint community. Subject matter experts from PEO-STRI provided details regarding various options for addressing the capability gap. PEO-STRI officials noted that they also provided cost estimates for delivering the solution based on historical data. According to PEO-STRI officials, this was a highly interactive process entailing frequent formal and informal discussions among multiple organizations to develop the most effective solution for pilot training for the Afghan Air Force. Once details and cost estimates were solidified, the requirement owner presented them to a Council of Colonels, an officer group responsible for requirement validation for training needs, among other capability needs. The requirement was then taken to the General Officer Steering Committee, which votes to validate the requirement and approve the proposed solution. 
CSTC-A Process Incorporates Validated Training Needs into ASFF Budget Request CSTC-A’s process incorporates validated training needs and their associated funding requirements as part of DOD’s annual budget process. DOD’s planning, programming, budgeting, and execution (PPBE) process, which is governed in part by DOD Directive 7045.14, along with other DOD guidance, is conducted under four phases (see figure 1). Specifically, DOD uses the PPBE process to determine and prioritize requirements and allocate resources to provide capabilities necessary to accomplish the department’s missions. According to officials, as part of this process, CSTC-A provides inputs, including training requirements and associated funding needs, and later works with various contracting commands to execute appropriated funds. In the case of ASFF, CSTC-A’s guidance indicates that a proposed activity (for example, fixed-wing pilot training classes) should generally be included in the ASFF budget justification book in order to later use ASFF funds for that activity. To do so, CSTC-A’s Program and Analysis Division develops and incorporates the requests from requirement owners for funding for the operations, sustainment, and development of the ANDSF into the ASFF budget request and associated budget justification materials. The Program and Analysis Division works with the requirement owners to write a narrative describing their proposed activity and associated cost estimate for delivering the activity. The division then works with the OUSD-Comptroller to consolidate requirements for all budget activities and sub-activity groups into a single draft budget justification book. One significant aspect of this process is that many of the key decisions, and associated cost assumptions, on how CSTC-A and TAAC-Air (in the case of Afghan pilot training) intend to carry out ASFF training efforts are proposed 18-24 months before the training will occur. 
For example, as shown in figure 2, preparation of the ASFF budget justification book for fiscal year 2019 began in the summer of 2017. The budget justification book was then submitted to the OUSD-Comptroller in December 2017, and funds were not available for use until the start of the new fiscal year, in October 2018. These time frames can present a challenge in developing accurate cost estimates for CSTC-A, given that situations in Afghanistan can change significantly in the time between CSTC-A’s developing a proposed capability requirement and associated cost estimate for inclusion in the ASFF budget justification book and the execution of that requirement, according to officials. If conditions change, officials noted, the proposed actions and associated cost estimates for a given requirement may no longer be appropriate or accurate. For example, the Special Inspector General for Afghanistan Reconstruction reported in January 2019 that CSTC-A may have overestimated the cost for UH-60 Blackhawk rotary-wing pilot training by as much as $1 billion over a 7-year period—attributing the overestimation mainly to unrealistic assumptions regarding student or pilot attrition and the English language program. In the case of initial entry fixed-wing pilot training classes, CSTC-A’s original proposal, as reflected in its budget justification book, was to have classes of 25 students. However, during the implementation of this training, the class size fell to 12 students because not all 25 students achieved the required English language proficiency, and one student dropped out of the program. Consequently, the resulting class was half the projected size underlying the estimated funding requirement, which resulted in funds being excess to CSTC-A’s actual need. 
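The budget arithmetic behind this kind of shortfall can be sketched in a few lines. Only the class sizes (25 planned, 12 actual) come from this report; the per-student cost figure and the simple per-student funding model below are hypothetical illustrations, not CSTC-A's actual estimating method.

```python
# Illustrative sketch of why a per-student cost estimate leaves funds
# excess to need when enrollment falls short. Only the class sizes
# (25 planned, 12 actual) come from the report; COST_PER_STUDENT is a
# hypothetical placeholder, and real ASFF estimates involve many more
# factors (attrition assumptions, English language training, etc.).

COST_PER_STUDENT = 900_000  # hypothetical per-student training cost, in dollars


def estimated_requirement(planned_students: int) -> int:
    """Funding requested, based on the planned class size."""
    return planned_students * COST_PER_STUDENT


def excess_funds(planned_students: int, actual_students: int) -> int:
    """Funds excess to actual need once the real class size is known."""
    return (planned_students - actual_students) * COST_PER_STUDENT


requested = estimated_requirement(25)
excess = excess_funds(25, 12)
print(f"Requested: ${requested:,}; excess to need: ${excess:,}")
```

Under this simplified model, roughly half the requested funding would go unobligated, which is the situation in which CSTC-A would seek to reprogram the money within the same sub-activity group before the appropriation expires.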
CSTC-A officials acknowledged the challenges they faced in filling classes with the expected number of students, adding that they had purposely built in significant flexibility in the training approach to be able to adjust to the realities of the ANDSF’s ability to generate qualified students. According to CSTC-A officials, the number of English-proficient Afghan student candidates varies from year to year. For cases like these, where CSTC-A requested more funding than it ultimately obligated, in some circumstances DOD may reprogram the unobligated amounts within the same appropriation account, or may transfer it to other appropriation accounts, if there is authority to do so. Otherwise, time-limited appropriations, such as the ASFF, expire after their period of availability and are unavailable for new obligations. According to CSTC-A officials, in cases where they have unobligated funding due to changing conditions such as smaller-than-expected class sizes, they try to reprogram that money for related needs within the same sub-activity group in the ASFF budget prior to expiration. For example, if certain Afghan Air Force training costs are lower than expected, the money could be reprogrammed for other efforts within the Afghan Air Force training and operations sub-activity group. CSTC-A’s Process for Developing and Overseeing ASFF Training Contracts ASFF-funded training contracts for the ANDSF are developed and executed through a process that is modeled on the U.S. government’s foreign military sales process. Until April 2019, ASFF-funded orders to train the Afghan National Army were generally filled under a contract with a single provider. At that point, DOD began to transition to an approach using several contracts, including one with multiple providers. 
ASFF-Funded Training Contracts Are Developed and Executed Under a Process Modeled on the Foreign Military Sales Program ASFF-funded training contracts are developed and executed under a process modeled on the U.S. government’s foreign military sales (FMS) program, referred to as “pseudo-FMS.” As indicated by CSTC-A guidance, these pseudo-FMS procurements are FMS-like cases and use U.S. funds to purchase items, services, and training for ANDSF capability requirements. The process is outlined in the Security Assistance Management Manual, which provides DOD-wide guidance to DOD components engaged in the management or implementation of DOD security assistance and security cooperation programs over which the Defense Security Cooperation Agency has responsibility. We have previously reported that while the many steps of the process used for FMS and pseudo-FMS cases can be grouped in different ways, they fall into five general phases: assistance request, agreement development, acquisition, delivery, and case closure. First, CSTC-A works with the resource coordinator, requirement owner, and other elements to develop a Memorandum of Request, and it submits that memorandum to the implementing agency and the Defense Security Cooperation Agency, requesting assistance to contract for ANDSF needs using ASFF funds. For example, when developing the Memorandum of Request for initial entry fixed-wing pilot training, CSTC-A worked with TAAC-Air, the requirement owner, to identify details regarding the agreed-upon training solution. Officials noted that CSTC-A also worked with the subject matter experts from PEO-STRI to develop the independent government cost estimate. Second, as described by officials, the agreement development phase begins with the Defense Security Cooperation Agency’s receiving the Memorandum of Request. 
The Defense Security Cooperation Agency opens a case and assigns it to an implementing agency—that is, the military department or defense agency responsible for overall management of the actions that will result in the delivery of materials or services. According to contracting officials, the implementing agency for training foreign military ground and air forces outside of the United States—such as the Afghan National Army—is the U.S. Army Security Assistance Command. The implementing agency then works with the appropriate Program Executive Office to develop the Letter of Offer and Acceptance—which serves to document the transfer of articles and services to the U.S. government requesting authority. For example, for the out-of-country fixed-wing pilot training requirement, contractors delivered the training, and the appropriate implementing agency was PEO-STRI, according to officials. Once the Letter of Offer and Acceptance is completed and signed by the implementing and requesting agencies, it is reviewed and approved by the Defense Security Cooperation Agency and Department of State, as appropriate. Third, the Program Executive Office works with the appropriate contracting command to acquire the requested defense goods or services as part of the acquisition phase. According to contracting officials, the contracting command solicits and receives bids from contractors and selects the best value option (including price plus deliverables). Fourth, the contractor delivers the required good or service. According to officials, the relevant Program Executive Office is responsible for monitoring the contractor’s performance by ensuring compliance with applicable contract clauses. Fifth, following contract completion and payment of outstanding obligations, the implementing agency initiates case closure with the Defense Security Cooperation Agency. 
Training Requirements for the Afghan National Army Were Generally Provided by a Single Vendor Prior to April 2019, but Are Now Provided by Multiple Vendors Prior to April 2019, ASFF-funded training requirements for the Afghan National Army, including out-of-country fixed-wing pilot training, were generally executed under a single award indefinite delivery, indefinite quantity contract known as the Warfighter Field Operations Customer Support (WFF) contract. The WFF contract provided integrated training system sustainment and training services world-wide for the U.S. Army, Marine Corps, Navy, Air Force, and Special Operations Command. According to Army contracting officials, WFF was the most expedient way to contract for various types of training for the Afghan National Army due to the contract’s broad scope and $11.2 billion ceiling. These officials said it provided the capacity and flexibility needed to fulfill the Afghan National Army’s requirements and time frames in a streamlined way because the competition and award process had already occurred, enabling officials to move directly to awarding task orders for support. However, while the single award indefinite delivery, indefinite quantity contract streamlined the process for contracting ANDSF training, it limited DOD’s ability to negotiate some costs. According to contracting officials, only certain types of costs could be negotiated, such as those associated with housing, travel, and the number of advisors supporting the training. The officials stated that other costs were established as a per-unit cost at the time of the contract award. In addition, various administrative fees were established when the WFF contract was awarded in 2007 and could not be renegotiated, according to contracting officials. As a result, any task orders under this contract, including those to train the Afghan National Army, had to include these administrative fees and established labor wages. 
To illustrate the various costs associated with the Afghan Air Force training program, we reviewed documentation associated with training provided under the WFF contract. One training program cost $12.1 million for the delivery of an 86-week fixed-wing pilot training course (from February 2018 through September 2019) for 13 Afghan Air Force students at the Fujairah Aviation Academy in the United Arab Emirates. The pilot training was conducted by contractors and comprised aviation English language training, theory of flight, basic and advanced instrument ground school, advanced flight instrumentation, and simulation training for the Afghan Air Force Cessna C-208 Caravan aircraft. The $12.1 million total included amounts paid to the contractor and administrative charges to cover the costs of entities within the U.S. government. The costs associated with the training are shown in figure 3 below. The largest cost factor in this task order was the cost of the flight school itself, which accounted for 68.4 percent (or $8.2 million) of the total cost, according to contracting officials. The flight school included ground school, simulation, advanced instruments, and flying hours training, and it represented a per-student cost for each of the 13 students who actually attended the training. The flight school also included the cost of housing, electronic books/manuals, and campus security, some of which were negotiable, according to officials. Other costs, such as the Defense Security Cooperation Agency 3.5 percent surcharge and contract administration services 1.2 percent surcharge, were established based upon rates current at the time of the letter of offer and acceptance. According to officials, the contractor’s profit was established at the time of award of the contract in 2007. Officials stated that the costs that could be negotiated were limited and included costs associated with travel, lodging, and adding more advisors to augment the training. 
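The fixed percentage surcharges can be illustrated with a short sketch. The 3.5 and 1.2 percent rates come from this report, but treating them as simple additive percentages of a contractor-cost base, and the base amount itself, are simplifying assumptions; actual FMS case pricing follows the Security Assistance Management Manual's rules.

```python
# Hedged sketch: stacking the Defense Security Cooperation Agency (3.5%)
# and contract administration services (1.2%) surcharges on a contractor-
# cost base. The rates come from the report; treating them as additive
# percentages of the base, and the base amount, are assumptions.

DSCA_SURCHARGE = 0.035  # DSCA administrative surcharge rate
CAS_SURCHARGE = 0.012   # contract administration services surcharge rate


def task_order_total(contractor_cost_millions: float) -> float:
    """Contractor cost plus both surcharges, in millions of dollars."""
    return contractor_cost_millions * (1 + DSCA_SURCHARGE + CAS_SURCHARGE)


# A base of roughly $11.56 million yields a total near the $12.1 million
# task-order value described above.
print(round(task_order_total(11.56), 1))  # → 12.1
```

Because these rates were fixed when the contract and the letter of offer and acceptance were established, they appear in every task order regardless of how the negotiable cost elements are adjusted.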
According to contracting officials, these limitations were not unique to this ASFF training but applied broadly to all ASFF training task orders they executed under WFF. In 2018 DOD decided to replace WFF, which was nearing expiration, with a series of new contracts. DOD has begun to transition work previously performed under WFF to these new contracts, the first of which was awarded in 2018. According to contracting officials, ASFF-funded training efforts are expected to be executed primarily under two of the new contracts – the Enterprise Training Services Contract and the Training, Instructor Operator Support Services Contract. The Enterprise Training Services Contract is a multiple award indefinite delivery, indefinite quantity contract with a total contract ceiling of $2.4 billion that was awarded to multiple contractors in June 2018. According to officials, the Training, Instructor Operator Support Services Contract is a single award indefinite delivery, indefinite quantity contract with a ceiling of $197.6 million that was awarded in July 2018. According to Army contracting officials, the contracting process for ASFF training services will include competition among multiple contractors for each task order under the Enterprise Training Services Contract. Army contracting officials stated that under a multiple-award contract, each contract holder is to be provided a fair opportunity to compete for each task order, in part to use competition to ensure that the proposed prices are fair and reasonable. According to Army contracting officials, the Enterprise Training Services Contract also affords the opportunity to negotiate more elements than previously under the WFF contract, such as labor rates or travel costs associated with training. The first training task order under the Enterprise Training Services Contract in support of Afghan forces was issued in April 2019. 
As this task order has only recently been issued, it is too early for us to comment on the efficacy of these contracts.

DOD Processes to Provide Visibility over ASFF-Funded Training Contracts

DOD has varying degrees of visibility over ASFF-funded training contracts. At the broadest level, OUSD-Comptroller and contracting officials stated that they have visibility of the overall execution of the ASFF budget, including funding associated with Afghan National Army training. For example, OUSD-Comptroller tracks and reports ASFF obligations and disbursements in monthly status-of-funds reports, known as Defense Financial and Accounting Services 1002 Reports. In addition, the Special Inspector General for Afghanistan Reconstruction tracks and reports ASFF obligations and disbursements via its Overseas Contingency Operations quarterly reports to Congress. At the individual contract level, the military services’ contracting commands, such as PEO-STRI and Army Contracting Command, develop and maintain contract files for individual ASFF-funded contracts and task orders. However, according to officials, DOD does not have a centralized system or reporting mechanism for tracking all ASFF training contracts, because the systems used by the services for managing funding and those used for contract management do not interface with each other. According to OUSD-Comptroller officials, the systems used for financial management were not designed or intended to identify ASFF funds specifically obligated for training contracts because there is no requirement for them to do so. Officials said that consequently, in the single instance in which they have had to develop a comprehensive list of all ASFF-funded training contracts, they had to work with the contracting commands at the respective military services to gather this information.
For example, to respond to congressional direction related to contracts funded with ASFF, OUSD-Policy contacted all of the military services to request a list of all training contracts funded through the ASFF under the respective services’ responsibilities, according to OUSD officials. In turn, Army contracting officials stated that they identified the requested information by using the lines of accounting fields in their contract management systems to identify those training contracts funded with ASFF. OUSD-Policy officials provided us with the resulting list of 40 contracts and task orders, totaling over $483 million in estimated contract value, but they acknowledged that the list was likely incomplete. OUSD-Policy officials who compiled the list of training contracts told us that the precision of the list was affected by inconsistent interpretations among the services of what constitutes a training contract. According to these officials, training for the Afghan National Army can also occur under procurement or maintenance contracts that have embedded training components. For example, according to officials, the Army’s National Maintenance Strategy contract provides logistic support to the Afghan National Army and includes a training component. Similarly, the Navy’s ASFF-funded ScanEagle unmanned aerial vehicle reconnaissance procurement contract includes a training component. Because these contracts are not primarily training-oriented, according to contracting officials, they were not identified under the training and operations subactivity group in the ASFF budget, and therefore would not be easily identifiable as ASFF training contracts. Despite these limitations, DOD officials stated that, given their existing systems and processes and their ability to reach out to contracting officials to obtain additional data when needed, they believe they have sufficient tools to identify most ASFF-funded training contracts.
Additionally, DOD officials stated that the congressional direction associated with ASFF-funded training was a one-time request, not a recurring task.

Agency Comments

We provided a draft of this report to DOD, and DOD responded that it would not be providing formal comments. DOD also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees and to the Secretary of Defense. In addition, the report is available at no charge on our website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Cary Russell at (202) 512-5431 or russellc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in the appendix.

Appendix I: GAO Contact and Staff Acknowledgments

In addition to the contact named above, James A. Reynolds, Assistant Director, and Jerome Brown, William Chatlos, Alfonso Garcia, Steve Pruitt, Michael Shaughnessy, McKenna Stahl, and Cheryl Weissman made key contributions to this report.
Why GAO Did This Study

The United States has made a commitment to building Afghanistan's security and governance structure in order to counter terrorist threats and create sustainable security and stability in Afghanistan. Since 2005 Congress has appropriated more than $78.8 billion for the ASFF to build, equip, train, and sustain the Afghan National Defense and Security Forces. Over that period, nearly $4.3 billion has been expended to support the training and operations of the Afghan National Army. Training requirements are primarily fulfilled through contracts. In recent years, concerns have been raised in Congress about the high costs of some of these training contracts. The Joint Explanatory Statement accompanying the Consolidated Appropriations Act, 2018, included a provision for GAO to examine the ASFF training contracts. This report describes DOD's processes to (1) identify Afghan National Army training needs and associated funding requirements; (2) develop and execute ASFF training contracts; and (3) provide visibility over ASFF training contracts. GAO reviewed DOD guidance for identifying and executing training needs, and interviewed DOD officials. GAO also reviewed documentation associated with task orders issued against an indefinite delivery, indefinite quantity contract for training completed in fiscal years 2017 through 2019 for the Afghan National Army.

What GAO Found

Combined Security Transition Command-Afghanistan (CSTC-A) has established processes to identify capability gaps within the Afghan National Defense and Security Forces (ANDSF), develop and select training needed to address those gaps, and identify associated funding requirements. CSTC-A generally includes these requirements in the Afghanistan Security Forces Fund (ASFF) budget justification book.
Many of the key decisions and associated cost assumptions on how CSTC-A and Train Advise Assist Command–Air (in the case of Afghan pilot training) intend to carry out ASFF training efforts are proposed 18-24 months before the training will occur (see figure). ASFF-funded training contracts are developed and executed under a process modeled on the U.S. government's foreign military sales program. Prior to April 2019, most ASFF-funded training requirements were filled under a single-award indefinite delivery, indefinite quantity (IDIQ) contract that supported a wide range of DOD training needs. An IDIQ contract provides for an indefinite quantity, within stated limits, of supplies or services during a fixed period. The government places orders for individual requirements. According to an Army official, that contract's broad scope and high contract value ceiling made it a highly expedient way to contract for various types of training for the ANDSF. However, contracting officials stated that using a single-award contract limited DOD's ability to negotiate some costs. At that point, DOD began to transition to an approach using several contracts, including one with multiple providers. Given that DOD executed its first task order under these new contracts in April 2019, it is too early for GAO to comment on the efficacy of this new approach. DOD has varying degrees of visibility over ASFF-funded contracts. DOD officials stated that they have visibility at the broadest level of the overall execution of the ASFF budget, including funding associated with Afghan National Army training. At the individual contract level, the military services' contracting commands maintain contract files, but the services' systems do not interface with one another.
According to DOD officials, although DOD can obtain visibility over ASFF training contracts in the aggregate, the department must work with the contracting commands at the respective military services to gather information specific to training contracts.
Background

IT systems supporting federal agencies are inherently at risk. These systems are highly complex and dynamic, technologically diverse, and often geographically dispersed. This complexity increases the difficulty in identifying, managing, and protecting the numerous operating systems, applications, and devices comprising federal systems and networks. Compounding these risks, federal systems and networks are often interconnected with other internal and external systems and networks, including the internet, thereby increasing the number of avenues of attack and expanding their potential attack surface. Without proper safeguards, computer systems are vulnerable to individuals and groups with malicious intent who can intrude and use their access to obtain sensitive information, commit fraud and identity theft, disrupt operations, or launch attacks against other computer systems and networks. Cyber-based threats to information systems can come from sources internal and external to the organization. Internal threats include errors or mistakes, as well as fraudulent or malevolent acts by employees or contractors working within the organization. External threats include the ever-growing number of cyber-based attacks that can come from a variety of sources such as individuals, groups, and countries that wish to do harm to an organization’s systems. Yet, IT systems are often riddled with security vulnerabilities—both known and unknown. These vulnerabilities can facilitate security incidents and cyberattacks that disrupt critical operations; lead to inappropriate access to and disclosure, modification, or destruction of sensitive information; and threaten national security, economic well-being, and public health and safety.
Federal Agencies Continue to Report Large Numbers of Incidents

Until fiscal year 2016, the number of information security incidents reported by federal agencies to DHS’s United States Computer Emergency Readiness Team (US-CERT) had steadily increased each year. From fiscal year 2009 through fiscal year 2015, reported incidents increased from 29,999 to 77,183, an increase of 157 percent. Changes to federal incident reporting guidelines for 2016 contributed to the decrease in reported incidents in fiscal year 2016. Specifically, updated incident reporting guidelines that became effective in fiscal year 2016 no longer required agencies to report non-cyber incidents or incidents categorized as scans, probes, and attempted access. More recently, agencies reported 35,277 incidents in fiscal year 2017 and 31,107 incidents in fiscal year 2018, as reflected in figure 1. According to US-CERT incident report data, the incidents reported in fiscal year 2018 involved several threat vectors. These threat vectors include web-based attacks, phishing attacks, and the loss or theft of computer equipment, among others. Figure 2 provides a breakdown of information security incidents by threat vector in fiscal year 2018. These incidents and others like them can pose a serious challenge to national security, economic well-being, and public health and safety, as shown by two incidents reported in fiscal year 2018: In March 2018, the Department of Justice reported that it had indicted nine Iranians for conducting a massive cybersecurity theft campaign on behalf of the Islamic Revolutionary Guard Corps. According to the department, the Iranians allegedly stole more than 31 terabytes of documents and data from more than 140 American universities, 30 U.S. companies, and five federal government agencies, among other entities. In March 2018, a joint alert from DHS and the Federal Bureau of Investigation stated that, since at least March 2016, Russian government actors had targeted U.S.
government entities and critical infrastructure sectors, including the energy, nuclear, water, aviation, and critical manufacturing sectors.

FISMA Sets Requirements for Effectively Securing Federal Systems and Information

Congress enacted FISMA 2014 to provide a comprehensive framework for ensuring the effectiveness of information security controls over information resources that support federal operations and assets and to clarify government-wide responsibilities. The act addresses the increasing sophistication of cybersecurity attacks, promotes the use of automated security tools with the ability to continuously monitor and diagnose the security posture of federal agencies, and provides for improved oversight of federal agencies’ information security programs. FISMA requires agencies to develop, document, and implement an agency-wide information security program to secure federal information systems. These information security programs are to provide risk-based protections for the information and information systems that support the operations and assets of the agency. FISMA requires agencies to comply with OMB policies and procedures, DHS binding operational directives, and NIST federal information standards and guidelines. In addition, FISMA assigns to agency inspectors general responsibility for annually assessing the effectiveness of the information security policies, procedures, and practices of the agency. FISMA directs OMB to oversee agencies’ information security policies and practices. Among other things, FISMA requires OMB to develop and oversee the implementation of policies, principles, standards, and guidelines on information security in federal agencies, except with regard to national security systems. The law also assigns OMB the responsibility of requiring agencies to identify and provide information security protections commensurate with assessments of risk to their information and information systems.
In addition, FISMA 2014 clarified and expanded DHS’s responsibilities for government-wide information security. Specifically, the act requires DHS, in consultation with OMB, to administer the implementation of agency information security policies and practices for non-national security information systems by: (1) assisting OMB with carrying out its oversight responsibilities; (2) developing, issuing, and overseeing implementation of binding operational directives; and (3) providing operational and technical assistance. Further, FISMA 2002 assigned to NIST the responsibility for developing standards and guidelines that include minimum information security requirements. FISMA also includes reporting requirements. Specifically, OMB is to report annually, in consultation with DHS, on the effectiveness of agency information security policies and practices, including a summary of major agency information security incidents and an assessment of agency compliance with NIST standards. Further, the law requires agencies to report annually to OMB, DHS, certain congressional committees, and the Comptroller General on the adequacy and effectiveness of their information security policies, procedures, and practices, including a description of each major security incident.

Federal Agencies Are Required to Use the Cybersecurity Framework to Manage Risk and to Report on FISMA Implementation

In May 2017, the President signed Executive Order 13800, which sets policy for managing cybersecurity risk as an executive branch enterprise. Specifically, the order outlines actions to be taken by federal agencies and critical infrastructure sectors to improve the nation’s cybersecurity posture and capabilities. To this end, the order states that the President will hold executive agency heads accountable for managing agency-wide cybersecurity risk and directs each executive branch agency to use the NIST cybersecurity framework to manage those risks.
In addition to requirements set in the executive order, OMB’s reporting metrics that were developed to facilitate agencies’ compliance with FISMA’s reporting requirement are aligned to the core functions outlined in the cybersecurity framework. Consequently, agencies are required to report on the effectiveness of their information security policies and practices according to the cybersecurity framework’s core functions.

NIST Framework’s Five Core Functions Are Aimed at Managing Cybersecurity Risk

The NIST cybersecurity framework is based on five core security functions:

Identify: Develop an understanding of the organization’s ability to manage cybersecurity risk to systems, people, assets, data, and capabilities.

Protect: Develop and implement appropriate safeguards to ensure delivery of critical services.

Detect: Develop and implement appropriate activities to identify the occurrence of a cybersecurity event.

Respond: Develop and implement appropriate activities to take action regarding a detected cybersecurity incident.

Recover: Develop and implement appropriate activities to maintain plans for resilience and to restore capabilities or services that were impaired due to a cybersecurity incident.

According to NIST, these five functions should be performed concurrently and continuously to address cybersecurity risk. In addition, when considered together, they provide a high-level, strategic view of the life cycle of an organization’s management of cybersecurity risk. Within the five functions, NIST identifies 23 categories and 108 subcategories of activities and controls for achieving the intent of each function. Appendix II provides a description of the cybersecurity framework categories and subcategories of activities and controls.
Inspectors General Are to Measure the Effectiveness of Agencies’ Information Security Programs Using the Cybersecurity Framework Core Functions

The Council of the Inspectors General on Integrity and Efficiency (CIGIE), in collaboration with OMB, DHS, and other stakeholders, developed a capability maturity model for agency inspectors general to assess and report on the effectiveness of their agencies’ information security programs. As described in table 1, the model identifies five maturity levels with each succeeding level representing a more advanced level of implementation. Using the five-level maturity model described above, the inspectors general are to assign a maturity-level rating for each of the five core security functions based on an assessment of their agencies’ implementation of the activities and controls associated with each function using metrics that CIGIE developed in collaboration with OMB. The inspectors general then consider the maturity level ratings of the core security functions to evaluate the overall effectiveness of their agency’s information security program. OMB instructs inspectors general to rate their agency’s information security program as effective or not effective by applying a rule of simple majority. Specifically, if three or more of the five core security functions are rated effective, the overall information security program is considered to be effective. According to this maturity model, Level 4 (managed and measurable) is the lowest level to represent an effective level of security. Therefore, if an inspector general rates three or more of the agency’s core security functions at Level 4 or Level 5, then the inspector general can consider that agency to have an effective information security program. However, the inspector general has the discretion to have a different conclusion on program effectiveness if he or she deems it appropriate to do so.
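The simple-majority rule lends itself to a short sketch. This is a minimal illustration of the rating logic only; it does not capture the inspector general's discretion to reach a different conclusion:

```python
# CIGIE maturity levels run from 1 (ad hoc) to 5 (optimized);
# Level 4 (managed and measurable) is the lowest level considered effective.
EFFECTIVE_LEVEL = 4
CORE_FUNCTIONS = ("Identify", "Protect", "Detect", "Respond", "Recover")

def program_is_effective(ratings: dict) -> bool:
    """Return True when a simple majority (3 or more) of the five
    core security functions is rated at Level 4 or Level 5."""
    effective = sum(1 for f in CORE_FUNCTIONS if ratings[f] >= EFFECTIVE_LEVEL)
    return effective >= 3

# Three functions at Level 4 or above -> effective overall,
# even though Respond and Recover lag behind.
print(program_is_effective(
    {"Identify": 4, "Protect": 5, "Detect": 4, "Respond": 2, "Recover": 3}
))  # True
```

Note that under this rule a program can be rated effective overall while two of its functions remain at low maturity, which is one reason the model leaves the final judgment to the inspector general.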
CIOs Are Required to Assess Agencies’ Progress in Implementing Capabilities Related to the Administration’s Cybersecurity-related Cross-Agency Priority Goal

Similar to the inspector general FISMA reporting metrics, OMB and DHS worked with interagency partners to develop the CIO FISMA metrics, which are intended to be used by the agencies, OMB, and DHS to track agencies’ progress in implementing cybersecurity capabilities. The CIO FISMA reporting metrics are organized around the five core security functions outlined in NIST’s cybersecurity framework. In addition, certain CIO FISMA reporting metrics represent key milestones of the administration’s IT Modernization Cross-Agency Priority (CAP) goal, which includes a cybersecurity initiative. As a result, the CIO reporting metrics allow agency CIOs, OMB and DHS to monitor progress toward meeting key milestones and targets for the CAP goal. The cybersecurity initiative within the IT Modernization CAP goal is designed to reduce cybersecurity risks to the federal government’s information systems by mitigating the impact of risks to federal data, systems, and networks. The initiative consists of three strategies that contain 10 milestones that relate to key areas within the CIO FISMA metrics—information security continuous monitoring; identity, credential, and access management; and advanced network and data protections. In addition, each of the 10 milestones has an expected level of performance, or target, for implementation, as described later in this report.

Reported Information Security Spending Varies Among the 23 Civilian CFO Act Agencies

Each year, OMB requires agencies to report how much they spend on information security. In fiscal year 2018, the 23 civilian agencies covered by the CFO Act reported spending between $9 million and almost $1.9 billion on cybersecurity- or IT security-related activities.
For these 23 agencies, their total reported security spending accounted for about 14 percent of their IT spending, with percentages for individual agencies ranging from 5 percent to 208 percent, as seen in table 2.

Security Control Deficiencies Reported at Selected Agencies Indicate Ineffective Information Security Policies and Practices

Information security reports issued by GAO, inspectors general, and CIOs indicate that information security policies and practices of the agencies we reviewed are ineffective. Specifically, information security evaluation reports that we and agency inspectors general issued during fiscal year 2018 showed that most of the 16 selected agencies did not consistently or effectively implement policies or practices related to the core security functions of the cybersecurity framework. In addition, most of these selected agencies had deficiencies in implementing the eight elements of an information security program, as defined by FISMA. Also, inspectors general reported that most of the 24 CFO Act agencies did not have effective information security programs and were not effectively implementing security controls over financial systems during fiscal year 2018. Further, agency CIOs reported that most of the 23 civilian CFO Act agencies had not met targets for implementing cyber capabilities to reduce risk.

Most of the 16 Selected Agencies Exhibited Deficiencies in All Cybersecurity Framework Core Security Functions

FISMA requires agencies and their inspectors general to report on the adequacy and effectiveness of information security policies, procedures, and practices. To facilitate meeting this reporting requirement, CIGIE, in collaboration with OMB and DHS, developed metrics that agency inspectors general are to use to report on eight security domains that align with the five core security functions—Identify, Protect, Detect, Respond, and Recover—of the NIST cybersecurity framework.
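The report's Table 3, which maps the eight domains to the five functions, is not reproduced in this text, but the alignment can be approximated as follows. Domain names here are taken from the FY 2018 inspector general FISMA reporting metrics and this mapping is a reconstruction, not a copy of the table:

```python
# Approximate alignment of the eight IG reporting domains to the five
# NIST cybersecurity framework core functions (domain names as used in
# the FY 2018 IG FISMA reporting metrics).
IG_DOMAINS_BY_FUNCTION = {
    "Identify": ["Risk Management"],
    "Protect": [
        "Configuration Management",
        "Identity and Access Management",
        "Data Protection and Privacy",
        "Security Training",
    ],
    "Detect": ["Information Security Continuous Monitoring"],
    "Respond": ["Incident Response"],
    "Recover": ["Contingency Planning"],
}

# Eight domains in total across the five functions.
print(sum(len(d) for d in IG_DOMAINS_BY_FUNCTION.values()))  # 8
```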
Table 3 illustrates how the inspector general reporting domains are related to the core security functions. Most of the 16 agencies that we reviewed had deficiencies in implementing policies and practices related to the cybersecurity framework core security functions and related domains during fiscal year 2018. Figure 3 shows the number of agencies with reported deficiencies in each of the framework’s core security functions. The Identify core security function includes the key process of risk management. NIST defines risk management as the process of identifying and assessing risk, and taking steps to reduce those risks to an acceptable level. NIST guidance specifies activities that agencies should implement to effectively identify and manage cybersecurity risks, including: establishing a risk management strategy that includes a determination of risk tolerance; identifying assets that require protection; assessing risk; and documenting plans of action and milestones (POA&Ms) to mitigate known deficiencies. Fifteen of the 16 selected agencies had deficiencies in activities associated with identifying risks. Figure 4 illustrates the number of selected agencies that had deficiencies in each of the activities.

Establishment of a Risk Management Strategy

Risk management strategies include strategic-level decisions and considerations for how senior leaders and executives are to manage risk to organizational operations and assets, individuals, other organizations, and the nation. GAO and inspectors general reports identified that 10 of the 16 selected agencies had deficiencies in developing, documenting, or implementing a risk management strategy. Specifically, nine of the 10 agencies had not developed or documented an enterprise-wide risk management strategy or process. Another agency had developed an enterprise risk management strategy but had not implemented it consistently across the agency.
Without developing or documenting a risk management strategy, agencies lack clear guidance to help them make informed decisions for managing risk. Further, if agencies do not consistently implement a risk management strategy, they can potentially hinder their efforts to effectively identify and manage risk. FISMA requires agencies to develop and maintain an inventory of major information systems operated by or under the control of the agency to support risk management activities. Further, NIST Special Publication 800-53 states that centralized inventories of hardware, software, and firmware assets should be maintained to ensure proper accountability of those assets. These inventories also should be current, complete, and accurate to ensure proper accountability. Twelve of the 16 selected agencies did not fully identify or account for their major information systems or information technology assets. One agency did not maintain a comprehensive and accurate inventory of information systems and two other agencies did not maintain a current inventory of hardware and software assets. Nine additional agencies maintained neither a comprehensive and accurate inventory of information systems nor a current inventory of software and hardware assets. If agencies do not maintain comprehensive, accurate, or up-to-date inventories of information systems or hardware and software assets, agencies cannot ensure the protection of all assets within their networks. FISMA requires agencies to develop, document, and implement an agency-wide information security program that includes periodic risk assessments. According to NIST, these assessments are to address potential adverse impacts resulting from the operation and use of information systems and the information those systems process, store and transmit. Eight of the 16 selected agencies exhibited deficiencies in conducting risk assessments.
Of the eight agencies that had deficiencies, four did not consistently perform risk assessments of their information systems; three did not fully update risk assessments subsequent to system changes; and one did not conduct a risk assessment supporting the agency’s decision to allocate resources to support mission and business processes. Without a sufficient process for conducting periodic risk assessments, agencies cannot determine, or appropriately respond to, risks to the information systems supporting the organization.

Documentation of Plans of Action and Milestones

FISMA requires agency information security programs to include a process for planning, implementing, evaluating, and documenting remedial action to address deficiencies in information system policies, procedures, and practices. In addition, NIST’s risk management framework states that agencies should implement a consistent process for developing POA&Ms using a prioritized approach to risk mitigation that is guided by a risk assessment. Further, documentation of POA&Ms should also be updated to reflect the current status of the deficiencies and, after remedial actions have been completed, agencies should test the actions to determine if they effectively addressed the deficiencies. Thirteen of the 16 selected agencies had deficiencies in their POA&M processes. Specifically, five agencies did not have an effective process for remediating vulnerabilities in a timely manner; seven other agencies did not adequately document or track the status of POA&Ms; and another agency did not assess the root cause of identified deficiencies to prioritize corrective actions based on the highest areas of risks. Additionally, one of the agencies that did not adequately document POA&Ms also did not have sufficient evidence to conclude that deficiencies were corrected even though the agency validated the remediation of the deficiency through its closure verification process.
Without sufficiently documenting POA&Ms, agencies may not sufficiently remediate information security deficiencies in a timely manner, exposing their systems to increased risks that nefarious actors will exploit the deficiencies to gain unauthorized access to information resources.

All Selected Agencies Had Deficiencies in Developing and Implementing Appropriate Safeguards to Protect Cyber Assets

Agencies are to implement appropriate safeguards associated with the following four security domains that align with the Protect core security function: configuration management; identity and access management; data protection and privacy; and security training. Each of the 16 selected agencies was deficient in developing and implementing appropriate safeguards to protect agency systems and networks. As shown in figure 5, most of the selected agencies had deficiencies in each of the four domains. NIST guidelines specify that agencies are to develop, implement, and maintain a baseline configuration; control changes to system configurations; and securely configure information systems. However, 14 of the selected 16 agencies reported weaknesses in one or more of these configuration management activities. Of the 14 agencies, nine had weaknesses in developing, maintaining, and implementing a baseline configuration for their information systems. For example, four agencies did not develop a baseline configuration for all systems or network devices. In addition, two agencies did not review or approve their baseline configurations. Further, three agencies did not consistently implement their baseline configurations. If agencies do not develop, maintain, or implement a current and comprehensive baseline of information systems and network devices, agencies cannot validate configuration information for accuracy, thereby hindering them from controlling changes made to a system. Eleven agencies did not effectively or consistently control changes to the configuration of their information systems.
Properly controlling system changes can help agencies to ensure that changes are formally identified, proposed, reviewed, analyzed for security impact, tested, and approved prior to implementation. However, six of the 11 agencies did not properly approve or test changes before they were implemented; four other agencies did not consistently implement change control activities across their organization or their information systems; and one other agency did not consistently ensure accountability and responsibility for individuals performing configuration management activities. In addition, 12 agencies did not securely configure their information systems. NIST specifies that agencies should apply software patches in a timely manner, use vendor-supported software, apply secure configuration settings, and limit system functionality to the least level needed to meet organizational requirements. However, of the 12 agencies that had deficiencies in implementing secure configurations, nine did not implement patches to address vulnerabilities or use up-to-date software that was supported by a vendor. Ten agencies also did not apply secure configuration settings to effectively enable security and facilitate the management of risk, while two agencies did not implement controls for limiting system functionality. As a result, these agencies cannot validate configuration information for their information systems and assets, detect or prevent unauthorized changes to information system resources, or provide reasonable assurance that systems are configured and operating securely and as intended. Access controls are intended to limit or detect inappropriate access to computer resources to protect them from unauthorized modification, loss, and disclosure. Such controls include logical controls that require users to validate their identity and limit the files and other resources that those validated users can access and the actions they can execute.
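The patching and vendor-support checks that NIST calls for could be automated roughly as follows; the inventory format and field names here are hypothetical, not drawn from any agency tool:

```python
from datetime import date

def unsupported_software(inventory: list[dict], today: date) -> list[str]:
    """Flag installed software whose vendor support has lapsed or that is
    missing required patches -- both conditions NIST ties to maintaining
    secure configurations. Inventory entries are illustrative, not a real feed."""
    findings = []
    for item in inventory:
        if item["support_ends"] < today:
            findings.append(f'{item["name"]}: vendor support ended')
        elif not item["patched"]:
            findings.append(f'{item["name"]}: security patches outstanding')
    return findings
```

A scan like this surfaces exactly the two conditions reported for nine of the 12 agencies above: unsupported software and outstanding patches.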
All 16 agencies that we reviewed had deficiencies in effectively implementing one or more controls associated with the identity and access management domain during fiscal year 2018. Fifteen of the 16 selected agencies did not adequately control users' access to information systems and the information residing on them. For example, seven agencies did not appropriately authorize or approve system access before access was granted, and eight agencies did not perform user access reviews to ensure that they complied with account management policy. Additionally, 11 of the 16 agencies did not properly identify and validate information system users, a process that involves enforcing strong passwords and requiring passwords to be changed periodically. In addition, 11 of the 16 agencies had deficiencies in implementing access management to ensure separation of duties, or segregating work responsibilities so that one individual does not control all critical stages of a process. Without adequate access controls, unauthorized individuals, including outside intruders and former employees, can surreptitiously read and copy sensitive data and make undetected changes or deletions for malicious purposes or personal gain. According to NIST guidance on security and privacy controls, agencies should protect data at rest and in transit on their network through implementation of cryptography and other technologies to achieve confidentiality and integrity protections over that data. In addition, NIST's guidance states that agencies should implement contingency strategies, such as conducting backups of information systems and having alternate processing and storage sites, to protect data from loss during an interruption and to resume activities after an interruption.
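Identification and validation of users, as described above, includes enforcing strong passwords and periodic password changes. A minimal policy check might look like the following sketch; the length, character-class, and 90-day age thresholds are illustrative, not values prescribed by NIST or the agencies reviewed:

```python
import re
from datetime import date

MAX_PASSWORD_AGE_DAYS = 90  # illustrative agency policy value

def password_violations(account: dict, today: date) -> list[str]:
    """Check one account against an illustrative strong-password policy:
    minimum length, mixed character classes, and periodic change."""
    pw, last_changed = account["password"], account["last_changed"]
    problems = []
    if len(pw) < 12:
        problems.append("too short")
    if not (re.search(r"[A-Z]", pw) and re.search(r"[a-z]", pw)
            and re.search(r"\d", pw) and re.search(r"[^A-Za-z0-9]", pw)):
        problems.append("missing character classes")
    if (today - last_changed).days > MAX_PASSWORD_AGE_DAYS:
        problems.append("password expired")
    return problems
```

In practice agencies increasingly pair or replace password rules with multifactor authentication; this sketch covers only the password-strength portion of user validation.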
Further, NIST guidance states that agencies should develop privacy policies, procedures, and guidance, in support of a privacy program, for safeguarding the collection, access, use, dissemination, and storage of personally identifiable information. However, 15 of the 16 selected agencies did not effectively implement controls to protect data and ensure its privacy during fiscal year 2018. Specifically, eight of the 16 agencies did not adequately implement controls for protecting information at rest and four agencies did not adequately implement controls for ensuring the integrity and confidentiality of data in transit. In addition, five of the 16 agencies did not conduct backups of information systems and five agencies did not use alternate processing sites to retrieve backups or resume essential mission/business functions. Further, the inspectors general for 14 of the 16 agencies reported that their respective agency did not effectively document or implement policies and procedures supporting the agency's privacy program. If agencies do not effectively implement controls to protect data and ensure its privacy, they may be hindered in limiting or containing the impact of a potential cybersecurity event. FISMA requires agency information security programs to include security awareness training to inform personnel of information security risks associated with their activities and of their responsibilities in complying with agency policies and procedures intended to reduce risk. In addition, FISMA requires agencies to provide role-based training to personnel with significant responsibilities for information security. Further, NIST guidance on building an IT security awareness and training program states that an awareness and training program is the means to communicate security requirements, and the information that users need to support the mission of the organization, across the agency.
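One small part of the backup controls discussed above, verifying that a backup copy is intact before relying on it for recovery, can be sketched with a recorded cryptographic digest (an illustrative technique; real contingency strategies involve much more, such as alternate processing and storage sites):

```python
import hashlib

def record_backup(data: bytes) -> str:
    """Record a SHA-256 digest when the backup is taken, so its integrity
    can be verified before the copy is relied on for recovery."""
    return hashlib.sha256(data).hexdigest()

def backup_intact(data: bytes, recorded_digest: str) -> bool:
    """Recompute the digest at restore time and compare it to the recorded one.
    Any tampering or corruption changes the digest and fails the check."""
    return hashlib.sha256(data).hexdigest() == recorded_digest
```

A digest check protects integrity only; confidentiality of data at rest would additionally require encryption, which NIST's guidance also calls for.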
Most of the selected agencies exhibited deficiencies in implementing a security training program during fiscal year 2018. Only three of the 16 selected agencies effectively implemented elements of a security training program. Of the 13 agencies that had deficiencies, 12 did not ensure that personnel received security awareness training and 10 did not ensure that personnel with significant responsibilities for information security received role-based training, including nine agencies that were deficient in providing both types of training. As a result, these agencies risk having employees or contractors who are ill-prepared to protect systems, and risk inadvertently or intentionally compromised security. Most of the Selected Agencies Had Not Effectively Developed or Implemented Controls to Detect Cyber Events and Vulnerabilities Agencies are to develop and implement controls to Detect cyber events and vulnerabilities. FISMA requires agencies to develop, document, and implement an agency-wide information security program that includes periodic testing and evaluation of effectiveness and procedures for detecting security incidents. NIST guidelines define these and other activities as part of information security continuous monitoring, including: defining an information security continuous monitoring strategy and implementing an information security continuous monitoring program in accordance with that strategy; assessing and reporting on the effectiveness of all implemented security controls; and collecting, correlating, and analyzing security-related information obtained through information system auditing. However, as shown in figure 6, agencies exhibited deficiencies in activities associated with information security continuous monitoring.
Continuous Monitoring Strategy and Program NIST's guidance on information security continuous monitoring states that defining an information security continuous monitoring strategy and developing an information security continuous monitoring program are the first two steps in creating, implementing, and maintaining information security continuous monitoring. In addition, agencies should implement the information security continuous monitoring program in accordance with the defined strategy. However, half of the 16 selected agencies did not develop an information security continuous monitoring strategy or program, or implement the information security continuous monitoring program. Specifically, five of the agencies did not fully develop an information security continuous monitoring strategy or program. In addition, while three agencies had developed, or made organizational changes to create a foundation for, an information security continuous monitoring strategy, those agencies did not consistently or effectively implement the strategy. Without a well-designed and implemented information security continuous monitoring strategy, agencies could be hindered in assuring ongoing situational awareness of information security, vulnerabilities, and threats. As stated above, FISMA requires agencies to include periodic testing and evaluation of information security policies, procedures, and practices in agency-wide information security programs. Security control assessments determine the extent to which controls are implemented correctly, operating as intended, and producing the desired outcome with respect to meeting the system requirements. Most agencies assessed the controls implemented on their systems. However, seven agencies did not consistently perform system control assessments to ensure that the controls were operating effectively, or as intended.
Further, seven agencies had not completed or implemented other activities in their security assessment and authorization process that assist agencies with ensuring that appropriate controls are implemented on an information system and that the system is authorized to operate. If agencies do not perform consistent testing of information security controls, they cannot determine whether implemented controls are appropriately designed or operating effectively. Audit Review, Analysis, and Reporting According to NIST guidance on log management, routine log analysis is beneficial for identifying security incidents, policy violations, fraudulent activity, and operational problems. As a result, log analysis supports information security continuous monitoring capabilities. However, more than half of the 16 selected agencies did not review, analyze, and report auditable events from audit logs. For example, nine agencies did not implement audit log review capabilities on their information systems. Without reviewing, analyzing, and reporting audit logs, agencies limit their ability to identify unauthorized, unusual, or sensitive access activity on their networks. Most of the Selected Agencies Exhibited Deficiencies in Developing and Implementing Controls to Respond to Detected Cyber Intrusions Agencies should have policies and practices in place to Respond to detected incidents. FISMA requires agency information security programs to include procedures for responding to security incidents in order to mitigate risks associated with such incidents before substantial damage is done. According to NIST, incident response involves rapidly detecting incidents, minimizing loss and destruction, mitigating the weaknesses that were exploited, and restoring IT services.
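Routine audit log review of the kind described above can be partially automated. The sketch below assumes a simplified "timestamp user event" log format and an illustrative threshold for flagging repeated failed logins; real log management tools handle far richer formats and event types:

```python
from collections import Counter

FAILED_LOGIN_THRESHOLD = 5  # illustrative review threshold

def flag_suspicious_logins(log_lines: list[str]) -> list[str]:
    """Scan audit log entries (assumed format: 'timestamp user event') and
    flag users with repeated failed logins -- the kind of unusual access
    activity that routine log review is meant to surface."""
    failures = Counter()
    for line in log_lines:
        _, user, event = line.split(maxsplit=2)
        if event == "LOGIN_FAILURE":
            failures[user] += 1
    return [user for user, n in failures.items() if n >= FAILED_LOGIN_THRESHOLD]
```

Correlating flags like these across systems is what turns isolated log entries into the continuous-monitoring signal NIST describes.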
An effective incident response process includes, for example: an incident handling capability that incorporates lessons learned from ongoing incident handling activities; the monitoring of incidents through documentation that includes pertinent information necessary for forensics, evaluating incident details, trends, and handling; the timely reporting of incidents with sufficient detail to allow analysis; and an incident response plan. Most of the 16 selected agencies had deficiencies in at least one of the activities associated with incident response processes, as shown in figure 7. According to NIST, agencies should have the ability to detect and analyze security incidents in order to minimize loss and destruction and mitigate the weaknesses that were exploited. In addition, agencies should incorporate lessons learned from an incident to improve existing security controls and practices. Most of the selected agencies did not report deficiencies associated with their incident handling capability, including the ability to analyze and respond to security incidents and incorporate lessons learned. However, seven agencies did not adequately implement capabilities to analyze and respond to security incidents. In addition, one of the seven agencies did not use lessons learned from prior incidents to improve incident handling. Without an effective incident handling capability, agencies have limited ability to detect and analyze security incidents to minimize destruction and mitigate exploited vulnerabilities. According to NIST, agencies should monitor and document security incidents with sufficient detail in order to effectively respond to and mitigate the risks associated with the incident. Doing so enables agencies to analyze security incidents, understand the impact of the incident, and perform analysis to identify trends and indicators of attack. 
Inspectors general for 12 of the 16 selected agencies did not identify deficiencies related to monitoring detected incidents. However, four agencies did not effectively monitor incidents. For example, one agency did not consistently document detected incidents and another agency had not implemented an automated enterprise tool for monitoring incidents. If agencies do not effectively implement incident monitoring processes, they hinder their ability to adequately analyze and respond to security incidents. FISMA requires agencies to develop, document, and implement an agency-wide information security program that includes procedures for reporting security incidents to US-CERT. In addition, NIST guidance states that agencies should have specific incident reporting requirements for reporting suspected security incidents to an internal incident reporting organization. However, 10 agencies had deficiencies in their implementation of incident reporting. While only two agencies did not clearly define incident reporting requirements, eight agencies did not effectively implement those requirements. For example, these agencies did not consistently categorize incidents or ensure timely reporting of incidents to US-CERT and internal reporting organizations. If agencies do not consistently categorize or report incidents in an accurate and timely manner, they may lack the situational awareness needed to respond to incidents appropriately. Incident response plans are an important element of ensuring that incident response is performed effectively, efficiently, and consistently throughout the agency. Among other things, NIST guidance states that incident response plans should provide a roadmap for implementing an incident response capability, describe metrics for measuring the incident response capability, and be approved.
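Timely categorization and reporting of incidents, as discussed above, is amenable to a simple automated check. The one-hour deadline and record fields below are illustrative only; actual US-CERT reporting time frames depend on the incident type:

```python
from datetime import datetime, timedelta

REPORTING_DEADLINE = timedelta(hours=1)  # illustrative; real deadlines vary

def late_reports(incidents: list[dict]) -> list[str]:
    """Identify incidents that were never categorized, or that were not
    reported to the incident reporting organization within the deadline."""
    findings = []
    for inc in incidents:
        if inc.get("category") is None:
            findings.append(f'{inc["id"]}: never categorized')
        elif inc["reported_at"] - inc["detected_at"] > REPORTING_DEADLINE:
            findings.append(f'{inc["id"]}: reported late')
    return findings
```

Running such a check against an incident register would surface exactly the two reporting failures the inspectors general cited: uncategorized incidents and untimely reports.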
Inspectors general for nine of the selected agencies did not report deficiencies related to incident response plans. However, seven agencies did not fully develop or monitor the effectiveness of their incident response plans. Specifically, five agencies had incident response plans that did not fully define requirements for implementing their incident response capability or were not approved. In addition, the other two agencies did not use performance metrics to verify the effectiveness of their incident response plan. Without an effective and comprehensive incident response plan, agencies cannot implement a coordinated approach to incident response. More Than Half of the Selected Agencies Had Not Adequately Developed or Implemented Practices to Recover from Cyber Events Agencies should be able to Recover from cyber events. FISMA requires agencies to develop, document, and implement an agency-wide information security program that includes plans and procedures to ensure continuity of operations for information systems that support the operations and assets of the agency. NIST defines contingency planning as a coordinated strategy involving plans, procedures, and technical measures that enable the recovery of information systems, operations, and data after a disruption. Contingency planning is significant to protecting electronically maintained data and an agency’s ability to process and retrieve data during and after a cyber intrusion. According to NIST, agencies should develop and document a comprehensive contingency plan or suite of related plans for restoring capabilities during and after a cyber event. The suite of related plans should include a disaster recovery plan and business impact analysis. However, 11 of the 16 selected agencies did not sufficiently plan for recovering system operations after an interruption. 
Specifically, these 11 agencies did not consistently develop contingency plans, including disaster recovery plans, or other associated documentation, such as business impact analyses, for all of their information systems. In addition, one agency did not define how the agency is to process and retrieve data during and after an interruption. Without an effective contingency planning process, agencies are exposed to the risk of interruptions to information system operations and disruption to their mission and business processes. Most of the 16 Selected Agencies Exhibited Deficiencies in Implementing Elements of an Information Security Program Controls associated with the five core security functions are related to elements of agencies' information security programs. FISMA requires each agency to develop, document, and implement an information security program that includes the following eight elements:

1. periodic assessments of risk;
2. cost-effective policies and procedures that reduce risk to an acceptable level, ensure that information security is addressed throughout the life cycle of each system, and ensure compliance with applicable requirements;
3. subordinate plans for providing adequate information security for networks, facilities, and systems or groups of information systems, as appropriate;
4. security awareness training and training for personnel with significant responsibilities for information security;
5. periodic testing and evaluation of the effectiveness of security policies, procedures, and practices;
6. a process for planning, implementing, evaluating, and documenting remedial actions to address information security deficiencies;
7. procedures for detecting, reporting, and responding to security incidents; and
8. plans and procedures to ensure continuity of operations for information systems.
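Element 8, continuity-of-operations planning, covers the contingency plans, disaster recovery plans, and business impact analyses discussed earlier. A coverage check across an inventory of systems might be sketched as follows (the document names are illustrative, not a mandated set):

```python
# Illustrative set of contingency-planning documents per NIST guidance.
REQUIRED_DOCS = {"contingency_plan", "disaster_recovery_plan",
                 "business_impact_analysis"}

def systems_missing_docs(systems: dict[str, set[str]]) -> dict[str, set[str]]:
    """For each information system, report which contingency-planning
    documents have not been developed. Systems with a complete document
    suite are omitted from the result."""
    return {name: REQUIRED_DOCS - docs
            for name, docs in systems.items()
            if REQUIRED_DOCS - docs}
```

A report like this is one way an agency could demonstrate, system by system, whether its contingency planning documentation is complete.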
As discussed earlier in this report, most of the 16 selected agencies had deficiencies related to implementing the eight elements of an agency-wide information security program. Figure 8 shows the number of selected agencies with deficiencies in implementing the eight elements of an agency-wide information security program. For example, of the 16 selected agencies: eight agencies did not effectively assess risk; 11 agencies did not have policies to ensure that CIOs carried out their role as it relates to information security; four agencies developed incomplete system security plans; 13 agencies did not ensure that personnel received security awareness training, or that personnel with security responsibilities received role-based security training; seven agencies did not consistently perform control assessments to ensure that the controls were operating effectively, or as intended; 13 agencies did not effectively implement their POA&M process to address information security deficiencies; 13 agencies did not adequately detect or respond to incidents; and 11 agencies did not comprehensively develop plans to ensure the continuity of their operations. We and inspectors general have made numerous recommendations aimed at improving information security programs and practices over the years. Until these agencies take action to address deficiencies in implementing the eight elements of an agency-wide information security program, they lack assurance that their information systems and networks are protected from inadvertent or malicious activity. Inspectors General Determined That the 24 CFO Act Agencies Generally Did Not Have Effective Information Security Policies and Practices Inspectors general determined that few agencies covered by the CFO Act of 1990 had effective agency-wide information security programs during fiscal year 2018.
Further, in agency financial statement audit reports for fiscal year 2018, inspectors general reported that they continued to identify significant deficiencies in information security controls over financial systems. As a result, inspectors general reported material weaknesses or significant deficiencies in internal control over financial reporting for fiscal year 2018. Inspectors General Indicate That Few CFO Act Agencies had Effective Information Security Programs FISMA requires inspectors general to determine the effectiveness of their respective agencies’ information security programs. To do so, OMB instructed inspectors general to provide a maturity rating for agency information security policies, procedures, and practices related to the five core security functions established in the NIST cybersecurity framework, as well as for the agency-wide information security program. For fiscal year 2018, the inspectors general for only six of the 24 CFO Act agencies reported that their agencies had an effective agency-wide information security program. However, the remaining 18 agencies were reported as having ineffective information security programs. When considering each of the five core security functions, most inspectors general reported that their agency was at Level 3 (consistently implemented) for the Identify, Protect, and Recover functions; at Level 2 (defined) for the Detect function; and at Level 4 (managed and measurable) for the Respond function, as shown in figure 9. Agency inspectors general report on the effectiveness of agencies’ information security controls as part of the annual audits of the agencies’ financial statements. 
The reports resulting from these audits include a description of information security control deficiencies related to the five major control categories defined by the Federal Information System Controls Audit Manual (FISCAM)—security management, access controls, configuration management, segregation of duties, and contingency planning. For fiscal year 2018, inspectors general identified information security control deficiencies related to most of the FISCAM general control categories for most of the 24 CFO Act agencies, as shown in figure 10. Overall, inspectors general for the 24 CFO Act agencies continued to report deficiencies in agencies' information security practices for fiscal year 2018. Specifically, during that time, 18 inspectors general designated information security as either a material weakness (6) or significant deficiency (12) in internal control over financial reporting systems for their agency. Further, inspectors general at 21 of the 24 agencies cited information security as a major management challenge for their agency for fiscal year 2018. Most of the 23 Civilian CFO Act Agencies Reported Not Fully Meeting Targets for Implementing Cyber Capabilities to Mitigate Risks OMB, in its fiscal year 2018 CIO reporting metrics, directed CIOs to assess their agencies' progress toward achieving outcomes that strengthen federal cybersecurity. To do this, CIOs evaluated their agency's performance in reaching targets for meeting key milestones of the current administration's IT Modernization Cross-Agency Priority (CAP) goal. This CAP goal includes a cybersecurity initiative to mitigate the impact of risks to federal agencies' data, systems, and networks by implementing cutting edge cybersecurity capabilities. The CAP goal's cybersecurity initiative has three strategies that include key milestones with specific implementation targets, most of which are expected to be met by the end of fiscal year 2020.
Table 4 shows the key milestones and targets related to the three strategies of the IT Modernization CAP goal's cybersecurity initiative, as well as how many agencies were meeting the targets for each of the milestones. Overall, only two of the 23 civilian CFO Act agencies met all 10 targets for the cybersecurity initiative of the IT Modernization CAP goal during fiscal year 2018; 10 agencies met seven to nine of the targets, and the remaining 11 agencies met six or fewer targets. More specifically, by strategy area: seven agencies met all four targets for the manage asset security strategy; eight agencies met all three targets for the limit personnel security strategy; and seven agencies met all three targets for the protect networks and data strategy. OMB, DHS, and NIST Acted to Fulfill Their FISMA-defined Roles, but Shortcomings Exist in Government-wide Efforts Intended to Improve Federal Information Security OMB, DHS, and NIST have ongoing and planned initiatives to support FISMA's implementation across the federal government. Specifically, OMB developed and oversaw the implementation of information security policies, procedures, and guidelines over the past 2 years. In addition, DHS oversaw and assisted government efforts that were intended to provide adequate, risk-based, cost-effective cybersecurity. Further, NIST continued to provide guidance to federal agencies to improve information security across the government. However, beyond fiscal year 2016, OMB held CyberStat meetings at significantly fewer agencies. These meetings are intended to help ensure effective implementation of information security policies and practices. In addition, OMB's guidance to agencies for preparing their fiscal year 2018 FISMA report does not sufficiently address FISMA's requirement for developing subordinate plans for providing adequate information security for networks, facilities, and information systems.
OMB Provided Guidance for Federal Information Security, but Missed a Reporting Deadline and Its Reporting Guidance to Agencies Did Not Sufficiently Address a FISMA Element FISMA requires that OMB submit a report to Congress no later than March 1 of each year on the effectiveness of agencies' information security policies and practices during the preceding year. This report is to include: a summary of incidents described in the agencies' annual reports; a description of the threshold for reporting major information security incidents; a summary of results from the annual IG evaluations of each agency's information security program and practices; an assessment of each agency's compliance with NIST information security standards; and an assessment of agency compliance with OMB data breach notification policies and procedures. As of June 2019, OMB had not issued its annual FISMA report to Congress for fiscal year 2018. OMB officials stated that the lapse in appropriations during the start of 2019 caused a delay in the report's development and release. The officials declined to provide a time frame for when they expected to issue the report. OMB Provided Numerous Guidance Documents to Agencies and Monitored Agencies' Implementation of Them FISMA requires OMB to develop and oversee the implementation of policies, principles, standards, and guidelines on information security. Since the start of fiscal year 2018, OMB has developed or proposed policies and generally monitored their implementation. Specifically: In May 2019, OMB issued policy to address federal agencies' implementation of identity, credential, and access management (ICAM).
Among other things, the policy requires agencies to (1) implement identity, credential, and access management guidelines, standards, and directives issued by NIST, DHS, and the Office of Personnel Management; and (2) harmonize their enterprise-wide approach to ICAM governance, architecture, and acquisition through activities such as designating an integrated agency-wide ICAM governance structure and establishing solutions for ICAM services that are flexible and scalable. In December 2018, OMB issued a memorandum on the high-value asset (HVA) program that (1) outlined agency expectations for establishing agency governance; (2) required agencies to take action to improve the identification of HVAs; and (3) defined agency reporting, assessment, and remediation requirements for HVAs. In March 2018, in its fiscal year 2017 FISMA report to Congress, OMB reported that agencies continued to have challenges in mitigating security vulnerabilities identified across the federal HVA landscape. In addition, OMB required agencies to report on the implementation of security controls to protect HVAs during fiscal year 2018. In October 2018, OMB issued new federal information security and privacy management guidance that required agencies to (1) report on the adequacy and effectiveness of their information security programs, (2) submit a current and prioritized list of HVAs through the Homeland Security Information Network, and (3) report major incidents to DHS, OMB, Congress, and their agency inspectors general. In addition, the guidance required agencies to ensure that DHS has authorization and the information necessary to monitor and provide technical assistance related to vulnerability scanning.
OMB Assessed and Reported on Agencies’ Implementation of Federal Information Security Requirements, but the Number of Agencies Scheduled to Participate in CyberStat Meetings Has Declined over the Last 3 Years In addition to developing and monitoring the implementation of information security policies, FISMA directs OMB to oversee agencies’ compliance with the act’s requirements to provide information security protections commensurate with the risk and magnitude of the harm resulting from unauthorized access, use, disclosure, modification, or destruction of information or information systems. During fiscal year 2018, OMB issued four reports summarizing government-wide implementation of the information security requirements, as described below: In September 2018, OMB issued an assessment of intrusion detection and prevention capabilities across the federal enterprise. In its assessment, OMB briefly described federal agencies’ implementation of intrusion detection and prevention capabilities through DHS’s EINSTEIN sensor suite. In May 2018, OMB issued its Federal Cybersecurity Risk Determination Report and Action Plan. For this report, OMB evaluated risk management assessment reports for 96 agencies and described actions that it and agencies plan to take to address government-wide cybersecurity gaps. Two major actions discussed in the report are: (1) federal agencies must consolidate their security operations center capabilities and processes, or migrate the security operations center as a service; and (2) OMB, DHS, and other federal agencies are to assist with implementing the cyber threat framework developed by the Office of the Director of National Intelligence. In March 2018, OMB issued its annual FISMA report to Congress for fiscal year 2017, which summarized the performance of 97 agencies in implementing effective information security programs and managing risk, among other things. 
In December 2017, OMB released its Report to the President on Federal IT Modernization, which outlined a vision and recommendations for the federal government to build a more modern and secure architecture for federal systems. For example, OMB described government-wide initiatives intended to improve the security of federal networks that emphasized perimeter network-based security protections, but had gaps in the application and data-level protections needed to provide complete security. To address these deficiencies, OMB recommended a layered defensive strategy in government-wide programs to provide greater defense-in-depth capabilities that are intended to prevent malicious actors from moving laterally across linked networks to access valuable information. Number of Agencies Scheduled for CyberStat Meetings Significantly Declined Since Fiscal Year 2016 OMB, in coordination with DHS, is responsible for coordinating CyberStat review meetings. As mentioned previously, FISMA requires OMB to oversee agency compliance with requirements to provide information security protections on information and information systems. One means of fulfilling this oversight responsibility is through CyberStat engagements. For these engagements, OMB, in coordination with DHS, intends to engage agency leadership on Administration priorities and perform outreach to ensure that agencies are taking the appropriate actions to strengthen their cybersecurity posture. However, since our September 2017 report on fiscal year 2016 FISMA implementation, the number of agencies that have participated in a CyberStat engagement has significantly declined. In fiscal year 2016, OMB scheduled these engagements with 24 agencies to help develop action items that address information security risk, identify areas for targeted assistance, and track performance at the agencies throughout the year.
The number of agencies scheduled to participate in an engagement decreased to five during fiscal year 2017, and decreased further to three during fiscal year 2018. As of May 2019, OMB staff in the Office of the Federal CIO informed us that the agency had not scheduled any agencies to participate in a CyberStat engagement during fiscal year 2019. According to OMB officials in the Office of the Federal CIO, updates to the CyberStat process resulted in extended engagements between DHS, OMB, and the agencies that lasted 4 to 6 weeks or more. Beginning in fiscal year 2017, according to DHS’s CyberStat concept of operations, OMB and DHS took a collaborative approach with the CyberStat process. Specifically, officials from the participating agencies, OMB’s Cyber and National Security Unit, and DHS’s Federal Network Resilience (FNR) division collaborated through these CyberStat engagements to reach a desired performance outcome at the participating agencies. DHS’s CyberStat concept of operations states that the department focuses on agency performance in key federal information security reporting, including agency FISMA reporting, DHS reports of agency compliance with binding operational directives, and reports issued by GAO and agency inspectors general. A DHS official from the department’s FNR division informed us that it uses these information security reports to make recommendations to OMB, who then decides which agencies will be scheduled to participate in a CyberStat engagement. According to OMB, the three agencies that participated in a CyberStat engagement initiated during fiscal year 2018 volunteered to do so after discussing their cybersecurity implementation issues with OMB. 
However, as discussed earlier in this report, deficiencies reported in agency fiscal year 2018 FISMA reports and information security evaluation reports issued by GAO and inspectors general for fiscal year 2018 indicate that several agencies are in need of OMB and DHS assistance to improve their information security posture. In addition, the three agencies that participated in CyberStat engagements scheduled during fiscal year 2018 saw value in changes resulting from the updated engagement process. For example, officials from the Office of the CIO (OCIO) at one of the three agencies stated that the updated process was more constructive and valuable than the prior CyberStat process that was based more on a compliance checklist. In addition, OCIO officials at all three agencies stated that the process helped improve their agencies’ information security posture and that their collaboration with OMB and DHS was beneficial to assisting with FISMA implementation. By conducting fewer CyberStat engagements with agencies, OMB loses an opportunity to assist agencies with improving their information security posture. Additionally, OMB will limit its ability to oversee specific agency efforts to provide information security protections for federal information and information systems. Inspector General Reporting Metrics Did Not Sufficiently Cover System Security Plans FISMA includes reporting requirements for OMB, agency CIOs and inspectors general. According to OMB’s FISMA reporting guidance, OMB and DHS collaborate with interagency and inspector general partners to develop the CIO and inspector general metrics, which are intended to facilitate agencies’ compliance with FISMA-related reporting requirements. These entities created separate sets of reporting metrics for agency CIOs and agency inspectors general. 
However, the inspector general reporting metrics did not specifically address the development and maintenance of system security plans, although subordinate plans, such as system security plans, are a key element of an agency-wide information security program required by FISMA. OMB officials in the Office of the Federal CIO informed us that, while they work in coordination with CIGIE to establish the reporting metrics, CIGIE is ultimately responsible for developing the metrics. According to both the published metrics and OMB’s guidance memorandum, OMB collaborates with DHS and inspector general partners to develop the IG FISMA metrics. According to representatives from CIGIE, the existence of system security plans is addressed in multiple questions within the reporting metrics, which is in alignment with OMB’s focus toward ongoing assessments and authorizations. Nevertheless, our review of the reporting metrics and supplemental evaluation guide did not identify any reference to the development and maintenance of system security plans. The lack of a defined reporting metric for addressing agency system security plans could lead to inconsistent reporting by inspectors general. Until such a metric is developed and reported on, OMB will not have reasonable assurance that inspectors general evaluations appropriately address each of the required elements of an information security program. DHS Continued to Issue Cybersecurity-related Directives and Assist Agencies by Providing Common Security Capabilities Under FISMA, DHS, in consultation with OMB, is responsible for carrying out various activities, including developing and overseeing the implementation of binding operational directives and providing operational and technical assistance to agencies. Over the last 2 years, DHS had developed four binding operational directives as of April 2019, as required by FISMA. 
These directives instructed agencies to: remove and discontinue use of all present and future Kaspersky-branded products; enhance email security by adopting domain-based message authentication, reporting and conformance (DMARC) to prevent email spoofing and web security by ensuring that all publicly accessible federal websites provide services through a secure connection; submit a current and prioritized high-value asset list to DHS and, if selected, participate in risk and vulnerability assessments; and review and remediate critical and high vulnerabilities on internet-facing systems within 15 and 30 calendar days of initial detection, respectively. We have ongoing work evaluating DHS's process to develop and oversee the implementation of binding operational directives as part of another engagement. We will report on the results of this evaluation in a separate report. DHS also provided operational and technical assistance to agencies through its Continuous Diagnostics and Mitigation (CDM) and National Cybersecurity Protection System (NCPS) programs. DHS is taking steps to deploy the CDM and NCPS capabilities to all participating federal agencies to enhance detection of cyber vulnerabilities and protection from cyber threats. Continuous Diagnostics and Mitigation program (CDM). The program is to provide federal departments and agencies with commercial off-the-shelf capabilities and tools that identify cybersecurity risks on an ongoing basis, prioritize these risks based upon potential impacts, and enable cybersecurity personnel to mitigate the most significant problems first. In December 2018, we reported that the department was in the process of enhancing the capabilities of federal agencies to automate network monitoring for malicious activity through its CDM program. In our December report, we also recommended that DHS coordinate further with federal agencies to identify training and guidance needs for implementing CDM.
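The DMARC directive described above works through DNS TXT policy records published under a domain. As a purely illustrative sketch (this is not DHS or agency tooling, and the function names and example record are our own), a DMARC record's tag=value syntax and the directive's end-state policy check could be modeled as:

```python
# Illustrative sketch only: a DMARC record is a DNS TXT string of
# semicolon-separated tag=value pairs, e.g. published at _dmarc.example.gov.
def parse_dmarc(record: str) -> dict:
    """Split a DMARC TXT record into a tag -> value dictionary."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if not part:
            continue
        tag, _, value = part.partition("=")
        tags[tag.strip()] = value.strip()
    return tags

def meets_reject_policy(record: str) -> bool:
    """The directive's end state required a 'p=reject' DMARC policy."""
    tags = parse_dmarc(record)
    return tags.get("v") == "DMARC1" and tags.get("p") == "reject"

# Hypothetical example record for a federal domain:
example = "v=DMARC1; p=reject; rua=mailto:reports@example.gov"
```

A record with a weaker policy (for example, `p=quarantine` or `p=none`) would fail this check, which is how compliance scanning against the directive's end state can be framed.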
DHS plans to complete implementation of our recommendation this fiscal year. In addition, we have an ongoing review to evaluate the extent to which selected agencies have effectively implemented CDM and to identify practices for effective and efficient implementation of the program. We will report on the results of this review separately. National Cybersecurity Protection System (NCPS). The program is one of the tools to aid federal agencies in mitigating information security threats. The system is intended to provide DHS with the capability to provide four cyber-related services to federal agencies: intrusion detection, intrusion prevention, analytics, and information sharing. In January 2016, we made nine recommendations to further improve NCPS capabilities by, among other things, developing metrics that clearly measure the effectiveness of NCPS’s efforts, including the quality, efficiency, and accuracy of actions related to detecting and preventing intrusions, providing analytic services, and sharing cyber-related information. As of June 2019, DHS had implemented six of our nine recommendations and plans to implement the remainder by the end of this fiscal year. NIST Continues to Provide Information Security Guidance to Agencies According to FISMA, NIST is to develop information security standards and guidelines, in coordination with OMB and DHS. Specifically, NIST’s Computer Security Division is responsible for developing cybersecurity standards, guidelines, tests, and metrics for the protection of federal information systems. NIST has developed information security guidelines for federal agencies. Specifically, in April 2018, NIST issued an update to its cybersecurity framework that it originally issued in February 2014. Although the cybersecurity framework was initially intended for critical infrastructure, Executive Order 13800 requires federal agencies to use the cybersecurity framework to also manage their cybersecurity risk. 
The revised framework includes a new section on cybersecurity measurement; an expanded explanation of using the framework for cyber supply chain risk management; refinements to authentication, authorization, and identity proofing policies within access controls; and a new section on using the cybersecurity framework to understand and assess an organization’s cybersecurity risk. In May 2017, NIST published draft guidance for agencies to use in implementing the cybersecurity framework. This publication is intended to provide guidance on the use of the framework in conjunction with the current and planned suite of NIST security and privacy risk management publications, such as NIST Special Publication 800-53. According to NIST officials in the agency’s Computer Security Division, the agency is in the process of finalizing the implementation guidance and plans to publish the final version by the end of fiscal year 2019. Further, in December 2018, NIST released the revised Risk Management Framework for Information Systems and Organizations (risk management framework). According to NIST, the update provides an integrated, robust, and flexible methodology to address security and privacy risk management. Among the changes in the updated version is the integration of privacy risk management into the existing information security risk management processes. In addition, the risk management framework includes direct references to the cybersecurity framework, which demonstrates how organizations that implement the risk management framework can also achieve the outcomes of the cybersecurity framework. In April 2019, NIST released revised guidance on vetting the security of mobile applications. 
According to NIST, the revised publication provides guidance for planning and implementing a mobile application vetting process, developing security requirements for mobile applications, identifying appropriate tools for testing mobile applications, and determining if a mobile application is acceptable for deployment on an organization’s mobile devices. In addition, NIST is currently developing a privacy framework to help improve agencies’ privacy risk management. In April 2019, NIST issued a discussion draft for its privacy framework. According to the discussion draft, NIST will use feedback received on the discussion draft to develop a preliminary draft of the privacy framework, which is intended to assist organizations in identifying, assessing, and responding to privacy risks. Further, the framework is intended to foster the development of innovative approaches to protecting individuals’ privacy and increase trust in systems, products and services. According to NIST officials, the agency continues to engage stakeholders, both nationally and internationally, through roundtable meetings, webinars, and public workshops to solicit stakeholder input to inform development of this framework. NIST’s website states that the agency anticipates publishing the privacy framework in October 2019. Conclusions Federal agencies continued to have deficiencies in implementing information security programs and practices. Inspectors general reported that 18 of 24 CFO Act agencies did not have effective agency-wide information security programs in fiscal year 2018. In addition, most of the selected agencies had deficiencies in the five core security functions. We and the inspectors general have made thousands of recommendations aimed at improving information security programs and practices over the years. Implementation of these recommendations will assist agencies in strengthening their information security policies and practices. 
OMB, DHS, and NIST have issued directives and guidance and implemented programs that, to some extent, have improved agencies’ security posture. However, OMB has not issued its report to Congress on the effectiveness of agencies’ information security policies and practices for fiscal year 2018, although the report was due several months ago. Further, while agencies indicated that the collaborative CyberStat engagements with DHS and OMB have aided with their FISMA implementation, the number of these engagements has declined significantly. In addition, the OMB-approved metrics that inspectors general use to evaluate FISMA implementation do not include one of the elements—system security plans—required by FISMA for information security programs. By not including this element, oversight of agencies’ information security programs has been diminished. Recommendations for Executive Action We are making the following three recommendations to OMB: The Director of OMB should submit the statutorily required report to Congress on the effectiveness of agencies’ information security policies and practices during the preceding year. (Recommendation 1) The Director of OMB should expand its coordination of CyberStat review meetings for those agencies with a demonstrated need for assistance in implementing information security. (Recommendation 2) The Director of OMB should collaborate with CIGIE to ensure that the inspector general reporting metrics include the FISMA-required information security program element for system security plans. (Recommendation 3) Agency Comments and Our Evaluation We provided a draft of this report to OMB and the 28 selected agencies for review and comment. In response, OMB provided comments orally and via email in which the office, respectively, generally concurred with our first two recommendations and concurred with a revised version of our third recommendation. 
Specifically, in oral comments, officials in the Office of the Federal Chief Information Officer noted actions that they said OMB plans to take to address our first two recommendations. According to these officials, the office plans to issue its fiscal year 2018 report to Congress on the effectiveness of agencies’ information security policies and practices in the near future. In addition, the office plans to continue to collaborate with DHS to identify information security gaps at agencies and work with agencies to address those gaps in CyberStat meetings or by other means. With regard to our third recommendation, the officials expressed concern with the wording of the recommendation in our draft report, which related to OMB updating the IG metrics. They noted that CIGIE, rather than OMB, is responsible for updating these metrics. Accordingly, we revised the recommendation to emphasize the need for OMB to collaborate with CIGIE. In a subsequent email from our OMB liaison, the office concurred with the revised recommendation. The office emphasized its plans to continue working collaboratively with the inspector general community to assist with improving and evolving the metrics to ensure that the metrics address FISMA requirements. OMB also provided technical comments, which we incorporated, as appropriate. In addition, five of the 28 selected agencies provided written responses regarding the draft report: In its response (reprinted in appendix III), the Department of Housing and Urban Development stated that it had reviewed our draft report and had no comments. In its comments (reprinted in appendix IV), the Department of Veterans Affairs stated that it remains committed to complying with the requirements of FISMA and to safeguarding the department’s systems and data, which support the delivery of care, benefits, and services to veterans. 
The department also stated that it continues to prioritize efforts to address our prior information security-related recommendations to the department. In its response (reprinted in appendix V), the Environmental Protection Agency stated that it had reviewed our draft report and had no comments. In its comments (reprinted in appendix VI), the Social Security Administration stated that it will continue to improve its cybersecurity safeguards and looks forward to receiving additional guidance to assist the agency with its efforts. In its comments (reprinted in appendix VII), the U.S. Agency for International Development stated that it has developed, documented, and implemented an agency-wide program to provide security for its information and systems, pointing out that its inspector general reported that the agency had an effective program in fiscal year 2018. The agency also cited its commitment to continuing compliance with FISMA's requirements and to safeguarding its information technology services to facilitate its mission. Further, four of the selected agencies—the Departments of Commerce, Homeland Security, and Transportation, as well as the National Science Foundation—also provided technical comments which we have incorporated in the report, where appropriate. The remaining 19 selected agencies provided emails stating that they had no comments on the report. These agencies were the Departments of Agriculture, Defense, Education, Energy, Health and Human Services, the Interior, Justice, Labor, State, and the Treasury; and the Federal Communications Commission; Federal Retirement Thrift Investment Board; General Services Administration; Merit Systems Protection Board; National Aeronautics and Space Administration; Nuclear Regulatory Commission; Office of Personnel Management; Presidio Trust; and Small Business Administration.
We are sending copies of this report to appropriate congressional committees, the Director of OMB, the heads of the CFO Act agencies and their inspectors general, the heads of four selected non-CFO Act agencies, and other interested congressional parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Gregory C. Wilshusen at (202) 512-6244 or wilshuseng@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VIII. Appendix I: Objectives, Scope, and Methodology Our objectives were to (1) describe the reported adequacy and effectiveness of selected federal agencies’ information security policies and practices and (2) evaluate the extent to which the Office of Management and Budget (OMB), the Department of Homeland Security (DHS), and the National Institute of Standards and Technology (NIST) have implemented their government-wide Federal Information Security Modernization Act of 2014 (FISMA) requirements. To describe the reported adequacy and effectiveness of federal agencies’ information security policies and practices, we analyzed our, agency, and inspectors general information security-related reports for 16 selected agencies. Our selection of 16 agencies included 12 Chief Financial Officers (CFO) Act of 1990 agencies and four non-CFO Act agencies. To select the 12 CFO Act agencies, we first ranked the 23 civilian CFO Act agencies by the number of information security systems each agency reported operating in fiscal year 2017. We then separated the agencies into large, medium, and small categories based on the number of systems they reported, and selected four agencies from each category using a random number generator. 
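The stratified selection of the 12 CFO Act agencies described above can be sketched in a few lines of code. This is an illustrative reconstruction under stated assumptions, not GAO's actual tooling: the agency names and system counts below are hypothetical placeholders, and we assume the 23 ranked agencies were split into three roughly equal strata.

```python
import random

def select_agencies(agency_systems: dict, per_stratum: int = 4, seed: int = 0) -> list:
    """Rank agencies by reported system count, split the ranked list into
    three strata (large, medium, small), and randomly pick from each."""
    ranked = sorted(agency_systems, key=agency_systems.get, reverse=True)
    third = len(ranked) // 3
    strata = [ranked[:third], ranked[third:2 * third], ranked[2 * third:]]
    rng = random.Random(seed)  # seeded only to make this sketch repeatable
    return [name for stratum in strata
            for name in rng.sample(stratum, per_stratum)]

# Hypothetical data standing in for the 23 civilian CFO Act agencies:
fake_counts = {f"Agency{i}": i * 3 + 1 for i in range(23)}
selected = select_agencies(fake_counts, per_stratum=4)  # 12 agencies total
```

Selecting four agencies from each of three strata yields the 12 CFO Act agencies; a similar simple random draw over the 73 non-CFO Act agencies would yield the remaining four.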
To select the four non-CFO Act agencies, we listed the 73 non-CFO Act agencies reported in OMB's annual FISMA report to Congress for fiscal year 2017 and then randomly selected four agencies. Although we randomly selected agencies and ensured the sample included both CFO Act and non-CFO Act agencies, due to the small number of agencies examined, results based on these agencies do not generalize beyond the agencies reviewed. The 16 agencies were the Departments of Agriculture, Commerce, Education, Housing and Urban Development, Justice, Labor, State, and the Treasury; the Environmental Protection Agency; Federal Communications Commission; Federal Retirement Thrift Investment Board; Merit Systems Protection Board; National Aeronautics and Space Administration; Presidio Trust; Small Business Administration; and the Social Security Administration. For these agencies, we analyzed, categorized, and summarized weaknesses identified in inspector general and GAO reports using the NIST Framework for Improving Critical Infrastructure Cybersecurity (cybersecurity framework) core security functions and the eight elements of information security programs required by FISMA. In addition, for the 24 agencies covered by the CFO Act, we summarized (1) the inspector general ratings of agency-wide information security programs and (2) the inspector general designation of information security as a significant deficiency or a material weakness for financial reporting systems as reported for fiscal year 2018. For the 23 civilian agencies covered by the CFO Act, we summarized fiscal year 2018 agency Chief Information Officer (CIO) reports of their agency's progress in meeting targets for implementing cyber capabilities supporting the Administration's cybersecurity-related Cross-Agency Priority (CAP) goal.
To gain insight into how agencies collect, report, and ensure the accuracy and completeness of the FISMA data they report, we analyzed documentation describing and supporting the processes at eight of the 16 selected agencies to ensure the accuracy and completeness of those data. We also interviewed officials at the eight agencies to obtain additional information on the quality controls implemented on the system used for FISMA reporting. The eight agencies selected were the Departments of Education, Justice, Labor, and the Treasury; the Federal Communications Commission; National Aeronautics and Space Administration; Presidio Trust; and the Small Business Administration. These agencies were randomly selected from the list of 16 agencies described above. Based on our assessment, we determined that the data were sufficiently reliable for the purpose of our reporting objectives. To evaluate the extent to which OMB, DHS, and NIST have implemented FISMA requirements, we analyzed the FISMA provisions to identify federal responsibilities for OMB, DHS, and NIST. We evaluated documentation of these agencies’ government-wide responsibilities to determine if the agencies were meeting FISMA requirements, including documentation obtained from their websites. Specifically, for OMB, we collected and reviewed information security-related policies and guidance that it issued since we last reported in September 2017. We also obtained reports issued by OMB to determine the extent to which the agency had overseen the policies and guidelines it issued, as well as other agency efforts for improving information security. In addition, we analyzed fiscal year 2018 inspector general and CIO FISMA reporting metrics to determine if the metrics sufficiently addressed the agency-wide information security program elements required by FISMA. 
We also interviewed OMB officials to obtain information on any actions they have planned or taken to improve the information security posture of the federal government. Further, we interviewed OMB and DHS officials to understand their process for scheduling CyberStat engagements with senior agency officials. We also interviewed officials at the three agencies that participated in a CyberStat engagement initiated during fiscal year 2018 to understand the benefits and challenges of their collaboration with OMB and DHS. For DHS, we reviewed and summarized a recently issued GAO report describing updates to the department’s Continuous Diagnostic and Mitigation Program and National Cybersecurity Protection System. We also collected and summarized the binding operational directives issued by DHS over the last 2 years. Further, we interviewed DHS officials to obtain information on any actions they have planned or taken to improve the information security posture of the federal government. For NIST, we collected and summarized the standards and guidance issued or updated by the agency since the start of fiscal year 2018. We also interviewed NIST officials and obtained information on draft standards and guidance to describe NIST’s current and planned efforts to help improve the information security posture of the federal government. We conducted this performance audit from December 2018 to July 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
Appendix II: Cybersecurity Framework The National Institute of Standards and Technology established the cybersecurity framework to provide guidance for cybersecurity activities within the private sector and government agencies at all levels. The cybersecurity framework consists of five core functions: identify, protect, detect, respond, and recover. Within the five functions are 23 categories and 108 subcategories that define discrete outcomes for each function, as described in table 5. Appendix III: Comments from the Department of Housing and Urban Development Appendix IV: Comments from the Department of Veterans Affairs Appendix V: Comments from the Environmental Protection Agency Appendix VI: Comments from the Social Security Administration Appendix VII: Comments from the U.S. Agency for International Development Appendix VIII: GAO Contacts and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the individual named above, Jeffrey Knott (assistant director), Di’Mond Spencer (analyst-in-charge), Andrew Ahn, Chris Businsky, Fatima Jahan, and Priscilla Smith made key contributions to this report.
Why GAO Did This Study For 22 years, GAO has designated information security as a government-wide high-risk area. FISMA requires federal agencies to develop, document, and implement information security programs and have independent evaluations of those programs and practices. It also assigns government-wide responsibilities for information security to OMB, DHS, and NIST. FISMA includes a provision for GAO to periodically report to Congress on agencies' implementation of the act. GAO's objectives in this report were to (1) describe the reported adequacy and effectiveness of selected federal agencies' information security policies and practices and (2) evaluate the extent to which OMB, DHS, and NIST have implemented their government-wide FISMA requirements. GAO categorized information security deficiencies as reported by 16 randomly selected agencies and their IGs according to the elements of an information security program; evaluated IG reports for 24 CFO Act agencies; examined OMB, DHS, and NIST documents; and interviewed agency officials. What GAO Found During fiscal year 2018, many federal agencies were often not adequately or effectively implementing their information security policies and practices. For example, most of the 16 agencies GAO selected for review had deficiencies related to implementing the eight elements of an agency-wide information security program required by the Federal Information Security Modernization Act of 2014 (FISMA) (see figure). Further, inspectors general (IGs) reported that 18 of the 24 Chief Financial Officers (CFO) Act of 1990 agencies did not have effective agency-wide information security programs. GAO and IGs have previously made numerous recommendations to agencies to address such deficiencies, but many of these recommendations remain unimplemented.
With certain exceptions, the Office of Management and Budget (OMB), Department of Homeland Security (DHS), and National Institute of Standards and Technology (NIST) were generally implementing their government-wide FISMA requirements, including issuing guidance and implementing programs that are intended to improve agencies' information security. However, OMB has not submitted its required FISMA report to Congress for fiscal year 2018 and has reduced the number of agencies at which it holds CyberStat meetings from 24 in fiscal year 2016 to three in fiscal year 2018—thereby restricting key activities for overseeing agencies' implementation of information security. Also, OMB, in collaboration with the Council of Inspectors General for Integrity and Efficiency (CIGIE), did not include a metric for system security plans, one of the required information security program elements, in its guidance on FISMA reporting. As a result, oversight of agencies' information security programs was diminished. What GAO Recommends GAO is making three recommendations to OMB to (1) submit its FISMA report to Congress for fiscal year 2018, (2) expand its coordination of CyberStat meetings with agencies, and (3) collaborate with CIGIE to update the inspector general FISMA reporting metrics to include assessing system security plans. OMB generally agreed with GAO's recommendations.
Background Depots and Related Organizations Depots are government-owned, government-operated industrial installations that maintain, overhaul, and repair a multitude of complex military weapons systems and equipment for the Department of Defense. These depots are essential to maintaining readiness for DOD, and they have a key role in sustaining weapon systems and equipment in both peacetime and during mobilization, contingency, or other emergency. There are 21 depots operated by the military services that are subject to the 6 percent minimum investment requirement (the “6 percent rule”)—four are Naval Shipyards, three are Navy Fleet Readiness Centers, two are Marine Corps Production Plants, three are Air Force Air Logistics Complexes, and nine are Army Depots and Arsenals. Figure 1 shows the location of these 21 depots across the United States. The depots are part of a larger, DOD-wide logistics enterprise that involves a number of different organizations. The Office of the Under Secretary of Defense for Acquisition and Sustainment is responsible for establishing policies for access to, and maintenance of, the defense industrial base, including depots. Specifically, the office is tasked with establishing policies and procedures for the management of DOD installations and environment to support military readiness with regard to facility construction, sustainment, and modernization. The Assistant Secretary of Defense for Sustainment serves as the principal assistant and advisor to the Under Secretary of Defense for Acquisition and Sustainment on material readiness. Among other responsibilities, the Assistant Secretary of Defense for Sustainment prescribes policies and procedures on maintenance, materiel readiness and sustainment support. DOD officials report that the Office of the Deputy Assistant Secretary of Defense for Materiel Readiness is responsible for maintenance policy along with the development of a strategic vision for DOD’s organic depot base. 
Finally, each service has its own logistics or materiel command component, which provides day-to-day management and oversight of the services’ depots (see fig. 2). In addition, service support commands such as Naval Facilities Engineering Command can provide expertise in project design or facility management. Depot Maintenance Process and the Effects of Maintenance Delays on Readiness and Costs Depot maintenance across the services generally involves three primary steps: planning, disassembly, and rebuilding. During each step, the depots rely on their facilities and equipment to ensure that they can conduct the large number of activities needed to repair DOD’s complex weapon systems and return them to the warfighter to be used during training and operations. Repair duration for each system varies according to the complexity of the repair and the type of use the system has experienced since the last overhaul. Because repair times vary, demands on depot facilities and equipment also vary. Delays in depot maintenance can directly affect the services’ readiness by hindering their ability to conduct training and operations using these weapon systems. For example: We reported in May 2016 that the Navy’s implementation of sustainable operational schedules—and readiness recovery more broadly—is premised on adherence to deployment, training, and maintenance schedules. However, we found that the Navy was having difficulty implementing its new schedule as intended, in part because public shipyards were challenged to complete maintenance on time. Specifically, we reported in December 2018 that in fiscal years 2012 through 2018, maintenance overruns on aircraft carrier repairs resulted in a total of 1,207 days of maintenance delay—days that ships were not available for operations—the equivalent of losing the use of 0.5 aircraft carriers each year. 
Similarly, in fiscal years 2012 through 2018, maintenance overruns on submarine repairs resulted in a total of 7,321 days of maintenance delay—the equivalent of losing the use of almost three submarines each year. We found in September 2018 that depot maintenance delays, among other challenges, limit the Navy, Air Force, and Marine Corps’ ability to keep aviation units ready by reducing the number of aircraft that are available to squadrons for conducting full spectrum training. We reported in June 2018 that the Army’s depots, which conduct reset and recapitalization to extend the life of the Patriot surface-to-air missile system, have often returned equipment to Patriot units late, which has affected unit training. Specifically, we found that of the seven Patriot battalions that underwent reset from fiscal years 2014 through 2017, only one received its equipment within 180 days in accordance with Army policy. Depot maintenance delays also cause the services to incur costs for which they receive no capability. For example, we reported in November 2018 that the Navy is incurring significant costs associated with maintenance delays on attack submarines. We estimated that from fiscal years 2008 to 2018, the Navy had spent more than $1.5 billion—in fiscal year 2018 constant dollars—to crew, maintain, and support attack submarines that provided no operational capability. This was a result of the submarines sitting idle and unable to conduct normal operations while waiting to enter the shipyards, and of delays in completing their maintenance once at the shipyard. Depot Facilities and Equipment Are Key to Efficient and Effective Depot Maintenance Our previous work has identified multiple factors that can affect depot performance, including the size and skill of the depot workforce, the condition of weapon systems upon arrival at the depot, the availability of spare parts, and the condition of the depot’s facilities and equipment, among others (see fig. 3). 
In addition, all of these factors can be affected by funding and operational considerations (such as unexpected accidents). DOD officials have stated that disruptions to funding, to include continuing resolutions, affect the ability to conduct depot maintenance. Depots rely on working and efficient facilities and equipment to complete repairs and overhauls, and DOD maintenance officials have stated that any underlying conditions – such as leaks, lack of capacity, inefficient layouts, and breakdowns – require workarounds. A facility is defined as any building, structure, or linear structure (such as a fence or railway). Equipment includes all nonexpendable items needed to outfit or equip an organization; for the depots, that includes items used by depot personnel to conduct depot-level maintenance, such as tools, test equipment, machining equipment, and test stands. We have previously noted that workarounds are additional efforts to complete the task that can delay maintenance, negatively affect productivity, and increase costs of depot maintenance. Functioning depot facilities and equipment are essential to a number of depot processes, as shown in figure 4. These facilities and equipment often require significant investment to plan, construct, install, repair, and modernize. For example, new DOD depot facilities can cost millions of dollars and are generally expected to last around 67 years, though facilities can, through restoration and modernization efforts, operate significantly longer. Equipment generally lasts for a shorter length of time, though equipment used in production can be expected to last 10 years or more and can be costly. Because these facility and equipment investments can take years to plan and require significant resources, a depot’s decision to invest must often take place well in advance of the specific need the facility or equipment is intended to serve. 
Other factors that the depots consider when planning investments include topography, flood plains, environmental and historic preservation needs, roads and parking, utilities, and the effect on continuing depot operations. This makes careful planning and management of these investments essential to ensuring that critical capabilities are not neglected. In fiscal year 2007, Congress enacted the 6 percent rule, requiring each military department to invest in the capital budgets of its depots no less than 6 percent of the average total dollar value of the combined maintenance, repair, and overhaul workload funded at all the depots of that department over the preceding 3 fiscal years. The departments generally met the minimum investment requirement from fiscal year 2007 through fiscal year 2017, as we discuss in more detail in appendix I. Poor Condition of Facilities and Equipment Hinders Depot Performance, but the Services Do Not Consistently Track These Effects Our analysis of service metrics shows that depot facilities are, on average, rated as “poor” on DOD’s facility rating scale, and the age of equipment at the depots generally exceeds its expected useful life. Meanwhile, performance at the service depots has generally declined since fiscal year 2007. Our previous work has shown that facility and equipment condition can affect depot performance. However, the military services do not consistently track the extent to which the condition of facilities and equipment affect depot performance. Majority of Depot Facilities Are in Poor Condition and Equipment Generally Exceeds Its Expected Useful Life Depot Facilities Navy Aviation Depots Rely on Many Facilities from World War II Era While service officials do not consider the age of a facility to be an ideal indicator of its overall health – since the services regularly restore and modernize older facilities rather than build new ones – the age of facilities can still offer insight into some of the depots’ challenges. 
For example, over 30 million square feet at the Navy aviation depots was built during the 1940s – more than one-third of their existing space. The military services assess the condition of key components of a facility—such as the electrical and plumbing systems—and use these assessments to develop a condition rating that summarizes the overall health of the facility. In turn, these condition ratings help service officials plan investment strategies and prioritize depot projects. The condition rating does not necessarily correlate with the age of the facility (see sidebar); a relatively new facility might have a poor condition rating if it has been damaged, for example, and an old facility that has recently been modernized might have a high condition rating. Our analysis of fiscal year 2017 depot facilities data found that the average weighted condition rating at a majority of the 21 service depots is poor. Specifically, 12 of the 21 depots–more than half–have average condition ratings that are below 80, indicating that they are in “poor” condition (see fig. 5). Of the remaining depots, five had an average rating in the “fair” category, and four had an average rating in the “good” category. Officials note that older facilities can face additional challenges, such as electrical systems built for different weapon systems, historical preservation requirements, and suboptimal layouts. It can be difficult for a depot to maintain complex, modern weapon systems, such as the F/A-18, with facilities that were designed for less complex systems. Equipment is generally past its expected useful life at most military depots. Each piece of capital equipment has an expected service life, which indicates the number of years that the equipment is expected to operate. Equipment can be operated past its expected service life. However, equipment that is past its expected service life can pose an increased risk for maintenance delays or higher maintenance costs, affecting the depots’ ability to conduct work. 
As we have previously reported, aging equipment can present a number of challenges, such as more frequent breakdowns, less effective or efficient operation, and safety hazards. Our analysis shows that most of the 21 depots reviewed rely on equipment that is past its expected useful life (see fig. 7). As Figure 7 shows, only three depots rely on equipment that is, on average, within its useful life. Three other depots were unable to provide data. For more detailed information about equipment age and equipment repairs at individual depots, see appendixes II through XXII. Poor Condition of Depot Facilities and Equipment Contributes to Worsening Performance The service depots have generally experienced worsening performance in terms of completing maintenance on time or in the required amount over the past decade. The Navy aviation depots have seen decreases in their timely completion of maintenance for aircraft, engines and modules, and components. For example, on-time performance for aircraft completed at the Navy’s three aviation depots has decreased from about 56 percent in fiscal year 2007 to about 31 percent in fiscal year 2017 (see fig. 8). This occurred even though the number of aircraft scheduled for repair over that same time period declined by about 26 percent. Similarly, the three Air Force aviation depots’ on-time performance has decreased over this same time period from about 98 percent on-time aircraft completions in fiscal year 2007 to about 81 percent on-time aircraft completions in fiscal year 2017 (see fig. 9). This decrease occurred even though the number of aircraft scheduled for repair declined by approximately 15 percent. Naval shipyards have also experienced performance challenges, such as an increase in maintenance delays (see fig. 10). 
Our analysis shows that the number of days of maintenance delay at the four Navy shipyards has increased by about 45 percent from fiscal year 2007 through 2017, from 986 days in fiscal year 2007 to 1,431 days in fiscal year 2017. We have previously reported that from fiscal year 2008 through fiscal year 2018, the Navy incurred $1.5 billion in fiscal year 2018 constant dollars to crew, maintain, and support attack submarines that provided no operational capability as a result of the submarines sitting idle while waiting to enter the shipyards and from being delayed in completing their maintenance at the shipyards. Army depot data is mixed—our analysis shows that the performance at two depots has decreased, but for others it has held steady or improved. See figure 11 below for changes over time in performance. Finally, the Marine Corps depot output decreased by less than 1 percent, as shown in figure 12. The depots rely on their facilities and equipment to ensure they can conduct the large number of activities needed to efficiently repair DOD’s complex weapons systems. Inadequate facilities can make the overall repair process less efficient, as maintainers perform workarounds that can increase maintenance time and costs. Because the depots are generally operating with equipment past its expected useful life, the depots may be incurring costs related to operating aging equipment – including performing equipment repairs, procuring spare parts, and expending labor hours to repair equipment – while at the same time delaying mission-related work. For example: At Albany Production Plant, officials told us that a shortage of paint booths results in vehicles remaining unpainted and stored outside. Exposure to the elements can cause flash rusting in the event of rain or high humidity, necessitating retreatment that increases both maintenance time and cost. 
At Norfolk Naval Shipyard, officials had to re-inspect 10 years’ worth of parts made in a single furnace, after it was discovered that the controls on the furnace were reading incorrectly. At Corpus Christi Army Depot, depot documentation shows that engines are moved nearly 5 miles across the depot during their repair process. According to officials at the depot, this is the result of years of incremental construction that did not allow them to optimize their workflow. At Fleet Readiness Center Southwest, officials told us that they had to develop an inefficient repair process to maintain the CMV-22 due to a lack of hangars that could accommodate the large aircraft. While maintenance delays can be brief, extended maintenance delays can prevent the timely return of weapon systems to operational status. Delays can cause the services to incur operating and support costs without an operational benefit. Lack of weapon systems can also cause other negative effects such as an inability to train people to use the system, leading to a reduction in readiness. The services have used various facility strategies to keep the depots operating, such as restoring and modernizing facilities when funding was available, developing workarounds when space or funding was not available, or continuing to use the inadequate facilities. Over time, this patchwork of old, modernized, and workaround solutions for new weapons systems can result in suboptimized workflow that adds time and cost to the maintenance process, which can ultimately affect readiness. For example, at Production Plant Albany, the depot has four welding centers in different locations throughout the depot. According to officials, they utilized these welding centers over time as needs arose, and the centers are not ideally located for an efficient work flow. This means that the depot has to provide welding supplies to, shift maintainers among, and deliver vehicles to and from these different locations. 
Alternatively, investments that optimize depot facilities and equipment can positively affect maintenance efficiency. For example: Fleet Readiness Center Southwest recently built a new facility that optimizes the workflow for its repairs of H-60 helicopters. Officials stated that its previous H-60 facility could only fit eight helicopters at a time, and only by crowding them such that using the crane on one required others to be moved as well, adding time and workload to the maintenance process. The new facility can accommodate more than 30 H-60s at a time, and each can be brought into and out of the facility without requiring others to be moved. As part of this effort, the depot also invested in additional lighting, ventilation, and crane capabilities that depot officials stated have increased the depot’s capacity for conducting H-60 repairs by more than 20 percent over their previous facility. At Corpus Christi Army Depot, planners have designed a multiphase workflow for their engine and component repairs that involves investing in a new facility and related equipment. Officials noted that the current engine repair process has developed over decades, and is spread throughout the depot. The redesigned process, which involves several investments over more than two decades, is intended to have a more efficient workflow. An Army analysis estimated that this investment will reduce the time it takes to repair and test engines and components and could result in the depot requiring about 200,000 fewer labor hours, saving about $10 million in labor costs annually. The Naval Shipyard Optimization Plan released by the Navy in February 2018 addresses the shipyards’ ability to maintain the current fleet, and projects that facility and equipment investments at the shipyards will increase efficiency and save resources. For example, the plan estimates that optimized facilities and equipment will save the shipyards over 325,000 labor days per year. 
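The return-on-investment logic behind these examples is straightforward arithmetic. As a rough sketch, the annual savings from reduced labor hours can be weighed against the up-front project cost to estimate a simple payback period. The figures below use the labor-hour savings cited for the Corpus Christi effort (about 200,000 hours, or roughly $10 million annually); the loaded labor rate is implied by those two figures, and the total investment cost is a hypothetical placeholder, not a number from the Army analysis.

```python
# Illustrative payback arithmetic for a depot optimization investment.
# The 200,000-hour and ~$10M annual figures come from the Army analysis
# cited above; the $50/hour rate is implied by them, and the total
# investment cost below is hypothetical.

labor_hours_saved_per_year = 200_000      # from the Army analysis
loaded_labor_rate = 50.0                  # $/hour, implied by $10M / 200,000 hours
investment_cost = 150_000_000.0           # hypothetical total project cost, $

annual_savings = labor_hours_saved_per_year * loaded_labor_rate
simple_payback_years = investment_cost / annual_savings

print(f"Annual labor savings: ${annual_savings:,.0f}")
print(f"Simple payback period: {simple_payback_years:.1f} years")
```

A simple payback calculation like this understates the full picture—it ignores discounting, throughput gains, and avoided workaround costs—but it illustrates why multi-decade investments can still be justified by recurring labor savings.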
The Military Services Do Not Consistently Track the Extent to Which Facility and Equipment Conditions Delay Maintenance Despite the negative effect that poor conditions can have on depot performance, the military services do not consistently track when facilities and equipment conditions lead to maintenance delays. Based on our analysis, the services each track a form of maintenance delay— specifically, work stoppages caused by either equipment or facility conditions. Work stoppages are circumstances where maintenance can no longer proceed because the depot does not have everything it needs, including the facility space to begin additional work or equipment needed to perform a certain function. However, table 1 below shows that although the services have the ability to track work stoppages, they do not all track both facility- and equipment-related maintenance delays across all their depots. Further, even within a service, the depots may use different methodologies. Different methodologies make it difficult to compare across depots and identify issues. For example, according to Navy officials, the Navy aviation depots track work stoppages, but each depot uses different standards for determining which incidents are tracked. This means that an event counted as a work stoppage at one location might not be counted at another location. Standards for Internal Control in the Federal Government states that management should use quality information to achieve an entity’s objectives. However, the depots do not track maintenance delays caused by facility and equipment conditions, such as work stoppages, more consistently because there is currently no requirement from their respective materiel commands to do so. Every year, the services spend millions of dollars on depot facilities and equipment to meet their minimum investment requirement. 
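The minimum investment requirement works out to a simple rolling-average formula: 6 percent of the average combined maintenance, repair, and overhaul workload funded at a department's depots over the preceding 3 fiscal years. A minimal sketch follows, with hypothetical workload figures; the statute's actual accounting rules for what counts as workload and investment are more involved than this.

```python
# Sketch of the "6 percent rule" computation: minimum capital investment
# for a fiscal year is 6 percent of the average combined maintenance,
# repair, and overhaul workload funded at a military department's depots
# over the preceding three fiscal years. Workload figures are hypothetical.

def minimum_investment(prior_three_year_workloads, rate=0.06):
    """Return the minimum required capital investment in dollars."""
    assert len(prior_three_year_workloads) == 3
    average_workload = sum(prior_three_year_workloads) / 3
    return rate * average_workload

# Hypothetical workload funded at one department's depots, in dollars
workloads = [5_000_000_000, 5_500_000_000, 6_000_000_000]
print(f"Minimum investment: ${minimum_investment(workloads):,.0f}")
```

Because the baseline is a trailing 3-year average, growth in depot workload raises the investment floor only gradually, which is one reason the departments' required investments lag behind workload changes.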
Establishing measures and using them to track maintenance delays caused by facility and equipment conditions would help the services to make better investment decisions because they could target investments to facility and equipment needs that would have the greatest impact on repair times or other key performance goals. Without knowing how often facility and equipment conditions lead to work delays, the services risk investing in less critical infrastructure and experiencing more work stoppages due to facility or equipment conditions. DOD’s Approach for Guiding Depot Investments Lacks Key Elements Important to Addressing the Depots’ Challenges Efficiently and Effectively The military services are developing optimization plans for their depots, but these plans lack analytically-based goals, results-oriented metrics, a full accounting of the resources, risks, and stakeholders, and a process for reporting on progress. Including these elements could enhance the effectiveness of service depot investments. Furthermore, there is currently no process at the Office of the Secretary of Defense level that monitors depot investment decisions or provides regular reporting to decision makers and Congress. The Military Services Are Developing Optimization Plans, but These Plans Lack Key Elements to Guide Depot Investment The services have each begun to develop depot optimization plans, as directed by Congress. In June 2018 Congress directed the Secretaries of the Army, Navy and Air Force to submit an engineering master plan for optimal placement and consolidation of facilities and major equipment, as well as an investment strategy addressing the facilities, major equipment and infrastructure requirements of depots under the jurisdiction of each service. These plans are to include a life cycle cost analysis to modernize depot facilities and equipment and an investment strategy. 
The Army, Navy, Air Force, and Marine Corps have all begun to develop depot optimization plans, and officials told us that they expect to complete work on these initial plans by the February 2019 date directed by Congress. However, materiel management command officials also noted that more detailed plans – that include workflow optimization, analysis of supporting utilities, and long-term investment planning – would not be possible by that date. Instead, officials intend to use the initial phase to develop a strategy for completing their final plans. Officials told us that they are using this initial development effort to identify the work needed to fully establish their depot optimization plans, identify the resources and expertise needed for implementation, and develop a timeline for completion. Depot optimization is a challenging effort that involves complex tasks such as, according to service officials, understanding interdependencies between facilities, equipment, and utilities; accounting for environmental, geographic, and economic factors; planning for facility construction and equipment purchases years in advance; and making arrangements for ongoing depot-level maintenance operations while facility and equipment improvements are underway. The Navy developed a Shipyard Infrastructure Optimization Plan, released in February 2018, to address some of its longstanding challenges—including aging facilities and equipment, inefficient layouts, and lack of capacity. Officials estimate that the effort will cost $21 billion over 20 years, and will allow for increased repair capacity. Over time, the Navy estimates that this investment could ultimately save more than 328,000 labor days annually in reduced transportation and materiel movement time. We have a separate review of the Navy’s effort to optimize its shipyards, which examines its use of results-oriented elements. 
However, based on our discussions with officials from all four services, the depot plans for the Army and Marine Corps depots and arsenals, the Navy Fleet Readiness Centers, and the Air Force Air Logistics Complexes currently under development will lack certain key elements identified in our prior work, including: Analytically-based goals. The services have not fully established analytically-based goals for their depot investments that are tied to the service’s operational needs. For example, Army and Air Force officials told us that they were still in the process of developing goals for their plans. Meanwhile, Navy aviation officials had developed some initial goals, but expected these goals to change as their planning and information became more detailed. The Marine Corps is in the process of developing its plan, but officials say that they have not determined what analytically-based goals will serve as the foundation of their efforts. Some officials told us that the only goal that is feasible by the February 2019 deadline is to plan to develop a better plan. Our prior work has shown that establishing analytically-based goals that define the desired outcomes and results is a leading practice that can enhance the success of an initiative. Results-oriented metrics. As we noted earlier, planners lack key data critical for developing investment plans, such as the source and extent of facilities- and equipment-related maintenance delays. Army, Navy, Air Force, and Marine Corps officials all noted that they were planning to use metrics to determine the effectiveness of their respective plans. However, without established goals for their plans, the services cannot identify the best ways to measure progress in meeting those goals. In addition, the Army, Navy, and Air Force do not have metrics that tie their depot investments to specific outcomes, such as increased performance or improved readiness. 
Our prior work has shown that using results-oriented metrics enables effective monitoring and facilitates targeting efforts to those with the greatest effect. Identification of required resources, risks, and stakeholders. Army, Navy, Air Force, and Marine Corps officials told us that they have begun identifying the resources needed for their plans. For example, all services have identified at least some of the project costs that will be needed for certain depot facility and equipment improvements. However, without having analytically-based goals to serve as a starting point, it is impossible to fully identify the required resources and risks because the desired end state has not been established. Meanwhile, Army, Air Force, and Navy aviation officials have identified many stakeholders that they intend to involve in their optimization efforts, though in some cases these stakeholders have not been included in the process. Service officials also noted that in some cases they lack the necessary engineering expertise to redesign their depot’s workflow process from the ground up. The services have identified about $6.5 billion in backlogged restoration and modernization projects for their depot facilities. However, this figure is likely understated because our prior work has shown that depot facility projects are subject to factors such as regulatory compliance and historical preservation costs that can be hard to predict. Moreover, the services track their backlog of needed facility improvements differently, which makes it difficult to determine the full scope of investment required and to provide effective oversight. Our prior work has shown that fully identifying 1) the resources required to achieve the goals, 2) the stakeholders that have equities and requisite expertise in the effort, and 3) potential risks to the effort are leading results-oriented practices that are key to success. Reporting on progress. 
Army, Navy, Air Force, and Marine Corps officials told us that they are in the process of developing one-time reports for Congress on the depots’ investment needs. However, these one-time reports will not provide Congress and decision makers with information after their initial release. Depot optimization planning will require time, along with sustained management and congressional attention to successfully implement. For example, the Navy’s Shipyard Optimization Plan estimates that it will be a 20-year effort requiring around $21 billion. However, the other initial steps taken by the services to address the congressional request are not as focused on the long term. For example, Army and Air Force officials told us that their initial plans will likely be “plans to get to a plan” rather than a decades-long proposal like the Navy’s shipyard plan. Our prior work has shown that reporting on progress is a leading results-oriented practice that holds the organization accountable for results and provides information to senior leaders and Congress that can help keep an effort on track and responsive to changes. According to service officials, the military services’ depot optimization plans will not include all the elements of a results-oriented management approach because there is no requirement that the plans do so. Our prior work has found that a results-oriented management approach can help organizations remain operationally effective, efficient, and capable of meeting future requirements. Specifically, our work has highlighted the importance of elements such as developing analytically-based goals; using results-oriented metrics to monitor progress; fully identifying required resources, risks, and stakeholders; and regular reporting on progress to making reform efforts more efficient, effective, and accountable. Congress directed the services to include some results-oriented elements in their plans, such as an identification of key steps and an initial report to Congress. 
However, including these additional elements—establishing results-oriented metrics; identifying all necessary resources, stakeholders, and associated risks; and regular reporting to decision makers and Congress—would further enhance the effectiveness of the plans. Without a plan that includes all the key elements of a results-oriented management approach, the services risk continued deterioration of the depots and making suboptimal investments that could hinder their ability to efficiently and effectively support readiness. The Office of the Secretary of Defense Does Not Provide Oversight of or Report on Service Efforts to Invest in Depots DOD has not developed a process to oversee the implementation of the services’ depot optimization plans or provide reporting on depot investment effectiveness to DOD decision makers and Congress. Officials with the Office of the Deputy Assistant Secretary of Defense for Materiel Readiness stated that their role is to advocate for the service depots within DOD, and not to develop depot policies or review service depot investments. Specifically, they stated that they are unable to set infrastructure policy and do not have authority to alter service investment decisions. However, as part of an office reorganization during the summer of 2018, the Secretary of Defense tasked the Assistant Secretary of Defense for Sustainment with developing logistics and maintenance policy. Our prior work has shown that organizations have successfully used a results-oriented management approach—which includes regular monitoring and reporting—to oversee department-wide efforts and drive significant improvements. For example, officials with the Office of the Assistant Secretary of Defense for Logistics and Materiel Readiness created a Comprehensive Inventory Management Improvement Plan in 2010 that DOD used to improve data collection, develop standardized metrics, and provide increased oversight (see sidebar). 
The result was that DOD was able to achieve a number of improvements, such as reducing the value of its on-hand excess inventory by about $2 billion, improving policy and guidance, and establishing standardized metrics for monitoring its operations. Based on these positive results, DOD institutionalized this process through guidance and has continued to use it since 2010. Using this approach, DOD was ultimately able to improve its inventory management processes enough to have it removed from GAO’s High Risk List in 2017. As part of this effort, a team of experts assessed the data sources and methods used by the services and DLA and evaluated potential department-wide metrics for measuring demand forecasting accuracy based on the available data sources. DOD then implemented the standardized metrics in a phased approach, with the initial phase focused on establishing a baseline for the metrics. Through the process of establishing these metrics, DOD identified additional areas for exploration and improvement, such as improving its guidance on demand forecasting. DOD does report some depot information to Congress; however, the information reported is limited in nature and does not address key issues concerning depot facilities and equipment. For example, every other year DOD is required to report to Congress on its core depot-level maintenance and repair capability requirements and workload. DOD must also report annually on the percentage of depot maintenance funds expended during the preceding fiscal year and projected to be expended during the current and ensuing fiscal year, for performance of depot-level maintenance and repair workloads by the public and private sectors. Combined with the services’ reporting on their depot investment spending (see appendix I), this information provides Congress with some information about depot operations and performance. 
However, these reports do not inform Congress about several key points, including whether the service depots are becoming more effective and efficient or the extent to which DOD has managed to address depot investment backlogs. We have noted in prior work that the backlog of facilities restoration and modernization projects at the depots can be significant, and that reducing these backlogs will likely take a sustained effort over many years. Furthermore, these efforts are important to improving the effectiveness and efficiency of the depots, which in turn is essential to the readiness of military forces. Improving readiness is one of DOD’s top priorities. Specifically, the Secretary of Defense issued a memorandum in September 2018 about improving readiness that set a minimum target of 80 percent mission capability for DOD’s key aviation platforms starting in fiscal year 2019. In addition, the memorandum identified reducing operating and support costs for these platforms every year beginning in fiscal year 2019 as another priority. Furthermore, DOD has more broadly identified rebuilding readiness as a priority across all the services. As noted previously, the depots are essential to providing readiness to DOD in the form of repaired weapon systems, and depot optimization efforts can provide a return on investment in the form of reduced maintenance time and cost. However, the investments made at the depots—which are crucial for optimization, throughput, and ultimately readiness—often require years and millions of dollars to execute, which means that long-term planning is essential to ensuring that investments are made effectively. Regular monitoring of the services’ depot investment efforts could help ensure that these investments target readiness drivers to produce the greatest effect.
Furthermore, our previous work has noted that timeframes for improvement efforts can slip, which makes reporting to DOD decision makers and Congress essential for holding stakeholders accountable for making progress. For example, we reported in 2017 that even though the Navy had developed capital investment plans in 2013 and 2015 intended to help improve the state of the facilities and equipment at the shipyards, backlogged restoration and maintenance projects had grown by 41 percent over 5 years, which extended the amount of time required to clear the backlog under expected funding levels. Without providing oversight of and reporting on service depot investments, DOD risks continued deterioration of the depots’ facilities and equipment, suboptimal investments, and reduced military readiness as the services experience costly maintenance delays. Conclusion DOD’s 21 depots are critical for repairing and maintaining its complex array of weapon systems. Inefficient depots contribute to longer maintenance times, increased costs, and reduced readiness. Currently, a majority of the depots have facilities that are in poor condition and are relying on equipment that is past its useful service life. The military services spend millions of dollars annually on depot facilities and equipment in order to meet minimum investment requirements designed to sustain depot performance. Notwithstanding these expenditures, the services are not consistently required to track maintenance delays caused by facility or equipment conditions. This lack of tracking hinders the services’ ability to target investments to the facility and equipment needs that would have the greatest effect on repair times or other performance goals. By knowing how often facility and equipment conditions lead to work delays, the services could reduce the risk of investing in less critical facilities and equipment. They could also reduce the risk of further work stoppages caused by facility or equipment conditions.
The military services are in the midst of developing congressionally directed depot optimization plans that are expected to include both (1) an analysis of the cost of depot facilities and equipment modernization and (2) an investment strategy. However, with the exception of the plan designed to address the Navy shipyards, the services’ plans are still in the initial stages, and each one is expected to lack key elements of a results-oriented management approach—including analytically based goals, results-oriented metrics, full identification of required resources and risks, and regular reporting on progress—that would help guide investment. As the shipyard optimization plan has demonstrated, the cost of optimization may be high and, once defined, will require sustained management attention over many years to carry out successfully. In addition, implementing a regular monitoring and reporting process to provide oversight and accountability over depot investments would further enhance DOD’s ability to attain improvements at the depots significant enough to reverse years of decline and reach the challenging goals set by the Secretary of Defense for improving mission capability rates and reducing operating and support costs. Recommendations for Executive Action We are making the following 13 recommendations to the Department of Defense. The Secretary of the Army should ensure that Army Materiel Command establishes measures for its depots to track facility or equipment conditions that lead to maintenance delays. (Recommendation 1) The Secretary of the Army should ensure that Army Materiel Command implements tracking of the measures for identifying when facility or equipment conditions lead to maintenance delays at each Army depot. (Recommendation 2) The Secretary of the Navy should ensure that Naval Sea Systems Command and the Commander, Fleet Readiness Centers establish measures for their depots to track facility or equipment conditions that lead to maintenance delays.
(Recommendation 3) The Secretary of the Navy should ensure that Naval Sea Systems Command and the Commander, Fleet Readiness Centers implement tracking of the measures for identifying when facility or equipment conditions lead to maintenance delays at each Navy depot. (Recommendation 4) The Secretary of the Air Force should ensure that Air Force Materiel Command establishes measures for its depots to track facility or equipment conditions that lead to maintenance delays. (Recommendation 5) The Secretary of the Air Force should ensure that Air Force Materiel Command implements tracking of the measures for identifying when facility or equipment conditions lead to maintenance delays at each Air Force depot. (Recommendation 6) The Commandant of the Marine Corps should ensure that Marine Corps Logistics Command establishes measures for its depots to track facility or equipment conditions that lead to maintenance delays. (Recommendation 7) The Commandant of the Marine Corps should ensure that Marine Corps Logistics Command implements tracking of the measures for identifying when facility or equipment conditions lead to maintenance delays at each Marine Corps depot. (Recommendation 8) The Secretary of the Army should ensure that Army Materiel Command incorporates in its depot optimization plan key results-oriented elements, including analytically based goals, results-oriented metrics, identification of required resources, risks, and stakeholders, and regular reporting to decision makers on progress. (Recommendation 9) The Secretary of the Navy should ensure that the Commander, Fleet Readiness Centers incorporates in its depot optimization plan key results-oriented elements, including analytically based goals, results-oriented metrics, identification of required resources, risks, and stakeholders, and regular reporting to decision makers on progress.
(Recommendation 10) The Secretary of the Air Force should ensure that Air Force Materiel Command incorporates in its depot optimization plan key results-oriented elements, including analytically based goals, results-oriented metrics, identification of required resources, risks, and stakeholders, and regular reporting to decision makers on progress. (Recommendation 11) The Commandant of the Marine Corps should ensure that Marine Corps Logistics Command incorporates in its depot optimization plan key results-oriented elements, including analytically based goals, results-oriented metrics, identification of required resources, risks, and stakeholders, and regular reporting to decision makers on progress. (Recommendation 12) The Secretary of Defense should ensure that the Assistant Secretary of Defense for Sustainment develops an approach for managing service depot investments that includes management monitoring and regular reporting to decision makers and Congress on progress. (Recommendation 13) Agency Comments and Our Evaluation We provided a draft of this report to DOD for review and comment. In written comments on a draft of this report (reproduced in appendix XXIV), DOD concurred with 12 of our 13 recommendations and stated, in general, that the Service Chiefs for the Army, Navy, Air Force, and Marine Corps will ensure that their respective materiel commands take actions to implement the recommendations for their service. DOD also provided technical comments, which we incorporated where appropriate. DOD did not concur with our recommendation that the Assistant Secretary of Defense for Sustainment (ASD for Sustainment) develop an approach for managing service depot investments. In its response, DOD stated it could not develop such an approach until the services finalized and resourced their depot optimization plans. DOD stated it would continue to monitor capital investments at service depots through the budget process.
We continue to believe that the ASD for Sustainment should develop an approach for managing service depot investments that includes management monitoring and regular reporting to decision makers and Congress on progress for several reasons. First, our recommendation is focused on the ASD for Sustainment developing an approach for overseeing the services’ overall depot investments, not just those contained in their optimization plans. While the depot optimization plans will certainly affect the services’ depot investments, the depots will require additional investments to sustain, restore, and modernize their operations apart from their efforts to optimize facility layout and workflow. Second, the ASD for Sustainment’s early involvement in the services’ development and resourcing of depot optimization plans could enhance service efforts to identify appropriate analytically-based goals aligned with the Secretary of Defense’s readiness objectives, enhance optimization across the DOD enterprise, and ensure sustained senior leadership attention to achieving optimal depot efficiency and effectiveness. Waiting until the services’ depot optimization plans have been resourced – that is, funded – could result in the ASD for Sustainment beginning its involvement and oversight after critical optimization decisions, such as setting goals, identifying key metrics, and adjudicating trade-offs across the depot enterprise, have been made on an individual basis by the services. Third, while monitoring investments at the service depots through the budget process is an important aspect of oversight, the ASD for Sustainment could enhance the oversight of and accountability over depot investments through a more comprehensive oversight approach. This comprehensive approach could include regular monitoring that focuses on ensuring that approved depot investment funding is implemented as planned and achieves desired results. 
An approach focused on the implementation of efforts aimed at desired outcomes could better position DOD and the services to make sustained progress. Finally, having regular reporting of progress will help ensure DOD leadership and the Congress have the information needed to help make critical funding and policy decisions. Reporting on progress towards desired outcomes also could assist in ensuring that there is accountability within the department for reversing years of decline and reaching the challenging goals set by the Secretary of Defense for improving mission capability rates and reducing operating and support costs. We are sending copies of this report to the appropriate congressional committees, the Acting Secretary of Defense, and the Secretaries of the Army, Navy, Air Force, and the Commandant of the Marine Corps. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have questions about this report, please contact me at maurerd@gao.gov or (202) 512-9627. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix XXV. Appendix I: Service Depot Investment Has Generally Met Statutory Requirements Military Departments Generally Have Met the 6 Percent Rule Based on our analysis of service budget submissions and 6 percent project lists, we found that the departments have generally met the 6 percent requirement in fiscal years 2007 through 2017 (see fig. 13). As shown above, the Navy and Air Force met the minimum requirement every year since the minimum investment requirement was enacted in fiscal year 2007. The Army met the minimum investment requirement for most years, but did not meet the minimum on two occasions, in fiscal year 2011 and fiscal year 2013. 
According to Army officials, they missed the fiscal year 2011 minimum by around $21 million due to a software project that was scheduled to execute in fiscal year 2011, but was unable to execute and moved to fiscal year 2012 instead. An Army official attributed the difference in fiscal year 2013, which was over $68 million, to the effects of fiscal year 2013 sequestration, which generally reduced funding available to the services. While the Navy met its minimum investment requirement every year, it is worth noting that the 6 percent rule measures compliance by department. Therefore, the Navy’s reported investments include those for its four shipyards, its three fleet readiness centers, and the two Marine Corps depots. From fiscal year 2007 through fiscal year 2017, the shipyards accounted for 76 percent of Navy depot investment (see fig. 14). If these three organizations were viewed independently, only the shipyards would have regularly met their minimum investment requirement; the fleet readiness centers and Marine Corps depots have generally invested less than 6 percent of their respective maintenance, repair, and overhaul workload, as shown in figure 15. Under this perspective, the fleet readiness centers would only have met the 6 percent minimum in fiscal years 2008 and 2012, and the Marine Corps depots would never have met the 6 percent minimum. Military Department Compliance with Fiscal Year 2012 Change to Prohibit Facility Sustainment The services have counted some facilities sustainment activities towards meeting the 6 percent minimum since fiscal year 2012, but the effect of these activities on the departments’ ability to meet the minimum investment requirement appears minimal. In fiscal year 2012, Congress revised 10 U.S.C. § 2476 to prohibit the services from counting sustainment activity towards meeting their 6 percent investment minimum. Sustainment activities are defined as the regular activities needed to keep a facility in good working order. 
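The department-level compliance test described above (capital investment of at least 6 percent of average maintenance, repair, and overhaul workload) can be sketched in a few lines. This is a minimal sketch with hypothetical dollar figures, not actual budget data; it simply shows why a department can meet the minimum overall while individual components fall short, as in the Navy example above.

```python
# Illustrative sketch of the 6 percent minimum-investment check.
# All dollar figures (in $ millions) are hypothetical, not actual budget data.

def meets_six_percent(investment, workload_revenue, minimum=0.06):
    """Return True if capital investment is at least `minimum` (6 percent
    by default) of average maintenance, repair, and overhaul workload."""
    return investment / workload_revenue >= minimum

# The statute measures compliance by military department, so the Navy's
# figure combines its shipyards, fleet readiness centers (FRCs), and the
# Marine Corps depots.
components = {
    "shipyards": {"investment": 420.0, "workload": 5000.0},
    "frcs":      {"investment": 90.0,  "workload": 2500.0},
    "usmc":      {"investment": 20.0,  "workload": 600.0},
}

dept_investment = sum(c["investment"] for c in components.values())  # 530.0
dept_workload = sum(c["workload"] for c in components.values())      # 8100.0

print(meets_six_percent(dept_investment, dept_workload))  # True: ~6.5% overall

# Viewed independently, only the shipyards clear the minimum.
for name, c in components.items():
    print(name, meets_six_percent(c["investment"], c["workload"]))
```

Under these assumed figures the department passes while two of its three components invest less than 6 percent of their respective workloads, mirroring the pattern shown in figure 15.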
We requested project documentation from each of the services for a number of the investments that they counted towards their 6 percent minimum. Army officials were only able to provide us with about one-third of our requested project documentation (46 out of 158 projects requested); as a result, our assessment of the Army is limited. Of the project documentation we did receive, we found sustainment activities accounted for 13 projects totaling about $21 million in nominal dollars from fiscal year 2012 through fiscal year 2017. Those projects represent approximately 1 percent of the Army’s total depot investment over that time. The Army’s compliance with the 6 percent rule would not have been affected if those projects had been properly excluded. Navy and Marine Corps officials were able to provide project documentation for 172 out of 211 projects requested. Navy sustainment activities accounted for 47 projects totaling about $94 million in nominal dollars from fiscal year 2012 through fiscal year 2017. Those projects represent about 3 percent of the Navy’s total depot investment over that time. If those projects had been properly excluded, the Navy would still have met its 6 percent minimum for each fiscal year. Finally, Air Force officials were able to provide project documentation for 136 out of 138 projects requested. Air Force sustainment activities accounted for 51 projects totaling about $45 million in nominal dollars from fiscal year 2012 through fiscal year 2017. Those projects represent about 2 percent of the Air Force’s total depot investment over that time. If those projects had been properly excluded, the Air Force would still have met its 6 percent investment minimum for each fiscal year. Mission Anniston specializes in tracked and wheeled vehicles, artillery, bridging equipment, small arms, and other items. 
Of the $1.6 billion spent by the Army on depot investment between fiscal year 2012 and fiscal year 2017, $309 million was spent on projects that benefited multiple depots. Of the remaining $1.34 billion, about $196 million – nearly 15% – went to Anniston. Anniston Facilities Restoration and Modernization Backlog As of fiscal year 2017, Anniston has identified about $38 million in backlogged restoration and modernization projects. Mission Corpus Christi specializes in helicopters (AH-64, AH-1, CH-47, OH-58, UH-60, and UH-1), engines, and associated systems and subsystems. Of the $1.6 billion spent by the Army on depot investment between fiscal year 2012 and fiscal year 2017, $309 million was spent on projects that benefited multiple depots. Of the remaining $1.34 billion, about $311 million – over 23% – went to Corpus Christi. Corpus Christi Facilities Restoration and Modernization Backlog As of fiscal year 2017, Corpus Christi has identified about $25 million in backlogged restoration and modernization projects. Letterkenny Depot Investment Pine Bluff Depot Investment Pine Bluff Facilities Restoration and Modernization Backlog As of fiscal year 2017, Pine Bluff has identified about $7 million in backlogged restoration and modernization projects. Mission Red River specializes in tactical wheeled vehicles—including Mine Resistant Ambush Protected (MRAP) vehicles, High Mobility Multipurpose Wheeled Vehicles (HMMWV), Family of Medium Tactical Vehicles (FMTV), Bradley Fighting Vehicles, and the Multiple Launch Rocket System (MLRS). Of the $1.6 billion spent by the Army on depot investment between fiscal year 2012 and fiscal year 2017, $309 million was spent on projects that benefited multiple depots. Of the remaining $1.34 billion, about $227 million – nearly 17% – went to Red River.
Red River Facilities Restoration and Modernization Backlog Red River did not provide any data on its backlog of restoration and modernization projects. Mission Rock Island houses the Joint Manufacturing and Technology Center, which has been designated the Center of Industrial and Technical Excellence for mobile maintenance equipment such as the Forward Repair System. It is also the sole Army location for assembling recoil mechanisms (such as those on howitzers). Of the $1.6 billion spent by the Army on depot investment between fiscal year 2012 and fiscal year 2017, $309 million was spent on projects that benefited multiple depots. Of the remaining $1.34 billion, about $59 million – over 4% – went to Rock Island. Rock Island Facilities Restoration and Modernization Backlog Rock Island did not provide any data on its backlog of restoration and modernization projects. Mission Tobyhanna specializes in command, control, communications, computers, intelligence, surveillance and reconnaissance systems, electronics, avionics, and missile guidance and control systems. Of the $1.6 billion spent by the Army on depot investment between fiscal year 2012 and fiscal year 2017, $309 million was spent on projects that benefited multiple depots. Of the remaining $1.34 billion, about $279 million – nearly 21% – went to Tobyhanna. Tobyhanna Facilities Restoration and Modernization Backlog As of fiscal year 2017, Tobyhanna has identified about $43 million in backlogged restoration and modernization projects. Mission Tooele specializes in ammunition logistics (storage, shipping, sorting, and inspecting), as well as production of related equipment needed for ammunition maintenance and demilitarization. Of the $1.6 billion spent by the Army on depot investment between fiscal year 2012 and fiscal year 2017, $309 million was spent on projects that benefited multiple depots. Of the remaining $1.34 billion, about $84 million – over 6% – went to Tooele.
Tooele Facilities Restoration and Modernization Backlog As of fiscal year 2017, Tooele has identified about $21 million in backlogged restoration and modernization projects. Mission Watervliet specializes in cannons, mortars, and associated components, as well as machining and fabrication services. Of the $1.6 billion spent by the Army on depot investment between fiscal year 2012 and fiscal year 2017, $309 million was spent on projects that benefited multiple depots. Of the remaining $1.34 billion, $87 million – about 6% – went to Watervliet. Watervliet Facilities Restoration and Modernization Backlog As of fiscal year 2017, Watervliet has identified about $36 million in backlogged restoration and modernization projects. Mission Norfolk Naval Shipyard specializes in nuclear aircraft carriers (Nimitz class), submarines (Los Angeles-class and Ohio-class), and various surface combatants (CGs, LHDs, LPDs, LCCs, FFGs, and AS Tenders). Of the $2.4 billion spent by the four shipyards on depot investment between fiscal year 2012 and 2017, $557 million—about 23%—was spent on Norfolk Naval Shipyard. Norfolk Naval Shipyard Facilities Restoration and Modernization Backlog Norfolk Naval Shipyard identified about $1.46 billion in backlogged restoration and modernization (R&M) projects in fiscal year 2017. The Navy defines backlog as R&M efforts that have been identified but not yet executed. Mission Pearl Harbor Naval Shipyard specializes in nuclear submarines (Los Angeles-class and Virginia-class) and surface combatants (CGs, DDGs, LPDs, FFGs, and AS Tenders). Navy Depot Investment Of the $2.4 billion spent by the four shipyards on depot investment between fiscal year 2012 and 2017, $458 million—about 19%—was spent on Pearl Harbor Naval Shipyard. Pearl Harbor Naval Shipyard Facilities Restoration and Modernization Backlog Pearl Harbor Naval Shipyard identified about $1.69 billion in backlogged restoration and modernization (R&M) projects in fiscal year 2017.
The Navy defines backlog as R&M efforts that have been identified but not yet executed. Mission Portsmouth Naval Shipyard specializes in nuclear submarines (Los Angeles-class and Virginia-class). Navy Depot Investment Of the $2.4 billion spent by the four shipyards on depot investment between fiscal year 2012 and 2017, about $568 million—about 23%—was spent on Portsmouth Naval Shipyard. Portsmouth Naval Shipyard Facilities Restoration and Modernization Backlog Portsmouth Naval Shipyard identified about $761 million in backlogged restoration and modernization (R&M) projects in fiscal year 2017. The Navy defines backlog as R&M efforts that have been identified but not yet executed. Mission Puget Sound specializes in nuclear carriers (Nimitz class), submarines (Los Angeles-class, Seawolf-class, and Ohio-class), and surface combatants (DDG-51 class). Navy Depot Investment Of the $2.4 billion spent by the four shipyards on depot investment between fiscal year 2012 and 2017, $841 million—about 35%—was spent on Puget Sound Naval Shipyard. Puget Sound Naval Shipyard Facilities Restoration and Modernization Backlog Puget Sound Naval Shipyard identified about $1.49 billion in backlogged restoration and modernization (R&M) projects in fiscal year 2017. The Navy defines backlog as R&M efforts that have been identified but not yet executed. Mission FRC East specializes in helicopters (AH-1, CH-53E, MH-53E, UH-1Y), airplanes (AV-8B and EA-6B), fighter aircraft (F/A-18 A, C, and D variants), the MV-22 Osprey, and various engines and components. Of the $526 million spent by the three FRCs on depot investment between fiscal year 2013 and fiscal year 2017, $199 million, about 38%, was spent on projects that benefited FRC East. FRC East Facilities Restoration and Modernization Backlog FRC East identified about $198 million in backlogged restoration and modernization (R&M) projects in fiscal year 2017.
The Navy defines backlog as R&M efforts which have been identified but not yet executed. Mission FRC Southeast specializes in helicopters (MH-60R and S), aircraft (C-2A, E-2C and D, EA-6B, P-3), fighter aircraft (F-35, F/A-18 A-F variants), trainers (T-6, T-34, T-44), and various components. Of the $526 million spent by the three FRCs on depot investment between fiscal year 2013 and fiscal year 2017, $197 million, about 37%, was spent on projects that benefited FRC Southeast. FRC Southeast Facilities Restoration and Modernization Backlog FRC Southeast identified about $124 million in backlogged restoration and modernization (R&M) projects in fiscal year 2017. The Navy defines backlog as R&M efforts which have been identified but not yet executed. Mission FRC Southwest specializes in helicopters (AH-1, CH-53E, HH-60, MH-60, and UH-1Y), airplanes (C-2A, E-2C, E-2D, and EA-18G), fighter aircraft (F/A-18 A-F variants), the MV-22 Osprey, and various engines and components. Of the $526 million spent by the three FRCs on depot investment between fiscal year 2013 and fiscal year 2017, $131 million, about 25%, was spent on projects that benefited FRC Southwest. FRC Southwest Facilities Restoration and Modernization Backlog FRC Southwest identified about $53 million in backlogged restoration and modernization (R&M) projects in fiscal year 2017. The Navy defines backlog as R&M efforts which have been identified but not yet executed. Mission Ogden specializes in depot-level maintenance for fighter aircraft (F-35, F-22, F-16, A-10), cargo aircraft (C-130), trainers (T-38), other weapons systems (Minuteman III ICBM), and software. Of the $2.1 billion spent by the Air Force on depot investment between fiscal year 2012 and fiscal year 2017, $717.1 million, or 34%, went to the Ogden ALC. Ogden ALC Facilities Restoration and Modernization Backlog As of fiscal year 2017, Ogden ALC has identified about $259 million in backlogged restoration and modernization projects.
Backlog is calculated as the difference between programmed requirements and funded requirements in the Complex’s annual budgets. Mission Oklahoma City specializes in depot-level repair of bombers (B-1B, B-52), tankers (KC-135), E-3 Sentry, multiple engine systems, and software. Of the $2.1 billion spent by the Air Force on depot investment between fiscal year 2012 and fiscal year 2017, $1.0 billion – nearly half – went to the Oklahoma City ALC. Oklahoma City ALC Facilities Restoration and Modernization Backlog As of fiscal year 2017, Oklahoma City ALC has identified about $104 million in backlogged restoration and modernization projects. The backlog is calculated as the difference between total programmed requirements and funded requirements in the Complex’s annual budgets. Mission Warner Robins specializes in maintenance of cargo aircraft (C-130, C-5, C-17), fighter aircraft (F-15), aviation electronics, and software systems. Of the $2.1 billion spent by the Air Force on depot investment between fiscal year 2012 and fiscal year 2017, $358 million – 17% – went to the Warner Robins ALC. Warner Robins ALC Facilities Restoration and Modernization Backlog As of fiscal year 2017, Warner Robins has identified about $190 million in backlogged restoration and modernization projects. The backlog is calculated as the difference between total programmed requirements and funded requirements in the Complex’s annual budgets. Mission Albany specializes in Amphibious Assault Vehicles (AAV), Light Armored Vehicles (LAV), High Mobility Multipurpose Wheeled Vehicles (HMMWV), Mine Resistant Ambush Protected (MRAP) vehicles, Medium Tactical Vehicle Replacements, communications/electronics equipment, and small arms. Marine Corps Depot Investment Of the approximately $111 million spent by the Marine Corps on depot investment between fiscal year 2012 and fiscal year 2017, $66 million, about 59%, was spent on projects that benefited Albany Production Plant.
Albany Facilities Restoration and Modernization Backlog As of fiscal year 2017, Albany Production Plant has identified about $12 million in backlogged restoration and modernization projects. Mission Barstow specializes in Amphibious Assault Vehicles (AAV), Light Armored Vehicles (LAV), High Mobility Multipurpose Wheeled Vehicles, Mine Resistant Ambush Protected (MRAP) vehicles, Medium Tactical Vehicle Replacements (MTVR), howitzers, and communications/electronics equipment. Marine Corps Depot Investment Of the approximately $111 million spent by the Marine Corps on depot investment between fiscal year 2012 and fiscal year 2017, $45 million, about 41%, was spent on projects that benefited Barstow Production Plant. Barstow Facilities Restoration and Modernization Backlog As of fiscal year 2017, Barstow Production Plant has identified about $2 million in backlogged restoration and modernization projects. For each of these locations, we collected and analyzed data such as facility condition rating, facility age, number of facility repairs, equipment age, number of equipment repairs, restoration and modernization backlog, work stoppages due to facility and equipment conditions, depot investment projects, and depot performance metrics including on-time delivery and delayed maintenance days. Whenever possible, we collected data from fiscal year 2007 – the year in which the 6 percent rule was first enacted – to fiscal year 2017, the latest for which most data were available. 
We obtained data from the following systems:
the General Fund Enterprise Business System for data on facility and equipment repairs and investment projects from fiscal year 2007 through fiscal year 2017;
the Defense Industrial Financial Management System for data on Air Force age of equipment for fiscal year 2017;
the Logistics Modernization Program for data on Army depot performance from fiscal years 2014 to 2017, and on investment projects and equipment repairs from fiscal year 2007 through fiscal year 2017;
the Navy Modernization Process for data on Navy shipyard performance from fiscal years 2007 to 2017;
Production Status Reporting for data on Navy aviation depot performance from fiscal years 2007 to 2017;
the Aircraft/Missile Maintenance Production/Compression Report for data on Air Force depot performance from fiscal years 2007 to 2017; and
the Master Scheduling Support Tool for data on Marine Corps depot performance from fiscal years 2007 to 2017.
We found the data that we used from these systems to be sufficiently reliable for the purposes of summarizing trends in the selected facility and equipment metrics reported. To determine the extent to which the services track data on maintenance delays caused by facilities and equipment conditions, we requested data on work stoppages related to facilities and equipment conditions at the depots. We also spoke with service officials about delays and work stoppages, the ability of the services to collect these data, and the extent to which they used delay and work stoppage data to target their investments. We did not assess the reliability of any work stoppage data, as we are not reporting these data.
Marine Corps Logistics Base Albany
Headquarters, Department of the Air Force
Air Force Materiel Command
Air Force Sustainment Center
To determine the extent to which DOD and the services have developed an approach for guiding depot investments to address key challenges, we discussed with service depot and materiel command officials the depot investment process, the existence of investment plans at the DOD, service, or depot levels, and any challenges in meeting service operational needs resulting from inadequate investment. We also reviewed service documentation on current and future investment plans and analyzed the depots’ processes guiding investment decisions to determine whether these included any elements of a results-oriented management approach. Our previous work has highlighted the importance of a results-oriented management approach to effective operations and investment at various organizations, including defense logistics.
To determine whether reported depot investments included sustainment activity, we obtained the services’ lists of depot investment projects through fiscal year 2017, the last year for which projects were available. We compared those lists with the services’ actual reported 6 percent spending in their respective budget justification books (specifically, the Fund-6 Report), and reconciled any differences. We then identified facility projects that cost $250,000 and above with the potential for sustainment activities. First, an analyst recorded his assessment of whether a project might include sustainment activity. A second analyst independently reviewed the same information and recorded her assessment. The two analysts created a final assessment that reconciled their two independent assessments and reflects their consensus. This sample is not generalizable to all service projects, but was chosen to identify the projects most likely to affect compliance with the 6 percent rule. We then requested and collected additional project documentation, such as project proposals, for those projects that both analysts agreed had the potential to include sustainment activities.
Using this more detailed project documentation, an analyst recorded his assessment of whether a project included sustainment activity. A second analyst independently reviewed the same information and recorded her assessment of whether the project included sustainment activity. The two analysts created a final assessment that reconciled their two independent assessments and reflects their consensus. We then shared the results of our review to obtain the services’ perspectives. In some cases, the services provided additional information about a project that led us to revise our initial determination, such as noting that a particular project was conducted as a result of severe weather damage (which is considered restoration, even if the activity would otherwise be considered sustainment). For the Air Force and Navy shipyards, our final determination of sustainment projects – as presented in summary in appendix I – was consistent with the services’ respective determinations of which projects included sustainment activity. We presented these amounts in nominal, non-inflation-adjusted dollars so that they would be comparable with the 6 percent minimum reported for that year. Officials from the Marine Corps did not agree with our determination that one of the reviewed projects included sustainment activity, and officials from the Navy aviation command did not agree with our determination for three of the reviewed projects. The Army did not provide a response to most of our sustainment determinations. We conducted this performance audit from August 2017 to April 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix XXIV: Comments from the Department of Defense
Appendix XXV: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the individual named above, key contributors to this report are Suzanne Wren (Assistant Director), James Lackey (Analyst in Charge), Andrew Duggan, Amie Lesser, Felicia Lopez, Michael Perkins, Carol Petersen, Michael Silver, John E. “Jet” Trubey, Britney Tsao, and Lillian Yob.
Related GAO Products
DOD Depot Workforce: Services Need to Assess the Effectiveness of Their Initiatives to Maintain Critical Skills. GAO-19-51. Washington, D.C.: December 14, 2018.
Navy Readiness: Actions Needed to Address Costly Maintenance Delays Facing the Attack Submarine Fleet. GAO-19-229. Washington, D.C.: November 19, 2018.
Air Force Readiness: Actions Needed to Rebuild Readiness and Prepare for the Future. GAO-19-120T. Washington, D.C.: October 10, 2018.
Weapon System Sustainment: Selected Air Force and Navy Aircraft Generally Have Not Met Availability Goals, and DOD and Navy Guidance Need to Be Clarified. GAO-18-678. Washington, D.C.: September 10, 2018.
Military Readiness: Analysis of Maintenance Delays Needed to Improve Availability of Patriot Equipment for Training. GAO-18-447. Washington, D.C.: June 20, 2018.
Navy Shipbuilding: Past Performance Provides Valuable Lessons for Future Investments. GAO-18-238SP. Washington, D.C.: June 6, 2018.
Navy Readiness: Actions Needed to Address Persistent Maintenance, Training, and Other Challenges Affecting the Fleet. GAO-17-809T. Washington, D.C.: September 19, 2017.
Naval Shipyards: Actions Needed to Improve Poor Conditions that Affect Operations. GAO-17-548. Washington, D.C.: September 12, 2017.
Navy Shipbuilding: Policy Changes Needed to Improve the Post-Delivery Process and Ship Quality. GAO-17-418. Washington, D.C.: July 13, 2017.
Department of Defense: Actions Needed to Address Five Key Mission Challenges. GAO-17-369. Washington, D.C.: June 13, 2017.
Military Readiness: DOD’s Readiness Rebuilding Efforts May Be at Risk without a Comprehensive Plan. GAO-16-841. Washington, D.C.: September 7, 2016.
Defense Inventory: Further Analysis and Enhanced Metrics Could Improve Service Supply and Depot Operations. GAO-16-450. Washington, D.C.: June 9, 2016.
Military Readiness: Progress and Challenges in Implementing the Navy’s Optimized Fleet Response Plan. GAO-16-466R. Washington, D.C.: May 2, 2016.
Defense Inventory: Actions Needed to Improve the Defense Logistics Agency’s Inventory Management. GAO-14-495. Washington, D.C.: June 19, 2014.
DOD’s 2010 Comprehensive Inventory Management Improvement Plan Addressed Statutory Requirements, But Faces Implementation Challenges. GAO-11-240R. Washington, D.C.: January 7, 2011.
Why GAO Did This Study
The military services' 21 depots maintain the readiness of critical weapon systems such as ships, aircraft, and tanks needed for military operations. The condition of depot facilities and equipment directly affects the timeliness of maintenance and the readiness of the weapon systems they repair. The services have invested over $13 billion in the depots from fiscal year 2007 to fiscal year 2017. Senate Report 115-125 included a provision for GAO to examine the services' investment in and performance of their depots. GAO evaluated (1) the condition of depot facilities and equipment, their relationship to depot performance, and the services' tracking of the relationship to depot performance and (2) the extent to which DOD and the services have developed an approach for guiding depot investments to address key challenges. GAO also provides an overview summary for each depot. GAO reviewed data from fiscal years 2007 through 2017 on depot investment, performance, and the age and condition of facilities and equipment; reviewed agency guidance; and interviewed DOD, service, and depot officials.
What GAO Found
The condition of facilities at a majority of the Department of Defense's (DOD) depots is poor and the age of equipment is generally past its useful life, but the services do not consistently track the effect that these conditions have on depot performance. Twelve of the 21 depots GAO reviewed—more than half—had “poor” average facility condition ratings (see figure). Some facilities also serve functions for which they were not designed, reducing their efficiency. In addition, the average age of depot equipment exceeded its expected useful life at 15 of the 21 depots. These factors contributed, in part, to a decline in performance over the same period. From 2007 to 2017, performance at the depots generally declined, reducing the availability of the weapon systems they repair for training and operations.
Optimizing facilities and equipment at the depots can improve their maintenance efficiency. For example, the Navy estimates that its shipyard optimization effort will save over 325,000 labor days per year, which would allow an additional submarine overhaul annually. However, the services lack data on the effect that facilities and equipment conditions have on maintenance delays, hindering DOD's ability to effectively target investments to the highest priorities. DOD and the services' approach for managing investments to improve the efficiency and effectiveness of its depots lacks elements important to addressing key challenges. The services have efforts underway to complete their plans by February 2019 to address their depots' facility and equipment needs. However, GAO found that these plans are preliminary and will not include key elements, such as analytically-based goals; results-oriented metrics; a full accounting of the resources, risks, and stakeholders; and a process for reporting on progress. Addressing the poor conditions at DOD's 21 depots will cost billions and require sustained management attention over many years. However, the DOD office responsible for depot policy does not monitor or regularly report on depot improvement efforts to DOD decision makers and Congress. Until DOD and the services incorporate these key elements into the management approach for their depot investments, they risk continued deterioration of the depots, hindering their ability to meet the Secretary of Defense's goals for improving readiness and reducing operating and support costs.
What GAO Recommends
GAO is making 13 recommendations to improve data collection on the effect of facilities and equipment condition on depot performance, and develop plans that incorporate key elements to guide depot investments. DOD concurred with 12 recommendations, but did not agree to monitor and report on depot investments.
We continue to believe monitoring and reporting will enhance DOD's efforts to improve its depots.
Background
FAA air traffic controllers are responsible for guiding aircraft that are departing, landing, and moving around the terminal area at 518 U.S. airports. Airport terminal areas include “movement areas,” such as runways and taxiways, and “non-movement areas” such as ramp areas (see fig. 1). Incidents can occur in either the movement or non-movement area and include:
Runway incursions: These incidents involve the incorrect presence of an aircraft, vehicle, or person on a runway. Incursions fall into three categories—pilot deviations, operational incidents, and vehicle or pedestrian deviations—depending on their cause (see fig. 2).
Runway excursions: These incidents occur when an aircraft veers off the side, or overruns the end, of a runway.
Wrong-surface: These incidents occur when an aircraft lands or departs, or tries to land or depart, on the wrong runway or on a taxiway (see fig. 3). Wrong surface incidents also include when an aircraft lands or tries to land at the wrong airport.
Ramp area: These incidents occur when aircraft, vehicles, or people cause damage or injuries in the ramp area.
FAA oversees the safety of runways and taxiways and works with partners such as airlines, airports, pilots, and others to improve safety in these areas. FAA’s oversight of ramp areas is generally exercised indirectly through its certification of airports and airlines, which have been more directly responsible for safety in these areas. Several FAA offices—with staff in D.C. headquarters, FAA regional offices, and local district offices—oversee terminal area safety, including:
The Air Traffic Organization (ATO) manages air traffic control, validates reports of terminal area incidents, develops and maintains runway safety technology, and leads investigations of operational incidents. ATO also administers the mandatory reporting system, which requires air traffic controllers to report certain incidents, including runway incursions, excursions, and wrong surface landings.
ATO’s Runway Safety Group leads and coordinates all FAA terminal area safety efforts. The goal of the Runway Safety Group is to improve runway and taxiway safety by reducing the risk of runway incursions, excursions, and other incidents.
The Office of Airports oversees airport-related safety, including inspecting and certifying operations at commercial airports and establishing airport design and safety standards. The Office of Airports also provides grants to airports to help support safety improvements, and leads investigations of incursions caused by vehicle/pedestrian deviation.
The Office of Aviation Safety investigates aircraft incidents and accidents, sets aviation safety standards, and certifies aircraft and pilots.
The Office of Aviation Safety, Flight Standards Service (Flight Standards) inspects and certifies airlines, promotes runway safety initiatives, and provides policies and guidance for pilots. Flight Standards also administers a reporting program to obtain information on incidents involving pilots and leads investigations of incursions caused by pilot deviation.
The Office of Aviation Safety, Accident Investigation and Prevention oversees investigations of terminal area safety accidents and incidents, a role which includes coordinating with the NTSB, OSHA, and other FAA offices.
Runway and taxiway safety has long been a focus of FAA efforts. FAA’s fiscal year 2019-2022 strategic plan establishes four safety initiatives related to its data-driven, risk-based safety oversight approach, known as a Safety Management System (SMS), including two fiscal year 2019 safety initiatives: proactively addressing emerging safety risk by using data-informed approaches to make risk-based decisions, and reducing the risk of runway incursions and wrong surface incidents. Further, FAA’s SMS guides its terminal area oversight.
For example, FAA’s order establishing the Runway Safety Program states that FAA uses SMS to ensure the safety of the national airspace through evaluations, data tracking, and analysis of incidents to identify new hazards and risks, and to assess existing safety controls. In our 2011 report on FAA’s oversight of terminal area safety, we made three recommendations related to excursions, ramp areas, and information sharing, all three of which FAA has since implemented.
FAA Uses Data to Analyze Some Terminal Area Incidents
FAA Uses Data to Analyze Runway Incursions
FAA uses data from reports and investigations to analyze runway incursions. For example, a team of representatives from the Air Traffic Organization, the Office of Airports, and the Office of Flight Standards uses information on each incursion to classify its severity into one of four categories—A through D. An example of a category A incursion occurred in June 2018 in Springfield, Missouri, when an aircraft with 53 people on board accelerated for takeoff before noticing an airport operations vehicle crossing the runway. No injuries or damage were reported, but a collision was narrowly avoided. An example of a category C or D incursion is a pilot entering a runway without authorization, but without significant potential for a collision. FAA reports the rate of severe category A and B incursions to Congress and the public in its annual performance plan. FAA also uses data to analyze runway incursions over time. For example, FAA data show that the number and rate of reported runway incursions nearly doubled from 954 in fiscal year 2011 to 1,804 in fiscal year 2018 (see fig. 4). The majority of reported runway incursions (62 percent) were pilot deviations, followed by operational incidents (20 percent) and vehicle/pedestrian deviations (18 percent). According to our analysis of FAA data, the increase in reported incursions was largely due to an increase in less severe incursions.
Our analysis showed that severe incursions (category A and B) in which there is a significant potential for a collision, are relatively infrequent. Category C and D incursions, in which there is less potential for a collision, are more frequent. According to FAA officials, the increase in less severe incursions may be due to increased reporting of these incidents, which we also noted in our 2011 report on terminal area safety. However, the number and rate of reported runway incursions has continued to steadily increase since then, and may also indicate an increase in the actual occurrence of incidents. In 2017, FAA developed a new metric to analyze excursions and other incidents, as well as incursions. According to FAA officials, the new metric (“Surface Safety Metric”) measures the relative riskiness of terminal area incidents by assigning a different severity weight to each incursion, excursion, or other incident depending on its proximity to a fatal accident. For example, FAA documentation states that the new metric assigns a severity weight of 1 to incidents that result in a fatal injury, 0.6 to incidents with serious injuries, and 0.3 to incidents with minor injuries. Incidents in which there are no injuries are assigned even lower severity weights—for example 0.003 for a category A incursion and 0.002 for a category B incursion. FAA officials said they will analyze these severity weights year-to-year, so they can identify trends in each type of incident and across all incidents. For example, FAA officials noted that despite an increase in the number of runway incursions from fiscal years 2011 through 2018, the estimated risk of these incidents, as measured by their severity weights, declined. FAA has developed new performance goals tied to this metric, which it plans to report to Congress and the public by the end of fiscal year 2019. 
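The report's description of the metric can be illustrated with a short sketch. This is not FAA's actual implementation: the severity weights below are taken from the report, but the record format, field names, and aggregation logic are illustrative assumptions.

```python
# Illustrative sketch of the idea behind FAA's "Surface Safety Metric":
# each incident gets a severity weight based on its proximity to a fatal
# accident, and weights are summed per fiscal year so risk trends can be
# compared even when raw incident counts rise. Weights are from the
# report; incident records and field names are assumptions.
SEVERITY_WEIGHTS = {
    "fatal_injury": 1.0,
    "serious_injury": 0.6,
    "minor_injury": 0.3,
    "category_a_incursion": 0.003,  # no injuries, high collision potential
    "category_b_incursion": 0.002,
}

def yearly_risk_scores(incidents):
    """Sum severity weights by fiscal year.

    `incidents` is an iterable of (fiscal_year, incident_type) pairs.
    Unknown incident types contribute zero weight.
    """
    scores = {}
    for fiscal_year, incident_type in incidents:
        weight = SEVERITY_WEIGHTS.get(incident_type, 0.0)
        scores[fiscal_year] = scores.get(fiscal_year, 0.0) + weight
    return scores

# Example: 2018 has more incidents than 2017 in this sample, yet a lower
# aggregate severity-weighted score.
sample = [
    (2017, "category_a_incursion"),
    (2017, "serious_injury"),
    (2018, "category_b_incursion"),
    (2018, "category_b_incursion"),
    (2018, "minor_injury"),
]
print(yearly_risk_scores(sample))
```

A sum of this form shows how, as FAA officials noted, estimated risk can decline even while the count of reported incursions increases, because less severe incidents carry weights several orders of magnitude smaller than injury events.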
Duplicate Data May Affect FAA’s Ability to Analyze Excursions
FAA has analyzed excursion data through special FAA task teams and other joint industry efforts with airlines, associations, and other government agencies. Excursions occur when an aircraft veers off the side or end of a runway, and can result in serious injury, death, or property damage. For example, on September 27, 2018, a small aircraft slid off the side of the runway at Greenville Downtown Airport in South Carolina shortly after landing. The aircraft continued down a 50-foot cliff, resulting in the deaths of two people. According to data FAA provided to us, nearly 700 excursions were reported in fiscal year 2018. Additionally, several joint industry efforts and special task teams have recently analyzed excursions. For example, the Commercial Aviation Safety Team (CAST), which FAA co-leads, found that about a third of the commercial accidents in the U.S. that resulted in fatalities or irreparable damage to the aircraft from 2006 through 2015 were attributed to runway excursions. In 2013, FAA began collecting additional data on excursions, but our review of FAA’s data found the excursion data FAA has collected since then contain duplicates. In 2011, we found that FAA was not formally tracking runway excursions and recommended that FAA develop a plan to track and assess them, which FAA began doing in 2013. Prior to 2013, FAA collected excursion data from two sources—the NTSB Aviation Accident Database, which contains information gathered during NTSB investigations, and FAA’s own Aviation Safety Information Analysis and Sharing (ASIAS) database, which includes information on incidents that may not reach the level of an NTSB investigation, such as an incident without serious injuries or fatalities. In 2013, FAA began identifying excursions in a third source—mandatory occurrence reports that FAA requires air traffic controllers to file when they observe an incident.
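Because a single excursion can appear in more than one of these three sources, duplicate reports could in principle be flagged automatically. The sketch below is illustrative only: the matching key (date, airport, aircraft identifier) and field names are assumptions, and real records would likely need fuzzier matching on time and location.

```python
# Illustrative sketch of flagging possible duplicate excursion reports
# drawn from multiple sources (e.g., the NTSB database, ASIAS, and
# mandatory occurrence reports). A group keyed by (date, airport,
# aircraft_id) that contains records from more than one source is
# treated as a possible cross-source duplicate.
from collections import defaultdict

def flag_possible_duplicates(reports):
    """Return groups of reports that appear in more than one source."""
    groups = defaultdict(list)
    for report in reports:
        key = (report["date"], report["airport"], report["aircraft_id"])
        groups[key].append(report["source"])
    return {key: sources for key, sources in groups.items()
            if len(set(sources)) > 1}

# Hypothetical records: the first two describe the same event reported
# by two different sources.
sample = [
    {"date": "2018-09-27", "airport": "GMU", "aircraft_id": "N123", "source": "NTSB"},
    {"date": "2018-09-27", "airport": "GMU", "aircraft_id": "N123", "source": "ASIAS"},
    {"date": "2018-10-01", "airport": "LAX", "aircraft_id": "N456", "source": "MOR"},
]
dupes = flag_possible_duplicates(sample)
print(dupes)
```

Even a coarse key like this would let an analyst estimate how many records are potential duplicates, which bears directly on whether the mandatory-occurrence-report additions described below are mostly unique events.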
FAA officials said that the additional excursions they identified through these mandatory occurrence reports added 15 percent more annual reports to those that they had identified through only the other two sources. However, FAA officials said there are likely duplicate records in their excursion data as a single excursion could be reported in more than one of these three sources. Although we did not have enough identifying information in the excursion data FAA provided to confirm the number of duplicate reports, our analysis of excursion data did identify possible duplicates. Further, despite containing possible duplicates, FAA recently began using these excursion data in its new surface safety metric. Federal standards for internal control state that data should be appropriate, current, complete, and accurate. A 2017 FAA internal analysis also noted the importance of identifying duplicates in order to ensure accurate runway excursion data. FAA officials said that they do not know how many duplicate records there are, and that they do not have an automated way to identify (and remove) all duplicates. FAA officials said that they could manually identify and remove duplicates, but that they do not currently do this nor plan to do so because duplicate excursion records would not affect their assessment of excursion risk. FAA officials said that excursions captured solely by the mandatory occurrence reports tend to be minor, lower-risk events. However, without a process to identify duplicates, FAA is not able to verify that this statement is true, and therefore cannot accurately assess and mitigate the risk excursions pose to terminal area safety.
FAA Does Not Use Data to Analyze Ramp Area Incidents
FAA does not use data to analyze most ramp area incidents, and does not plan to do so in its new surface safety metric.
While the manager of the Runway Safety Group said FAA analyzes fatal ramp accidents through its participation in CAST, it does not analyze non-fatal ramp incidents, which are estimated to occur more frequently. In addition to some airport and airline officials telling us that they likely collect ramp data, FAA’s Runway Safety Group manager said that FAA likely has data on some non-fatal ramp incidents. For example, some air traffic controllers we interviewed said that they would report any ramp area incidents they observed through FAA’s mandatory reporting process, and officials from a pilot association told us they would also report such incidents. However, FAA officials said that FAA does not plan to analyze ramp incidents in the agency’s new surface safety metric. FAA’s Runway Safety Program Manager said that FAA has not analyzed most ramp area incidents because the risk of these incidents is lower than that in other areas, such as runways, and therefore does not merit analysis. For example, the manager said that aircraft speed in the ramp area is generally slower than take-off or landing speed, and fatalities are infrequent. However, we have previously reported that ramp areas are typically small, congested areas in which departing and arriving aircraft are serviced by ramp workers, who include baggage, catering, and fueling personnel. These areas can be dangerous for ground workers and passengers. The Flight Safety Foundation, which has collected its own data on ramp safety, estimated that each year 27,000 ramp accidents and incidents occur worldwide and can be costly due to effects such as damage to aircraft and schedule disruptions. In addition, ramp areas are complex because safety responsibilities in these areas vary by airport and even by terminal. For example, officials at Boston Logan International Airport told us that the airport operator shares some responsibilities with airlines but maintains control over all ramp areas. 
By contrast, officials at Los Angeles International Airport told us that in terminals leased by individual airlines, the airline controls the ramp area, while the airport operator controls the ramp areas in terminals where multiple airlines operate. Officials from the Air Line Pilots Association told us that ramp areas are the “scariest part of airports.” One official gave an example of inconsistencies between airports that can cause confusion and risk, such as some airport ramp areas being marked with painted lines while others are not. Federal internal control standards state that data should be appropriate, current, complete, and accurate. In addition, FAA’s own SMS calls for FAA to use a data-driven approach to analyze safety risks so that it can control that risk. As part of those efforts, FAA began the rulemaking process in 2010 to require airports to implement SMS, through which airports would analyze risks in runways, taxiways, and ramp areas, but as of August 2019 this rule had not been finalized. Although some airport officials we interviewed said they are voluntarily implementing SMS and could be collecting data on ramp area incidents, FAA—with its role in overseeing safety at all commercial airports—is better positioned to take steps to analyze ramp incidents across all U.S. airports. For example, an individual airport implementing SMS would analyze ramp area incidents at that airport, but FAA could analyze ramp area incidents and identify trends across hundreds of airports as it does for other terminal area incidents described above. Beginning to analyze ramp area incidents, for example in its new metric, would provide FAA with information necessary to mitigate ramp area incidents and ensure that it is directing its efforts to the riskiest parts of the terminal area. 
FAA and Others Have Implemented Multiple Efforts to Address Terminal Area Safety, but FAA Has Not Assessed the Effectiveness of Many of Its Efforts
FAA, Airports, and Airlines Have Implemented Multiple Efforts to Improve Terminal Area Safety
FAA, airports, and airlines have implemented multiple efforts, including technologies, to improve runway, taxiway, and ramp safety; FAA’s efforts, which are coordinated by the Runway Safety Group, focus primarily on runway and taxiway safety.
Runway Safety-Related Programs
FAA’s primary runway and taxiway safety effort is the Runway Safety Program, whereby staff develop national and regional runway safety plans, analyze data on runway and taxiway incidents, and help local air traffic control managers organize annual Runway Safety Action Team (RSAT) meetings at which FAA, airport operator, and other stakeholders at each airport discuss recent runway and taxiway incidents. The FAA Regional Runway Safety Program Managers we met with told us that, prior to each RSAT, they compile and share available information on each incident that occurred in the last year at the airport with the local air traffic manager. This information may include trends in incursions, the location of each incident on an airport map, and results from vehicle/pedestrian deviation investigations conducted by the FAA Office of Airports. Each air traffic manager then presents this information to attendees, who may include staff from FAA’s Office of Airports or Flight Standards, the airport operator, and local pilots. Participants discuss the prior year’s incidents, identify risks, and develop a plan to mitigate these risks. For example, attendees at an RSAT in Phoenix, Arizona, discussed risk factors that could be contributing to pilot deviations, and identified that pilots could be missing taxiway markings that instruct them to stop before proceeding onto a runway.
Consequently, these RSAT attendees developed a plan to add lights to the surrounding area to improve visibility. The attendees also tasked air traffic managers with developing a program to provide annual tours of the tower and airfield to local pilots and personnel working on the airfield to show both parties what the other sees during flight operations. Another important FAA effort is the Runway Incursion Mitigation (RIM) Program, established by the Office of Airports in 2015 to identify strategies to mitigate areas of airport runways or taxiways that do not meet current FAA airport design standards and have high incursion rates (“RIM locations”). There can be multiple RIM locations at a single airport. FAA considers locations for inclusion in the RIM inventory based on whether the location has a non-standard design and has experienced three or more incursions in a given calendar year, or averaged at least one incursion per year over the course of the RIM program. At RIM locations, FAA provides funding and technical assistance to airports to mitigate the risk of incursions, such as by changing airport design and by improving runway and taxiway signage. For example, the airport may reconfigure a taxiway to intersect a runway at a 90-degree angle (the FAA standard), or install “hold position” signs at intersections between two runways. According to FAA, at the end of fiscal year 2018, FAA had helped airports mitigate 33 RIM locations through the program, leaving 135 locations across 79 airports that still needed to be mitigated. FAA also collaborates with industry stakeholders to identify and address runway and taxiway safety issues. For example, FAA serves as Co-Chair of CAST, which analyzes data across airports to identify root causes of incidents and develop and track mitigations to address those causes. For instance, through CAST, FAA and industry stakeholders developed training for air traffic controllers to mitigate the risk of runway excursions.
The training described factors that can contribute to runway excursions, such as adverse winds, wet or contaminated runways, or unstable aircraft approaches. In addition, in 2015, FAA convened a forum of aviation stakeholders representing government, industry, and labor called the Runway Safety Call to Action, which developed 22 short-, medium-, and long-range mitigations to address the rising number of reported runway incursions. In 2018, the DOT Office of Inspector General reviewed FAA’s progress in implementing these 22 mitigations and made three recommendations to address implementation challenges it identified, including consolidating duplicate mitigations and, as mentioned below, developing a plan to measure their effectiveness. As of August 2019, FAA had not implemented these recommendations. Individual airport operators and airlines have implemented their own efforts to improve runway, taxiway, and ramp safety. For example, officials who manage Daniel K. Inouye International Airport in Honolulu, Hawaii, told us that they changed the location of markings in an airport area known to be confusing to some pilots, which reduced incursions at this location. In addition, officials from Airlines for America and the Regional Airlines Association told us airlines host safety meetings where they leverage their collective data to identify and address industry-wide safety trends. Officials told us that one of the working groups at these airline safety meetings specifically discusses issues and solutions pertaining to the ramp area.
Technologies
FAA, airports, and airlines fund multiple technologies to improve runway and taxiway safety, primarily through increasing air traffic controller, pilot, and vehicle operator awareness of their surroundings. See Table 1 for technologies in place or in development. FAA surveillance technologies are multi-million dollar programs designed to help air traffic controllers identify aircraft and vehicles in the terminal area.
For example, at the 35 airports where ASDE-X has been installed since 2011, FAA estimated the total program cost to FAA to be more than $800 million. In-aircraft technologies like those mentioned above help pilots identify their location on runways and taxiways, and could mitigate risks of injuries and damage caused by excursions.
FAA Has Not Assessed the Effectiveness of Many of Its Terminal-Area Safety Efforts
FAA has taken steps to improve terminal area safety, but has not assessed the effectiveness of many of its runway and taxiway safety efforts. For example, FAA has not evaluated how its primary efforts such as ASDE-X, ASSC, or the Runway Safety Program contribute to runway and taxiway safety, despite having implemented these efforts years ago. In some instances, FAA has taken steps to evaluate its terminal-area safety efforts. For example, FAA tracks the Runway Incursion Mitigation Program’s outcomes and the number of runway excursions safely stopped by an Engineered Material Arresting System (EMAS). FAA also contracted with a research organization in 2017 to evaluate the effectiveness of Runway Status Lights on the runway incursion rate at 15 airports. Further, the Runway Safety Program manager described other instances in which local airport officials have taken steps to evaluate the effect of mitigations at those airports. For example, one of FAA’s runway safety offices produced five informational videos to highlight issues identified at specific airports and assessed their effect on runway incursions at those locations after the videos were released. However, FAA has not assessed the effectiveness of many of its numerous other runway and taxiway efforts described above, and FAA officials told us that FAA does not have a plan to do so. Officials told us that they believe that the assessments described above are sufficient, based on the availability of agency resources.
In June 2018, the DOT IG reported a similar finding related to its assessment of FAA’s 2015 Runway Safety Call to Action, described above. The DOT IG reported that FAA had a plan to track the completion of mitigations aimed at improving runway and taxiway safety, but not to link the mitigations to quantifiable goals or metrics that would measure their effectiveness in reducing runway incursions. FAA’s guidance on the Runway Safety Program states that FAA may evaluate the effectiveness of its runway safety programs, and the extent to which they are helping FAA meet its safety goals. In addition, in the 2016 Evaluation Roadmap for a More Effective Government, the American Evaluation Association stated that agencies should consistently use program evaluation and systematic analysis to improve program design, implementation, and effectiveness and to assess what works, what does not work, and why. Evaluating a program’s effectiveness can include methods such as surveying a program’s managers (e.g., regional runway safety program managers), or comparing a program’s performance to an evaluative criterion (e.g., a measure of terminal area safety). Without assessing the effectiveness of its range of efforts, FAA cannot determine the extent to which each of its efforts contributes to its goal of improving runway and taxiway safety, or whether other actions are needed. As discussed previously, FAA has efforts designed to increase runway and taxiway safety that range from periodic stakeholder meetings to multi-million dollar ground surveillance systems. By assessing the effectiveness of its primary efforts, FAA may be better positioned to make decisions about how to target its limited resources within and among these efforts. FAA May Be Missing Opportunities to Improve Its Terminal-Area Safety Efforts We also found that FAA may be missing opportunities to improve its terminal-area safety efforts, including improving communication within FAA.
Specifically, FAA Regional Runway Safety Program staff told us that they do not receive the results of most runway incursion investigations—information that could aid RSAT discussions about preventing these incidents in the future. Four of FAA’s five Regional Runway Safety Program Managers we interviewed reported that they did not receive the results of investigations of pilot deviations—which constitute the majority of runway incursions—from the Office of Flight Standards. As part of its investigations of these incursions, Flight Standards identifies possible causes and implements mitigations, such as additional pilot training. However, FAA does not require Flight Standards to automatically provide its investigations of runway and taxiway incidents to the Runway Safety Group, which could enhance runway and taxiway safety. FAA officials said that FAA requires Flight Standards to make its investigations available to Runway Safety Group staff, if requested, but acknowledged that this does not always result in Runway Safety Group staff receiving these investigations in a timely manner. FAA officials said they are in the process of implementing additional processes to improve communication between Flight Standards and the Runway Safety Group, but documentation on these processes that FAA provided to us did not address getting investigations to Runway Safety program staff in a timely manner. Without this information, the Regional Runway Safety Program Managers may be unable to provide air traffic managers with relevant information on most incursion investigations as they prepare to host their annual RSAT meetings. The manager of the Runway Safety Group told us that Regional Runway Safety Program Managers may request individual investigations from regional Flight Standards officials, but that it would be time consuming for these regional managers to make such requests for every pilot deviation.
One of FAA’s objectives is to improve runway and taxiway safety, and federal internal control standards state that management should internally communicate the information necessary to help meet its objectives. Without timely access to the results of Flight Standards’ incident investigations, Regional Runway Safety Program Managers—and therefore, local air traffic control managers—may not have all of the relevant information they need to develop appropriate runway and taxiway safety mitigation strategies and plans. Selected airport operators we interviewed also reported that they may not have all information they need to develop appropriate terminal area safety mitigation strategies. Specifically, most of those we interviewed reported that air traffic control managers did not provide them with complete and timely information on all runway and taxiway incidents. Six of 10 airport operators we interviewed told us that air traffic control managers did not notify them of all runway and taxiway incidents as they happened. Further, some airport operators told us that they were not aware of all incidents until the annual RSAT meeting. For example, the operator of one airport told us that the air traffic manager notifies the airport of vehicle/pedestrian deviations immediately, but not of operational incidents or pilot deviations. The Manager of the Runway Safety Program also confirmed that communication varies by airport operator and air traffic manager. According to federal internal control standards, management should communicate quality information externally so that external parties can help the entity achieve its objectives and address related risks. Further, according to air traffic control procedures, controllers are required to report as soon as possible to airport managers and others “any information which may have an adverse effect on air safety.” However, this requirement does not specify the types of terminal area safety incidents to which this applies. 
Also, through a 2018 internal risk management process, FAA identified the need for enhanced communication among airport management, the FAA Air Traffic Organization, and pilots at towered airport facilities, in order to mitigate the safety risks associated with runway incursions. Lacking complete information on runway and taxiway incidents at their airports could hamper airport operators’ ability to develop appropriate safety strategies or make investment decisions related to safety in a timely manner. For example, the operator of one airport told us that not being notified of operational incidents means the airport does not have a complete picture of the safety incidents there, which limits their ability to identify trends or training needs. Conclusions FAA’s safety oversight approach is designed to use data to identify hazards, manage risks, and mitigate them before an accident occurs. FAA uses data to analyze runway incursions, and recently developed a new metric to track the risk of terminal-area incidents. However, without leveraging data to analyze all terminal-area incidents, FAA may be missing opportunities to better target the agency’s resources, and ultimately to further improve safety. For example, because FAA does not have a process to eliminate all duplicates from its excursion data, it does not have assurance that its excursion data are accurate, and it may be missing opportunities to mitigate the risks excursions pose. Similarly, taking steps to analyze ramp area incidents by identifying such incidents in its new metric would help FAA determine whether it needs to focus more on improving safety in ramp areas. In addition, establishing a plan to evaluate all of its runway and taxiway safety efforts would help FAA direct its resources toward activities and technologies proven to enhance safety and identify ways to strengthen those efforts. 
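As a rough illustration of the kind of duplicate-removal process discussed above, one common approach is to normalize the fields that should uniquely identify an excursion event and keep only the first record for each normalized key. This is a minimal sketch, not FAA's actual system; the field names are invented for illustration, not FAA's data schema.

```python
# Minimal sketch of deduplicating incident records by a normalized key.
# Field names ("date", "airport", "runway", "aircraft_id") are illustrative
# assumptions, not FAA's actual excursion-data schema.

def dedup_excursions(records):
    """Return records with duplicates (after field normalization) removed."""
    seen = set()
    unique = []
    for rec in records:
        key = (
            rec["date"].strip(),
            rec["airport"].strip().upper(),    # "hnl" and "HNL" match
            rec["runway"].strip().upper(),
            rec["aircraft_id"].strip().upper(),
        )
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

reports = [
    {"date": "2018-03-01", "airport": "HNL", "runway": "08L", "aircraft_id": "N123"},
    {"date": "2018-03-01", "airport": "hnl", "runway": "08l", "aircraft_id": "n123"},  # same event, reported twice
    {"date": "2018-04-02", "airport": "ORD", "runway": "27R", "aircraft_id": "N456"},
]
print(len(dedup_excursions(reports)))  # 2 — the duplicate report is dropped
```

In practice, near-duplicates (records differing slightly in time or spelling) would require fuzzier matching rules, but even an exact-key pass of this kind would catch the verbatim duplicates that inflate incident counts.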
Moreover, improving internal communication among FAA offices could make the annual Runway Safety Action Team meetings—a key component of FAA’s terminal area safety efforts—more effective. And last, improving external communication between air traffic managers and airport operators would help airports identify and implement needed mitigations more quickly. Recommendations for Executive Action We are making the following five recommendations to FAA: 1. The Runway Safety Manager should develop a process to identify and remove duplicate excursion records. (Recommendation 1) 2. The Runway Safety Manager should take steps to analyze data on ramp area incidents in FAA’s new surface safety metric. (Recommendation 2) 3. The Runway Safety Manager should establish a plan to assess the effectiveness of all of FAA’s terminal area safety efforts, including Airport Surface Detection Equipment, Model X (ASDE-X) and the Runway Safety Program. (Recommendation 3) 4. The Administrator of FAA should require Flight Standards to share the results of its investigations with the Runway Safety Group in a timely manner. (Recommendation 4) 5. The Administrator of FAA should require air traffic control managers to share information on terminal area incidents, such as operational incidents and pilot deviations, with airport operators in a timely manner. (Recommendation 5) Agency Comments and Our Evaluation We provided the Department of Transportation (DOT), the Department of Labor (DOL), the National Aeronautics and Space Administration (NASA), and the National Transportation Safety Board (NTSB) with a draft of this report for review and comment. In its written comments, reproduced in appendix I, DOT concurred with our recommendations. DOL, NASA, and NTSB did not provide technical comments. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 11 days from the report date.
At that time, we will send copies to the appropriate congressional committees, DOT, DOL, NASA, NTSB, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or krauseh@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. Appendix I: Comments from the Department of Transportation Appendix II: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the individual named above, other key contributors to this report were Heather MacLeod (Assistant Director); Sarah Farkas (Analyst-in-Charge); Dave Hooper; Josh Ormond; Madhav Panwar; Steven Rabinowitz; Laurel Voloder; Madeline Welter; and Elizabeth Wood.
Why GAO Did This Study The U.S. airspace system is one of the safest in the world, but incidents and near misses at and around U.S. terminal areas still occur. FAA oversees the safety of runways and taxiways and works with industry partners—including airlines, airports, pilots, and others—to improve safety in these areas. Despite FAA's continued efforts, the number of reported terminal area incidents has increased over time. GAO was asked to review various issues related to runway safety and to update its prior work on airport terminal areas. This report examines: (1) the extent to which FAA uses data to analyze terminal area incidents and (2) efforts FAA and others have implemented to improve terminal area safety, and how FAA assesses their effectiveness. GAO analyzed FAA data; interviewed officials from 10 airports selected based on high runway incident rates in the past 3 years, among other factors; and interviewed federal and industry officials. What GAO Found The Federal Aviation Administration (FAA) uses data to analyze some types of incidents in airport “terminal areas”—runways, taxiways, and ramps. For example, FAA uses data to analyze runway “incursions”—the incorrect presence of an aircraft, vehicle, or person on the runway. According to FAA data, the rate of reported runway incursions nearly doubled from fiscal years 2011 through 2018, with most of this increase due to a rise in reports of less severe incursions, or those without immediate safety consequences. However, GAO found that FAA has not identified or removed all duplicates from its data on runway “excursions”—when an aircraft veers off or overruns a runway—which limits FAA's ability to accurately analyze these incidents. Additionally, FAA does not use data to analyze incidents that occur in ramp areas—the parts of terminal areas where aircraft are prepared for departure and arrival—where injuries to workers and damage to aircraft can occur.
Without a process to leverage accurate excursion and ramp incident data, FAA may not be able to assess the risk these incidents pose to passengers, airport staff, and others. FAA, airports, and airlines have implemented multiple efforts to improve terminal area safety, but FAA has not assessed the effectiveness of many of its efforts. For example, FAA has funded multiple technologies to improve runway safety, such as Airport Surface Detection Equipment, Model X (ASDE-X)—a ground surveillance system that enables air traffic controllers to track landing and departing aircraft and alerts controllers of potential collisions. However, FAA has not assessed the effectiveness of ASDE-X. Similarly, FAA has not assessed the effectiveness of its Runway Safety Program, whereby FAA staff, along with local airport stakeholders, provide data and support to local air traffic managers to help identify and manage terminal area safety incidents. FAA has taken steps to evaluate some of its terminal-area safety efforts, such as tracking the number of runway excursions safely stopped by a lightweight, crushable concrete designed to stop or greatly slow an aircraft that overruns the runway. However, without assessing how all of FAA's efforts contribute to its goal of improving runway and taxiway safety, FAA cannot determine the extent to which it is targeting its limited resources to the most effective strategies. What GAO Recommends GAO is making five recommendations including that FAA identify and remove duplicate excursion data, develop processes to analyze ramp area incidents, and establish a plan to assess the effectiveness of its terminal area safety efforts. FAA concurred with the recommendations.
Background NNSA is responsible for managing national nuclear security missions: ensuring a safe, secure, and reliable nuclear deterrent; supplying nuclear fuel to the Navy; and supporting the nation’s nuclear nonproliferation efforts. NNSA largely relies on management and operating contractors to carry out these missions and to manage the day-to-day operations at eight sites collectively known as NNSA’s nuclear security enterprise. The Y-12 National Security Complex in Tennessee is the primary site among these with enriched uranium capabilities. Y-12’s primary mission is processing and storing uranium, processing uranium for naval reactors for the Navy, and developing associated technologies, including technologies to produce uranium-related components for nuclear warheads and bombs. According to NNSA documents, Y-12’s enriched uranium operations have key shortcomings, including an inefficient workflow, continually rising operations and maintenance costs stemming from facility age, and hazardous processes that could expose workers to radiological contamination. To address these shortcomings, NNSA developed plans to replace aging infrastructure at Y-12 and relocate key processing equipment without jeopardizing uranium production operations. History of UPF Project In 2004, NNSA initially proposed relocating Y-12’s main uranium processing equipment into a new facility referred to as the UPF. NNSA planned to construct this single, consolidated facility that would reduce the overall size of existing uranium processing facilities, reduce operating costs by using modern equipment, and increase worker and environmental health and safety. NNSA estimated in 2007 that the UPF would cost approximately $1.4 billion to $3.5 billion to design and construct. In June 2012, the Deputy Secretary of Energy approved an updated cost estimate range for the UPF of $4.2 billion to $6.5 billion, with the latter being the project’s maximum allowable cost. 
However, by August 2012, the UPF contractor concluded that the UPF as designed would not provide enough space to house all of the uranium processing and other equipment. In October 2013, an external review estimated that the UPF project could cost as much as $11 billion. In 2014, because of the high cost and scheduling concerns of a solution focused solely on constructing new buildings, NNSA established its uranium program within its Office of Defense Programs. NNSA also prepared a high-level strategic plan based on its objectives of 1) completing the UPF project with a reduced scope within the cost and schedule limits established for the original UPF project and 2) phasing out mission dependency on Building 9212. Under NNSA’s revised approach, the agency plans to transition production operations out of Building 9212 and into the re-scoped UPF or existing buildings at Y-12 after they have been upgraded as described in further detail below. Building 9212. Constructed in 1945, the building’s design predates modern nuclear safety codes. It consists of a number of interconnected buildings that contain capabilities for uranium purification and casting, among other things. One of NNSA’s key goals is to shut down the Building 9212 operations that have the highest nuclear safety risks. Because of these risks, NNSA is implementing a four-phase exit strategy to systematically phase out mission dependency on Building 9212. According to NNSA’s September 2018 implementation plan for the exit strategy, the first three phases focus on reducing inventory, system isolation and clean out, and relocating capabilities from Building 9212 to other existing Y-12 facilities or to the UPF once startup is complete. Building 9212 will then enter a phase of post-operational clean out, during which operations will be limited to simple processing, recovery, and inventory accountability.
By about 2035, management of the building will transition to DOE’s Office of Environmental Management for decontamination and decommissioning activities. Building 9215. Constructed in the 1950s, the building’s design predates modern nuclear safety codes. It consists of three main structures, and its current primary function is fabrication, which involves metal machining operations for enriched uranium. As part of the Building 9212 exit strategy, NNSA plans to move capabilities into Building 9215, such as the uranium purification and the processing of uranium metal scraps resulting from machining operations. The uranium program is managing the development and deployment of new technologies to increase the efficiency and effectiveness of these capabilities. NNSA initially intended to house these two capabilities in the UPF before re-scoping the project to meet its cost and schedule goals. According to NNSA documents, NNSA is identifying and prioritizing infrastructure investments for Building 9215 that are to ensure its reliability through the 2040s. Building 9995. Constructed in the mid-1950s, this building’s design predates modern nuclear safety codes. It consists of a laboratory with capabilities for analytical chemistry operations, which can sample enriched uranium for material assay, chemistry content, and metallography in support of production. NNSA initially intended to house the analytical chemistry capabilities to support enriched uranium processing and material characterization in the UPF before re-scoping the project to meet its cost and schedule goals. According to NNSA documents, NNSA is identifying and prioritizing infrastructure investments for Building 9995 that are to ensure its reliability through the 2040s and its continued analytical chemistry support for the UPF and Y-12 more broadly. Building 9204-2E. Constructed in the late 1960s, this building’s design predates modern nuclear safety codes. 
It consists of a three-story, reinforced concrete frame structure that includes capabilities for assembly and disassembly of enriched uranium components with other materials. According to NNSA officials, the agency installed its radiography capability in Building 9204-2E in April 2017. According to NNSA documents, NNSA is identifying and prioritizing infrastructure investments for Building 9204-2E that are to ensure its reliability through the 2040s. Highly Enriched Uranium Materials Facility (HEUMF) (also called Building 9720-82). Beginning operations in January 2010, this building was built to modern nuclear safety codes. It is a reinforced concrete and steel structure that provides long-term storage of enriched uranium materials and accepts the transfer of some legacy enriched uranium from older facilities. HEUMF is the central repository for highly enriched uranium. Figure 1 shows NNSA’s planned relocation of uranium processing capabilities out of Building 9212 and into the re-scoped UPF and existing Y-12 facilities. The figure also indicates which existing facilities will require infrastructure investments to support enriched uranium operations. Under the new approach, the re-scoped UPF will be smaller than the UPF project’s original design and will house capabilities for casting, oxide production, and salvage and accountability of enriched uranium. NNSA has stated that the re-scoped UPF is to be built for no more than $6.5 billion by the end of 2025 through seven subprojects, described below. Site Readiness. This subproject included work to relocate an existing road, construct a new bridge, and extend an existing haul road. Site Infrastructure and Services. This subproject included demolition, excavation, and construction of a parking lot, security portal, concrete batch plant, and support building. Substation.
This subproject included construction of an electrical power substation to provide power to the UPF and Y-12, replacing an existing substation at Y-12. Process Support Facilities. This subproject includes work to provide chilled water and storage of chemical and gas supplies for the UPF. Salvage and Accountability Building. This subproject includes construction of a nuclear facility for the decontamination of wastes and recovery of chemicals associated with uranium processing. Main Process Building. This subproject includes construction of the main nuclear facility to contain casting and special oxide production capabilities and a secure connecting portal to the HEUMF. Mechanical Electrical Building. This subproject includes construction of a building to house mechanical, electrical, heating, ventilation, air conditioning, and utility equipment for the Salvage and Accountability Building and Main Process Building. Requirements and Best Practices for Project Management and Technology Readiness Assessments NNSA is required to manage construction of capital asset projects with a total project cost of greater than $50 million, such as the UPF, in accordance with DOE Order 413.3B. NNSA’s Office of Acquisition and Project Management manages the UPF project under DOE Order 413.3B with funding from NNSA’s Office of Defense Programs through the uranium program. DOE Order 413.3B requires that the project go through five management reviews and approvals, called “critical decisions” (CD), as the project moves from planning and design to construction and operation. (See fig. 2.) DOE Order 413.3B also requires that, before project completion (CD-4), NNSA issue a transition-to-operations plan, which is to ensure efficient and effective management as a project becomes operational and provide a basis for attaining initial and full operational capability.
For projects likely to have an extended period of transition to the start of operations, an August 2016 memorandum from DOE requires that NNSA develop a more detailed plan to attain full operational capability. The plan must be developed earlier in the project management process— before start of construction (CD-3). In addition, NNSA must provide quarterly updates to DOE’s Project Management Risk Committee after completing construction until full operational capability is attained. The memorandum notes that DOE’s complex nuclear facilities can have significant risks that continue after project completion. These ongoing risks may impact achievement of full operational capability and thus require more efficient management. In September 2019, we reported that DOE officials stated that the August 2016 memorandum was largely created in response to experience with the Integrated Waste Treatment Unit facility at Idaho National Laboratory. This facility, which is intended to treat two forms of nuclear waste, is not operating as expected approximately 7 years after the completion of its construction. DOE Order 413.3B also states that projects with a total estimated cost of more than $100 million should have an independent cost estimate and external independent review prior to approval of the project’s performance baselines for cost and schedule (CD-2). Further, appropriations acts since fiscal year 2012 have included a limitation that prohibits the use of funds to approve CD-2 (approval of the project’s performance baselines for cost and schedule) or CD-3 (approval to start construction) for capital asset projects where total project costs exceed $100 million until a separate independent cost estimate has been developed. 
According to DOE’s standard operating procedure for conducting independent cost estimates, an independent cost estimate is prepared by an organization independent of the project sponsor—DOE-PM, in this case—using the same detailed technical and procurement information that was used to make the initial project estimate. The purpose of the estimate is to validate the project’s performance baselines—which include cost and schedule estimates—to determine these estimates’ accuracy and reasonableness. DOE-PM may use the independent cost estimate as supporting information in developing the external independent review. The external independent review is a broader analysis of the project to provide an unbiased assessment of whether NNSA can execute the project within the proposed scope, schedule, and cost commitments while meeting key performance requirements and fulfilling the mission need. Many of the federal government’s more costly and complex capital asset projects, including the UPF, require the development of cutting-edge technologies and integration of those technologies into large and complex systems. For example, DOE and NNSA use a systematic approach for assessing how far a technology has matured to evaluate the technology’s readiness to be integrated into a system—Technology Readiness Levels (TRL). This approach is intended to ensure that new technologies are sufficiently mature in time to be used successfully when a project is completed. TRLs progress from the least mature level, in which the basic technology principles are observed (TRL-1), to the highest maturity level, in which the total system is used successfully in project operations (TRL-9). DOE Order 413.3B requires that each critical technology item or system on which a project depends must be demonstrated as a prototype in an operational environment (TRL-7) before the project’s performance baselines are approved (CD-2).
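The CD-2 readiness gate described above amounts to a simple check: every critical technology must be at TRL-7 or higher before cost and schedule baselines can be approved. A hypothetical sketch of that check follows; the technology names and TRL values are invented for illustration and are not the UPF's actual critical technologies.

```python
# Sketch of the DOE Order 413.3B technology-readiness gate for CD-2:
# each critical technology must reach TRL-7 (prototype demonstrated in an
# operational environment) before baselines are approved.
# Technology names and TRL values below are invented examples.

CD2_MINIMUM_TRL = 7  # TRL scale runs from 1 (least mature) to 9 (most mature)

def cd2_gate(critical_technologies):
    """Return the critical technologies that would block CD-2 approval."""
    return [name for name, trl in critical_technologies.items()
            if trl < CD2_MINIMUM_TRL]

techs = {"casting furnace": 8, "oxide production line": 7, "salvage process": 6}
blockers = cd2_gate(techs)
print(blockers)  # ['salvage process'] — CD-2 could not be approved yet
```

As the GAO technology readiness guide notes, passing such a gate does not eliminate technology risk; it simply provides a consistent threshold for deciding when baselines may be set.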
According to our guide on evaluating technology readiness, assessing technology readiness does not eliminate the risk of relying on new technology but can identify concerns and serve as the basis for realistic discussions on how to mitigate potential risks associated with the project’s scope, for example. Requirements and Best Practices for Program Management According to the Project Management Institute, Inc. (PMI), effective program management, in addition to effective project management, is important to the success of efforts such as NNSA’s uranium program. According to PMI’s standard for program management, effective program management helps ensure that a group of related projects and program activities are managed in a coordinated way to obtain benefits not available from managing them individually. Program management involves aligning multiple components to achieve the program’s goals. Other general standards relevant to program management for the uranium program include our cost-estimating guide and schedule assessment guide. In March 2009, we issued our cost-estimating guide to provide a consistent methodology that is based on cost-estimating best practices and that can be used across the federal government for developing, managing, and evaluating program cost estimates. The methodology outlined in the guide is a compilation of best practices that federal cost-estimating organizations and industry use to develop and maintain reliable cost estimates throughout the life of a government acquisition program. According to the guide, developing accurate life-cycle cost estimates has become a high priority for agencies in properly managing their portfolios of capital assets and in decision-making throughout the process. A life-cycle cost estimate provides an exhaustive and structured accounting of all resources and associated cost elements required to develop, produce, deploy, and sustain a particular program.
The guide also states that a reliable cost estimate reflects all costs associated with a program—meaning that the estimate must be based on a complete scope of work—and the estimate should be updated to reflect changes in requirements (which may affect the scope of work). In December 2015, we issued our schedule guide, which develops the scheduling concepts introduced in our cost-estimating guide and presents them as best practices associated with developing and maintaining a reliable, high-quality schedule. According to the schedule guide, a well-planned schedule is a fundamental management tool that can help government programs use funds effectively by specifying when work will be performed and by measuring program performance against an approved plan. An integrated master schedule integrates all of the planned work in the program, the resources necessary to accomplish that work, and the associated budget, and it should be the focal point for program management. This schedule can show, for example, the completion dates for all activities leading up to major events or milestones, which can help determine if the program’s parameters are realistic and achievable. An integrated master schedule may consist of several or several hundred individual project or other activity schedules that represent the various efforts within a program. It should include the entire known scope of work, including the effort necessary from all government, contractor, and other key parties for a program’s successful execution. In addition, NNSA has various program management policies and guidance that apply to uranium program efforts that are not capital asset projects and that fall outside of DOE Order 413.3B. For example: NNSA issued a program management policy in January 2017 that defines general roles and responsibilities for the program managers for all of its strategic materials, such as uranium.
This policy broadly outlines the managers’ authority and responsibilities for managing the strategic materials; these responsibilities include developing program documentation and managing risk. NNSA issued a program management policy in February 2019 that states program managers should establish and document the requirements for scope, schedule, and cost management using a tailored approach to their program. These requirements include the development of schedule and cost estimates that cover the life cycle of a program where appropriate, among other things. NNSA’s program guidance—applicable to the uranium program and others that fall under the Office of Defense Programs—recommends the development of an integrated master schedule and states that having one supports effective management of a program’s scope, risk, and day-to-day activities. Specifically, the guidance states that during the initial phases of a program, an integrated master schedule provides an early understanding of the required scope of work, key events, accomplishment criteria, and the likely program structure by depicting the progression of work through the remaining phases. The guidance allows for tailoring of the agency’s management approach based on the particular program being managed. NNSA Reports That the UPF Project Is on Schedule and within Budget and Likely to Start Operations in 2026 According to NNSA documents and officials, the UPF project is on schedule and within budget, and NNSA has developed a plan to receive start-up authorization for UPF operations in 2025 and attain full operational capability in 2026. NNSA Reports That the UPF Project Is Currently on Schedule and within Budget NNSA documents and officials reported that the UPF project is on track to meet its cost and schedule baseline estimates, and thus is expected to be constructed for $6.5 billion by the end of 2025. 
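The schedule guide's point that an integrated master schedule can show completion dates for all activities leading up to major milestones rests on a simple dependency computation: each activity's earliest finish is the latest finish of its predecessors plus its own duration. A minimal sketch follows; the activities and durations are invented examples, not the uranium program's actual schedule.

```python
# Sketch of the milestone-date computation an integrated master schedule
# supports: earliest finish of each activity given durations (in months)
# and predecessor dependencies. Activity data below are invented examples.

def earliest_finish(activities):
    """activities: {name: (duration, [predecessors])} -> {name: finish time}."""
    finish = {}
    def resolve(name):
        if name in finish:
            return finish[name]
        duration, preds = activities[name]
        start = max((resolve(p) for p in preds), default=0)
        finish[name] = start + duration
        return finish[name]
    for name in activities:
        resolve(name)
    return finish

schedule = {
    "design":       (12, []),
    "construction": (36, ["design"]),
    "testing":      (10, ["construction"]),
    "startup":      (4,  ["testing"]),
}
print(earliest_finish(schedule)["startup"])  # 62 months after program start
```

Rolling every project and activity schedule into one computation of this kind is what lets a program manager test whether milestone dates, such as a 2025 startup authorization, are achievable given the known scope of work.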
According to DOE’s project report and NNSA officials, three of the seven UPF subprojects are complete and four are ongoing as of December 2019. When we last reported in September 2017, NNSA had completed the Site Readiness subproject. In February 2018, NNSA completed the Site Infrastructure and Services subproject—about 2 months early and about $18 million under budget. In December 2019, NNSA completed the Substation subproject—about 6 months early and $13 million under budget. As shown in table 1, by March 2018 all UPF subprojects’ formal scopes of work and cost and schedule baseline estimates were approved (CD-2), and NNSA gained approval to start construction on them (CD-3). Since establishing these cost and schedule baseline estimates, NNSA officials stated that they have not made any significant changes that would require DOE executive-level approval. According to DOE policy, changes that affect the project’s ability to satisfy the mission need or that increase costs by the lesser of $100 million or half the project costs must be approved by the DOE Deputy Secretary as DOE’s Chief Executive for Project Management. According to DOE’s project report and NNSA officials, the four ongoing subprojects were progressing on schedule and within budget as of December 2019. NNSA officials stated that they expect these subprojects to meet their respective cost and schedule performance baselines and that the overall UPF project will be constructed for $6.5 billion by the end of 2025. (See fig. 3 for photograph of Main Process Building and Salvage and Accountability Building’s construction progress as of September 2019.) NNSA Plans to Start UPF Operations in 2025 and Reach Full Operational Capability in 2026 NNSA and its contractor for Y-12 have developed a plan to receive start-up authorization for UPF operations in 2025 and then will likely attain full operational capability for the UPF in 2026, according to NNSA officials and contractor representatives. 
DOE and NNSA approved this plan, which is required by DOE policy, in February 2018. This plan outlines three major risks associated with the UPF project that NNSA will need to address so that the project can attain full operational capability: 1. Capabilities and systems integration within the UPF. Addressing this risk includes actions to ensure that all of the UPF’s systems, and the capabilities that those systems provide (e.g., casting, oxide production), can function together as designed through testing. 2. Process prove-in and design authority qualification. Addressing this risk includes actions to ensure that the UPF’s systems meet certain metrics and are qualified for mission work. Aspects of this include laboratory analysis, statistical validation of repeatability, and engineering evaluations. 3. Integration of UPF with other facilities. Addressing this risk includes actions to ensure that the UPF systems can interface with other facilities’ systems (e.g., those in Buildings 9215, 9204-2E, and 9995) as designed and that all systems are able to support full-scale operations. NNSA officials estimated that construction of the UPF will be completed in 2022. According to the plan, the UPF will then go through various preoperational testing and operational readiness reviews to demonstrate the capabilities using nonhazardous surrogate material. Following testing and readiness reviews, the UPF will gain startup authorization, go through additional testing and first use, and then attain full operational capability—also referred to as “operational release.” NNSA officials and contractor representatives stated in June 2019 that the UPF should receive startup authorization sometime in 2025, before the project’s estimated completion (CD-4) date of December 2025. These officials and representatives estimated that the UPF would attain full operational capability about a year from receiving that startup authorization—that is, sometime in 2026. (See fig. 4.) 
NNSA officials stated in October 2019 that in fiscal year 2020 they will update the plan to attain full operational capability to include a schedule with more specific time frames for startup authorization, hot functional testing, first use, and operational release, among other things. According to NNSA’s plan, attaining full operational capability for the UPF is the final step that will ultimately lead to and enable the cessation of uranium operations in Building 9212, which could then be turned over to DOE Office of Environmental Management for final disposition in 2035. NNSA Obtained Independent Cost Estimates as Required and Used Them to Inform Contractor Negotiations and Baseline Estimates NNSA followed requirements to obtain independent cost estimates for the UPF (i.e., the four largest UPF subprojects) whose total estimated costs exceeded $100 million. NNSA then used those estimates to help negotiate with contractors and inform baseline estimates. NNSA Had UPF Cost and Schedule Baseline Estimates Validated through Reconciled Independent Cost Estimates for the Four Largest Subprojects NNSA obtained independent cost estimates from DOE-PM for the four UPF subprojects for which total costs exceeded $100 million. As noted above, projects with total costs that exceed $100 million are subject to an appropriations limitation unless independent cost estimates are obtained, and DOE policy requires such estimates for such projects. DOE-PM, an office independent from NNSA and its management of the UPF project, conducted the independent cost estimates for the four larger subprojects: the Mechanical Electrical Building, Process Support Facilities, Salvage and Accountability Building, and Main Process Building subprojects. In addition, NNSA officials stated that they obtained independent reviews for the three subprojects for which costs did not exceed $100 million. 
DOE policy does not require independent cost estimates for projects whose total estimated costs are less than the $100 million threshold. However, an NNSA policy states that NNSA should obtain an independent cost estimate or independent cost review to validate a project’s cost baselines for those projects for which estimated costs are between $20 million and $100 million. NNSA organized the independent cost estimates for the four larger subprojects so that some of the independent cost estimates included work for more than one subproject. Specifically, DOE-PM completed two estimates—one in March 2016 and one in December 2016—that included site preparation work and long lead procurements for the Salvage and Accountability Building and Main Process Building subprojects. In November 2016, DOE-PM completed the independent cost estimate for the Mechanical Electrical Building, which was the only estimate to include a single UPF subproject. NNSA officials explained that they handled the estimate for this subproject differently because work for the Mechanical Electrical Building could be separated easily from the other subprojects, and it was largely designed as a commercial-grade building. Lastly, in November 2017, DOE-PM completed the independent cost estimate for the majority of the work for the Process Support Facilities, Salvage and Accountability Building, and Main Process Building subprojects. NNSA officials stated they organized the independent cost estimates in this way to meet DOE requirements and appropriations limitations but still be able to begin work on the aspects of the overall UPF project that need to be completed earliest. DOE-PM conducted the four UPF subprojects’ independent cost and schedule estimates using our cost estimating and scheduling best practices, according to DOE-PM’s independent cost estimate reports. DOE-PM reviewed the project’s key cost drivers—elements whose sensitivity significantly affects the total project cost. 
The DOE-PM team then established independent estimates for those cost drivers, which may include vendor quotes for major equipment and detailed estimates for other materials, labor, and subcontracts. The team also prepared an independently generated resource-loaded schedule that allowed them to check for adequate funding compared with the project’s funding profile developed by the project team. DOE-PM’s analyses are based on its review of the UPF project’s work breakdown structure and associated documents, which include all of the activities that make up the project’s scope. DOE-PM also compared the UPF project estimates with our cost estimating and scheduling best practices, according to DOE-PM’s independent cost estimate reports. For example, DOE-PM’s November 2017 report found that the three larger UPF subprojects’ cost and schedule estimates partially met the best practices, and it recommended changes to the contractor to address the estimates that did not. DOE-PM reconciled the results of its independent cost estimates with the initial project estimates, as required by DOE’s standard operating procedure and NNSA’s business operating procedure for conducting independent cost estimates. During the reconciliation, DOE-PM worked with the UPF project team to adjust both the initial project estimates and its own independent cost estimates to correct any errors or misinterpretations of project requirements, according to the independent cost estimate reports. Under DOE’s and NNSA’s independent cost estimate procedures and according to DOE-PM officials, any remaining differences should be identified and explained, but estimates should not be changed. DOE-PM drew from the independent cost estimates for the Mechanical Electrical Building subproject to complete an external independent review of that subproject in November 2016. 
Then, DOE-PM drew from the independent cost estimates that included work for the Main Process Building, Salvage and Accountability Building, and Process Support Facilities subprojects to complete its external independent review of the UPF project in March 2018. NNSA Used Information from the Independent Cost Estimates and External Independent Reviews to Inform the UPF’s Cost and Schedule Baseline Estimates NNSA officials stated that they used information from DOE-PM’s independent cost estimate and external independent review reports to help negotiate remaining work with the contractor and finalize the overall UPF project’s baseline estimates before starting construction. In June 2018, NNSA prepared a strategy to guide its negotiation of the remaining UPF project work that had not yet been priced with the contractor. Based on our review of NNSA’s negotiation strategy, we found that NNSA used DOE-PM’s independent cost estimate and external independent review reports to negotiate at least 14 of the 22 major and minor issues identified for discussion. These 14 issues included, for example, reducing concrete and freight direct costs, reducing the margin added to cover any increase in design scope, reducing subcontractor indirect costs, and increasing accuracy of other cost and schedule estimates. DOE approved NNSA’s cost and schedule baseline estimates (CD-2) and start of construction (CD-3) in March 2018 for three UPF subprojects. (See table 2 for the recommended cost and schedule baselines from the external independent review report and the final cost and schedule baseline estimates for all UPF subprojects.) In five of the seven subprojects, the final cost baseline estimates were close to or below the recommended baselines from DOE-PM’s external independent review. Also, in four of the seven subprojects, the final schedule baseline estimates were close to the recommended baselines. 
According to NNSA officials, the UPF project final baseline cost estimate includes cost contingency, and the December 2025 final schedule baseline estimate includes a year of schedule contingency. NNSA officials stated that, if necessary, they could use available funds to expedite the schedule. NNSA officials also expressed confidence that the UPF project will meet its goal of construction for $6.5 billion by the end of 2025. NNSA Has Made Progress Implementing the Uranium Program’s Scope of Work and Recently Developed a Program Schedule and Cost Estimate Since we last reported in September 2017, NNSA identified and made progress in implementing the uranium program’s scope of work and developed an integrated master schedule and life-cycle cost estimate—key management information for the program. The uranium program’s integrated master schedule extends through fiscal year 2035, and the life-cycle cost estimate includes the $7.4 billion in program costs from fiscal years 2016 through 2026. NNSA Has Identified and Made Progress in Implementing the Uranium Program’s Scope of Work Since we last reported in September 2017, NNSA identified the uranium program’s scope of work and made progress in carrying out key activities. Specifically, NNSA identified the uranium program’s scope of work, as required under NNSA program management policy and as a leading practice we identified in our cost estimating and schedule guides. According to NNSA documents we reviewed and officials we interviewed, NNSA developed the uranium program’s scope of work in a work breakdown structure, which defines in detail the work or activities necessary to accomplish the program’s objectives. NNSA officials stated that the uranium program’s scope of work includes the UPF project as well as the capabilities and other activities necessary for the overall modernization effort that are not part of the UPF project. 
NNSA made progress implementing the following three main areas of the uranium program’s scope of work: Process Technology Development. Since we last reported in September 2017, NNSA’s uranium program has made progress in three of the four process technology projects that it manages to develop new uranium processing capabilities. According to NNSA officials, these capabilities are not included in the UPF project but are necessary to complete the suite of uranium capabilities required to meet weapons program needs. NNSA approved the electrorefining project’s cost and schedule performance baselines and start of construction (CD-2/3) in February 2019. This project, along with the direct chip melt projects discussed further below, is designed to provide a capability that was scoped out of the UPF project. Specifically, the electrorefining project is to provide the capability to purify uranium metal. NNSA officials stated that the calciner project will have its cost and schedule baselines and start of construction approved (CD-2/3) in May 2020. This project is to provide the capability to convert uranium-bearing solutions to uranium oxide (a dry solid) so that it can be stored pending further processing in the future. The project will be located in Building 9212 and supports the exit of that building by enabling the processing of certain uranium-bearing solutions (such as the solutions resulting from cleaning out the building’s pipes and vessels) into a dry solid oxide that can be stored pending further processing. According to NNSA officials, the direct chip melt projects include two related efforts—a front-loading furnace and a bottom-loading furnace—that will provide the capability to process uranium scrap metal. Officials stated that the front-loading furnace direct chip melt project received approval to start work in September 2019 and has an estimated project completion of May 2021. 
This will provide near-term capability to process uranium scrap metal until the bottom-loading furnaces are designed and constructed. Officials said NNSA initiated the bottom-loading furnace direct chip melt project in July 2019 and expects to start construction in January 2021. Because the direct chip melt projects fall below the $50 million threshold for management under DOE Order 413.3B, they do not have CD dates. However, NNSA officials stated they will manage and oversee the bottom-loading furnace project under the Office of Defense Programs’ authorization-to-proceed memorandum and follow the sound project management principles outlined in the order. NNSA officials stated that the agency requires an oxide-to-metal conversion capability. In June 2019, NNSA issued a Notice of Intent to enter into a sole-source contract to provide the uranium oxide to metal conversion capability. According to NNSA officials, this potential sole-source contract is a near-term strategy that could cover any gap caused by phasing out operations in Building 9212. According to NNSA, under this contract the contractor could provide conversion services in 2023, effectively covering any gap caused by phasing out conversion operations in Building 9212. NNSA officials stated that the agency intends to continue pursuing the direct electrolytic reduction technology to provide the oxide-to-metal conversion capability after the sole-source contract, but the technology has not progressed since we last reported in 2017. Extended Life Programs. In December 2017, NNSA developed the implementation plan for the extended life programs for Buildings 9215 and 9204-2E. NNSA also developed an extended life program for Building 9995 in November 2017 and the implementation plan for that program in September 2018. NNSA updated both of these implementation plans in September 2019. 
Further, in September 2018, NNSA developed an implementation plan for its strategy to stop operations in Building 9212 and begin post-operations clean-out activities. These implementation plans identify a specific scope of work, and the necessary funding, that NNSA must execute in order to extend the operational lives of Buildings 9215, 9204-2E, and 9995 through the 2040s. Reducing Material at Risk in Older Buildings. Since we last reported in September 2017, NNSA has made progress in its efforts to move uranium materials out of older facilities and into the HEUMF. Specifically, NNSA officials said in November 2019 that they were about 77 percent done with this effort and had moved more than 50 metric tons of uranium out of older facilities and into the HEUMF since fiscal year 2015. In June 2019, NNSA officials said that their current strategy focuses on incorporating near-just-in-time inventory practices and further reducing material at risk by 2023. According to NNSA officials, this strategy is to further minimize the amount of material that is staged in Y-12’s older buildings. Also, according to NNSA officials, NNSA achieved a target working inventory of material in Building 9215 in 2016 and in Building 9204-2E in 2019. NNSA officials stated that, as of November 2019, they were on schedule to complete the remaining efforts by their estimated time frames. NNSA officials stated that the program’s scope of work includes elements for which additional analyses may be required and that any additional program work identified by those analyses will be incorporated into the scope of work, as appropriate. For example, NNSA identified the additional environmental and seismic analyses necessary to develop the scope of work for addressing certain structural deficiencies in Buildings 9215 and 9204-2E. 
NNSA is under a court order to complete additional environmental and seismic risk analyses following a 2014 update in the seismic hazard map for the area, which showed a greater risk than the previous version. According to Defense Nuclear Facilities Safety Board officials, in response to the board’s 2015 report, NNSA identified its approach for re-evaluating the facilities’ conditions and risks and addressing some of the board’s seismic-related concerns. According to board officials, NNSA plans to start the re-evaluation of these structures in early fiscal year 2020. NNSA officials stated that if the additional analyses identify additional necessary work for the uranium program, NNSA will update the scope of work and revise the extended life program implementation plans to include that work. NNSA Has Developed an Integrated Master Schedule and a Life-Cycle Cost Estimate to Manage Its Uranium Program In December 2019, NNSA developed an integrated master schedule through fiscal year 2035 and a life-cycle cost estimate for the program through fiscal year 2026 that includes over $850 million in costs in addition to the UPF project. Successful management of federal acquisition programs, such as NNSA’s uranium program, partly depends on developing this key management information, as stated in our cost estimating and schedule guides. In September 2017, we found that NNSA had not yet developed an integrated master schedule or life-cycle cost estimate for the uranium program and recommended that NNSA set a time frame for doing so. NNSA agreed with this recommendation and has made progress in implementing it. A complete scope of work is required to develop an integrated master schedule and life-cycle cost estimate. (See fig. 5.) 
NNSA Developed an Integrated Master Schedule to Help Manage Its Uranium Program In December 2019, NNSA developed an integrated master schedule based on the uranium program’s scope of work to help manage its uranium program, as recommended in NNSA’s program guidance as well as our schedule guide and other best practices. According to PMI’s Program Management Standard, a program-integrated master schedule is the top-level planning document that includes the schedules of individual program elements and defines the dependencies among those elements that are required to achieve the program’s goals. According to NNSA officials, NNSA included all of the uranium program’s capabilities and elements that make up its scope of work, as well as other work that may affect the program, through fiscal year 2035. NNSA officials stated that the schedule includes the key milestones for each uranium program capability and element, such as project completion (CD-4) and operational release, since these key milestones are important for tracking the uranium program’s critical path of activities and for overall program management. NNSA officials stated that they will start reporting the uranium program’s progress against this integrated master schedule beginning in 2020. NNSA officials stated that they expect the integrated master schedule to be iterative and that they will update it to capture any changes or additions to the program’s scope of work. NNSA’s Life-Cycle Cost Estimate Identified Additional Costs for Uranium Program In December 2019, NNSA developed a life-cycle cost estimate through fiscal year 2026 for the uranium program, as called for in our cost estimating guide and other best practices. NNSA estimated that the uranium program will spend a total of approximately $7.4 billion from fiscal years 2016 through 2026 to support its uranium processing modernization efforts. 
Specifically, NNSA officials stated that the life-cycle cost estimate includes $6.5 billion in UPF project costs and over $850 million in program costs that include developing the uranium processing capabilities that are not part of the UPF project, integrating those capabilities with the UPF, improving the infrastructure of existing buildings, and transitioning out of Building 9212. NNSA officials stated that they estimated uranium program life-cycle costs from fiscal years 2016 through 2026 because they could not accurately estimate some of the activities in the program’s scope of work that are enduring for the nuclear security enterprise rather than specific projects with finite schedules for construction. According to our cost-estimating guide, a reliable cost estimate reflects all costs associated with a program’s scope of work, and the estimate should be updated to reflect any changes in requirements—that is, a life-cycle cost estimate can be iterative. NNSA officials stated that they expect to update the life-cycle cost estimate with additional program costs, once known, and will include any additional future scope added to the program. Schedule milestones and cost estimates included in NNSA’s integrated master schedule and life-cycle cost estimate for the uranium program are summarized in table 3. We are encouraged that NNSA may be able to better manage the day-to-day activities of the uranium program and mitigate any risks associated with integrating the UPF project with other aspects of the program through its development of key program management information—a scope of work, an integrated master schedule, and a life-cycle cost estimate. Successful program management through the life of a program depends in part on all of these efforts and may provide decision makers such as Congress with needed information on the program’s complete scope of work, key events, and expected long-term program costs. 
Agency Comments We provided DOE and NNSA with a draft of this report for review and comment. NNSA provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Energy, the Administrator of the National Nuclear Security Administration, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or trimbled@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix I. Appendix I: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the individual mentioned above, Jonathan Gill (Assistant Director), Elizabeth Luke (Analyst in Charge), Danny Baez, John Bauckman, Brian Bothwell, Juaná Collymore, Jennifer Echard, Justin Fisher, Juan Garay, William Gerard, Cynthia Norris, Dan Royer, and Kiki Theodoropoulos made key contributions to this report.
Why GAO Did This Study A supply of enriched uranium is crucial to support the nation's nuclear weapons stockpile and the U.S. Navy, but the infrastructure of several U.S. uranium-processing facilities is outdated. In 2014, NNSA began plans to meet the nation's uranium needs by redirecting processing capabilities to the UPF and to other existing buildings NNSA plans to upgrade at Y-12 in Oak Ridge, Tennessee. The National Defense Authorization Act for Fiscal Year 2013, as amended, includes a provision for GAO to periodically review the UPF. Also, a Senate report accompanying the National Defense Authorization Act bill for fiscal year 2012 provides for GAO to review the independent cost estimates for the UPF. This report, which is GAO's sixth on the UPF, examines (1) the status of the UPF project and plans for starting UPF operations; (2) the extent to which NNSA has followed requirements to obtain independent cost estimates for the UPF, and how NNSA has used information from those estimates; and (3) the extent to which NNSA has made progress in developing uranium program management information since GAO's September 2017 report. GAO reviewed project and program documents on planning, schedule, cost, and implementation, and interviewed program officials. What GAO Found National Nuclear Security Administration (NNSA) documents and officials reported that the new Uranium Processing Facility (UPF) is on schedule and within budget. As of December 2019, three of the seven UPF subprojects were complete, and four were ongoing. NNSA officials told GAO they estimate that construction of the UPF will be complete in 2022 and that they expect to meet NNSA's goal of completing the UPF project for $6.5 billion by the end of 2025. As required, NNSA and its contractor developed a plan for starting operations at the UPF, which officials stated will likely occur in 2026. 
According to NNSA's plan, attaining full UPF operational capability will be the final step to enable NNSA to stop certain operations in Building 9212—the oldest building with the highest nuclear safety risk at the Y-12 National Security Complex (Y-12)—and turn it over to the Department of Energy (DOE) for final disposition by 2035. In managing the UPF project, NNSA obtained independent cost estimates for the four largest UPF subprojects whose total estimated costs exceeded $100 million. Such estimates are required by DOE policy and to satisfy limitations in appropriations laws. Moreover, based on its review of NNSA documents, GAO found NNSA used those estimates to help inform the UPF's approved cost and schedule baseline estimates. NNSA officials stated that they used information from the independent cost estimate and other sources to help negotiate remaining work with the contractor and finalize the overall UPF's baseline estimates before starting construction. Since GAO last reported on NNSA's broader uranium program in September 2017, NNSA identified and made progress in implementing the uranium program's scope of work that includes capabilities and other activities that are not part of the UPF project but are needed for the weapons program. Specifically, NNSA made progress in the following areas: 1. developing process technologies that are expected to increase the efficiency and effectiveness of certain uranium processing capabilities; 2. investing in infrastructure to extend the operational lives of older uranium facilities; and 3. reducing the amount of uranium stored and used in these older uranium facilities. NNSA has also made progress in implementing GAO's 2017 recommendation to develop key management information for the uranium program. Specifically, NNSA developed an integrated master schedule covering the scope of work for the program through fiscal year 2035 and a life-cycle cost estimate that includes program costs through fiscal year 2026. 
NNSA estimated that, in addition to completing the UPF project for $6.5 billion, the uranium program will spend over $850 million from fiscal years 2016 through 2026 to support modernizing other needed uranium processing capabilities and transitioning out of Building 9212.
Background VA provides or pays for nursing home care through three separate programs, one for each of the nursing home settings in which VA provides or pays for care. In general, the three settings provide similar nursing home care, in which veterans receive skilled nursing care, recreational activities, and other services. However, some of the nursing homes may provide care to veterans on a short-term basis, such as rehabilitation after a hospitalization for a period of 90 days or less (“short stay”), or on a long-term basis, which is a period of 91 days or more (“long stay”). Further, officials told us that some of these homes may also provide certain special needs care for a limited number of residents, such as dementia or rehabilitative care, which may require additional specialized equipment or trained staff. Federal oversight of care provided to veterans within the three settings is conducted by VA only or a combination of VA and CMS. See table 1 for key characteristics on the three nursing home settings. Depending on a veteran’s eligibility status, VA pays the full or partial cost of nursing home care in each setting. For example, VA is required by law to provide the full cost of nursing home care for veterans who need nursing home care for a service-connected disability—which is an injury or disease that was incurred or aggravated while on active duty—and for veterans with service-connected disabilities rated at 70 percent or more. For all other veterans, VA-provided nursing home care is based on available resources. Veterans and their families are responsible for making decisions about nursing home care that will best meet their needs. At the national level, VA provides information about nursing homes on its Access to Care website; according to VA, the website is intended to help inform veterans and their families about the quality of care in nursing homes. 
According to VA central office officials, the responsibility for helping veterans make decisions about nursing home care is decentralized to local VAMCs. In consultation with veterans and their families, VAMC social workers and clinical care providers can discuss factors such as the veteran’s eligibility for care in each setting, health needs, the type of care provided at different homes, space availability, and the veteran’s geographic preference. VAMC staff may also encourage veterans to take a tour of the prospective home. Oversight of Nursing Home Quality VA models its oversight of nursing home services provided to veterans on the methods used by CMS. CMS defines the quality standards that approximately 15,600 nursing homes nationwide must meet in order to participate in the Medicare and Medicaid programs. To monitor compliance with these standards, CMS contracts with state survey agencies to conduct inspections of each home not less than once every 15 months. During these inspections the state survey agency might identify deficiencies—or instances in which the nursing home does not meet an applicable quality standard. To address identified deficiencies, CMS generally requires nursing homes to implement corrective action plans. CMS also monitors—by conducting observational assessments of state agencies during inspections or conducting its own comparison inspections on a sample of homes each year—the state agencies that inspect CNHs to ensure that these inspections accurately identify whether the homes meet quality standards. In addition, CMS collects data on various clinical quality measures and calculates nursing home staffing ratios. CMS assigns each nursing home ratings in three components—inspections, quality measures, and staffing ratios—and an overall quality rating. CMS places the greatest weight on inspections in its calculations of each home’s overall quality rating. 
CMS publicly reports a summary of the information it collects on the quality of nursing homes on its Nursing Home Compare website, which uses a five-star quality rating system. As we previously reported, this website facilitates public comparison of nursing home quality.

Within VA central office, the Office of Geriatrics and Extended Care is responsible for overseeing the quality of nursing home care provided to veterans in each of the three settings—CLCs, SVHs, and CNHs. The key mechanism VA uses to assess quality in each of these settings is regular inspections—generally occurring annually—that determine the extent to which homes meet relevant quality standards. VA’s use of inspections and other methods to ensure the quality of care in each of the three nursing home settings differs:

CLCs. VA owns, operates, and oversees the quality of CLCs, and conducts regular unannounced inspections to determine the extent to which CLCs meet quality standards. VA central office contracts with the Long Term Care Institute to conduct these inspections, and VA central office reviews the results of all inspections. CLCs receive an initial inspection when they open and then periodic, unannounced inspections thereafter. The frequency of these inspections depends on the number and severity of deficiencies identified during the prior year’s inspection, but they generally occur every 11 to 13 months. CLCs are required to develop and implement corrective action plans for each deficiency identified that detail how it will be addressed. VA central office approves these plans, and the VISN and VA central office monitor the CLC’s actions until each deficiency is addressed. Per VA’s contract, VA is to monitor the Long Term Care Institute to ensure that inspections are conducted within required timeframes and to conduct quarterly assessments of the contractor’s performance, among other things.
In addition, for each CLC, VA also collects information on quality measures and staffing ratios and uses this information, along with the inspection results, to assign a star rating from 1 to 5 stars. In June 2018, VA central office consolidated the ratings for all of the individual CLCs—modeled after CMS’s Nursing Home Compare—into its Access to Care website.

SVHs. States own and operate SVHs and, as a result, in most cases SVHs are inspected by state agencies to determine the extent of their compliance with state requirements. About two-thirds of SVHs are also inspected by CMS; however, VA is the only entity that conducts annual inspections of all SVHs. Although VA does not exercise any supervision or control over the administration, personnel, maintenance, or operation of any state home, VA conducts these annual reviews for all SVHs and is prohibited from making payments to SVHs until it determines that they meet applicable quality standards. VA central office contracts with Ascellon to conduct these inspections and reviews the results of the inspections. The inspections first occur when an SVH initially seeks to become eligible for VA payments, and, once the SVH is eligible, unannounced inspections occur on an annual basis to verify that an SVH is eligible to continue to receive VA payments. For these annual inspections, the contractor generally cites deficiencies when SVHs are not in compliance with applicable quality standards. SVHs develop and implement corrective action plans for each deficiency identified, and the VAMC director approves the plan. Per VA’s contract, VA is to monitor the contractor’s performance annually, for example, to ensure that inspections are conducted within certain timeframes. VA’s Office of Geriatrics and Extended Care maintains a database of all corrective action plans, and VISN and VAMC staff monitor the SVHs’ actions until each deficiency is addressed.
VA also collects VA-prescribed quality measure and staffing data from SVHs as part of its survey process. However, VA does not currently assign a quality rating to SVHs.

CNHs. CNHs can be publicly or privately owned and operated, and CMS provides federal oversight for all CNHs that receive Medicare or Medicaid payments. VA requires CNHs under contract to be certified by CMS, and, unlike the other two settings, VA is not required to conduct regular inspections of CNHs. Instead, VA requires VAMC staff to conduct veteran care assessments on a monthly basis and annually review information CMS collects on the homes’ quality, including CMS inspection results, to evaluate whether to initiate or continue a contract with a CNH. The annual reviews use seven criteria established by VA’s Office of Geriatrics and Extended Care, including whether the CNH’s total number of health deficiencies from the most recent CMS inspection is twice the average of the state in which it is located. According to VA officials, CNHs that fail to meet four out of VA’s seven criteria during the annual reviews of CMS data are excluded from participation in its CNH program unless the VAMC seeks a waiver from VA central office to allow the home to participate. If VAMC staff are considering seeking a waiver to allow a CNH to continue participating in the CNH program, or have any other concerns about a home, they have the option of conducting their own onsite reviews of the home to assess care quality.

Utilization of and Expenditures for VA Nursing Home Care Increased from Fiscal Year 2012 through 2017, with Larger Increases Expected in Future Years

Utilization of VA Nursing Home Care

Our analysis of VA data shows that veterans’ utilization of VA nursing home care—across CLCs, SVHs, and CNHs—increased 3 percent from fiscal year 2012 through 2017, from an average daily census of 37,687 to 38,880 veterans.
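As an illustrative plausibility check (not part of GAO's or VA's methodology), the rounded percent changes cited in this section's utilization discussion can be reproduced from the average-daily-census figures reported in this section:

```python
# Illustrative check only: reproduce the rounded percent changes cited in this
# section from the reported average-daily-census figures.
def pct_change(old, new):
    """Percent change from old to new, rounded to the nearest whole percent."""
    return round((new - old) / old * 100)

# All-settings average daily census: FY2012, FY2017, and the FY2022 projection.
print(pct_change(37_687, 38_880))  # 3 (percent increase, FY2012 to FY2017)
print(pct_change(38_880, 45_279))  # 16 (projected percent increase by FY2022)

# CNH average daily census: FY2012 and FY2017.
print(pct_change(6_875, 9_251))    # 35 (percent increase, FY2012 to FY2017)
```

The computed values match the 3, 16, and 35 percent figures reported in this section.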
VA projects that nursing home utilization will increase another 16 percent, to an average of 45,279 veterans per day by fiscal year 2022, with varying increases projected for each of the nursing home settings. (See fig. 1.) Moreover, VA projects that overall demand for VA nursing home care will continue to increase through 2034, driven by the aging of the cohort of Vietnam War veterans. VA projects that Vietnam veterans will increasingly rely on VA’s health care system for care and will use more health care services, including nursing home care.

As figure 1 shows, SVHs accounted for the largest percentage (53 percent) of the average number of veterans who received nursing home care each day in fiscal year 2017. However, the number of veterans in CNHs has increased and is projected to continue to increase. For example, the average number of veterans receiving nursing home care in CNHs increased 35 percent from fiscal year 2012 to 2017, from an average of 6,875 to 9,251 per day. Over the same period, the number of veterans in CLCs fell 9 percent, and in SVHs it fell 1 percent. VA officials told us that they are prioritizing the use of CLCs for short-term care, and that CNHs have the greatest capacity to meet the future long-term needs of veterans. VA projects that by 2034 the number of veterans receiving nursing home care in these homes will exceed 17,000. In addition, VA projects that demand for nursing home care in CLCs and CNHs will decrease after 2034, and VA has not projected care in SVHs beyond 2022.

VA officials also said that VA has limited flexibility to expand the number of beds in CLCs and SVHs to accommodate the projected number of veterans needing care. While VA expects to continue placing more of the veterans needing nursing home care into CNHs, officials noted some challenges contracting with these homes. Specifically, VA central office officials said that about 600 CNHs had decided to end their contracts with VA over the last few years for a variety of reasons.
For example, officials from four of the VAMCs we interviewed told us about CNH concerns that contract approvals can take 2 years, homes have difficulties meeting VA staff requirements, and VA’s payment rates were very low. Officials said provisions in the VA MISSION Act of 2018 may alleviate some of these difficulties. Specifically, the Act consolidates various VA community care programs into the Veterans Community Care Program and authorizes VA to enter into veterans care agreements with certain providers, including nursing homes. In contrast to contracts, such agreements may not require providers to meet certain wage and benefit requirements. Officials told us that they are in the process of replacing CNH contracts with veterans care agreements, which may alleviate some of those challenges.

In addition, VA officials told us that most nursing homes—including homes in each of the three settings—have limited capacity to serve veterans with special needs, such as those needing dementia, ventilator, or behavioral care. For example, they said that homes may not have any of the necessary specialized equipment or trained staff, or may not have as many of these beds as needed, to meet certain veterans’ special care needs. VA officials told us that they are working to expand the availability of special needs care in each of the three settings.

Expenditures for VA Nursing Home Care

Our analysis of VA data also shows that VA nursing home care expenditures have increased in recent years, reflecting increases in the number of veterans receiving such care. Specifically, VA’s nursing home expenditures across all three settings increased 17 percent from fiscal years 2012 through 2017, from $4.9 billion to $5.7 billion. These expenditures are expected to increase to $7.3 billion in fiscal year 2022 as utilization is projected to increase.
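The expenditure trend can be checked in the same illustrative way. Note that the inputs below are the rounded billion-dollar totals reported above, so the computed fiscal year 2012 to 2017 change comes out near 16.3 percent rather than the reported 17 percent, which is presumably derived from unrounded totals:

```python
# Illustrative check using the rounded expenditure totals (in billions of
# dollars) reported above; rounded inputs make the results approximate.
def pct_change(old, new):
    """Unrounded percent change from old to new."""
    return (new - old) / old * 100

fy2012, fy2017, fy2022_projected = 4.9, 5.7, 7.3

print(f"FY2012 to FY2017: {pct_change(fy2012, fy2017):.1f}%")
print(f"FY2017 to FY2022 (projected): {pct_change(fy2017, fy2022_projected):.1f}%")
```

With these rounded inputs, the first figure is approximately 16.3 percent and the projected further increase through fiscal year 2022 is approximately 28 percent.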
VA officials told us that expenditures for nursing home care are projected to increase due to the rising costs of care as well as higher utilization of services. (See fig. 2.) Of the three settings, CLCs accounted for the largest share of VA nursing home expenditures; however, this reflects differences in the costs of care and the extent to which VA pays for these costs in each of these settings:

For CLCs, VA pays the full cost of care for veterans in these homes and, according to VA officials, VA expenditures for care provided in CLCs are greater compared to the other settings, because CLCs are able to provide acute care that requires higher staffing levels and more specialized equipment. In addition, VA officials indicated that CLC expenditures also include the overhead costs of being associated with VAMC hospitals.

For SVHs, 80 percent of veterans receive VA’s partial daily rate, which covers only about a quarter of their care costs. For example, in fiscal year 2017, VA’s average SVH per diem was $106 for veterans without eligible service-connected disabilities. VA also pays the full cost of care for the remaining 20 percent of veterans with service-connected disabilities. In fiscal year 2017, the full rate for these veterans was $397 per day.

For CNHs, VA pays the full cost of care for veterans; however, more of these veterans receive long-term care, at a lower cost per day, than the short-term care that many veterans receive in CLCs, such as for rehabilitation after surgery, at a higher cost per day.

As a result of these differences, in fiscal year 2017, VA paid, on average, $1,074 per day per veteran for care in CLCs, $268 for CNHs, and $166 for SVHs.

VA Contractors Completed Required Nursing Home Inspections, but VA Has Opportunities to Enhance Its Oversight of the Process

During the contract year completed in 2018, VA’s two contractors conducted the required annual inspections of CLCs and SVHs to determine the extent to which the homes met quality standards.
However, VA has opportunities to enhance its oversight of the contractors’ inspections by regularly monitoring both contractors’ performance inspecting CLCs and SVHs through observational assessments and by citing all SVH deficiencies. Although VA’s plans call for quarterly observational assessments, they have not been consistently conducted and documented. Similarly, VA has not provided guidance for the optional onsite reviews of CNHs that VAMCs may perform, thus limiting their potential impact.

VA’s CLC Contractor Conducted Required Annual Inspections, but VA Did Not Conduct Quarterly Monitoring of Contractor Performance

Our review found that during the contract year completed in 2018, VA’s CLC contractor performed the required annual inspections for 126 CLCs. (See table 2.) Through these inspections, VA’s contractor determined the extent to which each CLC met applicable quality standards and issued deficiencies when standards were not met. The most common areas of deficiencies were those in which 1) the facility did not provide quality care for its residents, for example, in its treatment and prevention of pressure ulcers or managing its residents’ pain; 2) the facility did not adequately prevent and control infections, for example, by providing residents influenza and pneumococcal immunizations; and 3) the facility did not provide adequate care and services to sustain the highest possible quality of life for its residents, for example, by providing residents unable to carry out activities of daily living with adequate assistance to maintain good nutrition, grooming, and personal and oral hygiene. (See appendix I for more information on the types of deficiencies identified.) To address deficiencies, VA required CLCs to produce corrective action plans and tracked the CLCs’ progress until the deficiencies were resolved.
In addition, for some of the most common deficiencies among CLCs, VA officials said VA took steps such as developing additional VAMC policies to facilitate improvement. For example, to reduce the number of CLC deficiencies related to pain management and improve CLCs’ performance in this area, VA officials said they developed specific guidelines for CLCs to use to assess pain in patients with dementia who were unable to provide numeric pain scores.

While VA has monitored and determined that CLC inspections occurred as stipulated in its contract and tracked the results of the inspections, it has an opportunity to enhance its oversight. According to its contract, VA will monitor contractor performance on a quarterly basis, and VA central office officials told us their intention has been to meet this stipulation by observing the contractor as it conducts some inspections—an approach consistent with CMS’s inspection oversight process. However, VA officials told us that they have not been completing these observations quarterly and did not conduct any observations for the April 2017 to April 2018 contract year. VA officials said they had not performed this quarterly observation due to competing demands. For example, the three-person team at VA central office responsible for CLC oversight has overseen a number of recent initiatives, including the rollout of CLC quality ratings in 2018. Officials also told us they conducted one observation for the current contract year in December 2018 (during the course of our review). However, we were not able to confirm the December 2018 observation or any other observations of the CLC inspections because VA has not documented the results. A VA official said that developing an approach for documenting the quarterly observations is something VA needs to work on.
VA’s failure to monitor the CLC contractor’s performance through observational assessments is inconsistent with its own goals of assessing the contractor’s performance quarterly and modeling its oversight after CMS’s approach to its own contractors’ inspections. It is also inconsistent with federal internal control standards that state that management should establish and operate monitoring activities to monitor the internal control system and evaluate the results. By not conducting these quarterly observations for more than a year, VA does not know whether, or to what extent, the contractor is effectively assessing CLC compliance with quality standards and is unable to hold the contractor accountable for its inspections. Without effective monitoring of the contractor’s performance inspecting CLCs, VA risks that quality concerns in some CLCs could go overlooked, placing veterans at risk.

VA’s SVH Contractor Conducted Required Annual Inspections of SVHs; VA Has Opportunities to Enhance This Oversight

Our review found that during the contract year completed in 2018, VA’s SVH contractor performed required annual inspections for all 148 SVHs. (See table 3.) As with CLCs, VA’s SVH contractor determined through these inspections the extent to which each SVH met applicable quality standards and cited deficiencies when they were not met. The most common areas of deficiencies were those in which 1) the facility’s physical environment did not adequately protect the health and safety of its residents, for example, by ensuring their safety from fires; 2) the facility did not provide quality care for its residents, for example, by adequately managing their pain; and 3) the facility did not assess residents’ health sufficiently, for example, within 14 days of residents’ admission and on an annual basis thereafter. (See appendix II for more information on the types of deficiencies identified.)
To address deficiencies, VA required SVHs to produce corrective action plans and tracked the SVHs’ progress until they were resolved. In addition, VA officials said they took steps to address deficiencies common among SVHs. For example, to reduce SVH deficiencies related to physical environment standards for fire safety and improve SVH performance in this area, VA central office staff told us they held SVH town halls with a fire safety engineer and created reference guides for SVH administrators about regulatory changes in fire safety codes.

However, while VA has monitored that its contractor conducted the required SVH inspections and tracked the results of these inspections, VA has not monitored the SVH contractor’s performance of these inspections through regular observational assessments to ensure that contractor staff effectively determine whether SVHs are meeting required standards. Specifically, VA officials told us they intended to observe the SVH contractor’s inspections on a quarterly basis, which would be consistent with VA’s approach to CLCs and its goal of modeling its oversight on CMS’s. VA officials told us that although they have a goal of performing this monitoring on a quarterly basis, they could not recall when VA last observed the SVH contractor’s inspections. When asked, VA officials did not provide specific reasons why they had not performed the observational assessments; in prior discussions, these officials noted that VA’s oversight of SVHs is less involved than its oversight of CLCs because VA does not exercise any supervision or control over the administration, personnel, maintenance, or operation of any state home. However, VA pays for veterans to receive care in SVHs, and states that oversee these homes may or may not conduct their own oversight.
Furthermore, as CMS conducts oversight of only those SVHs that receive Medicare or Medicaid payments (about two-thirds of all SVHs), for some SVHs, VA is the only federal agency with oversight over the quality of those homes’ care. For example, VA is the only entity that conducts regular inspections of SVHs in Missouri and New Hampshire.

VA is missing another opportunity to enhance its oversight of SVHs by not requiring the SVH contractor to identify all failures to meet quality standards as deficiencies during its inspections. While CMS requires its inspectors to cite all deficiencies, VA directed its contractor to cite low-level deficiencies—deficiencies considered by the contractor to pose no actual harm but with potential for minimal harm—as “recommendations” rather than deficiencies. For example, during one SVH inspection, the contractor recommended that “to ensure nutritional adequacy, the facility should follow the menus, which are planned in advance.” VA officials told us that unlike deficiencies, they do not track or monitor the nature of the recommendations or whether the recommendations have been implemented. In contrast, state survey agencies under contract with CMS are required to cite all failures to meet quality standards as deficiencies.

In addition to not citing recommendations as deficiencies, according to the VA contractor’s 2016-2017 annual summary report, SVHs can fix issues identified by the SVH contractor while the inspectors are still onsite to avoid being cited on the inspection. As a result, these issues are also not documented as deficiencies. Officials at four of the six SVHs we interviewed specifically reported being able to make on-site corrections to avoid being cited for deficiencies—for instance, officials at one SVH told us that the SVH was able to relocate handwashing stations before the end of the inspection in order to avoid being cited for a deficiency by the VA inspectors.
VA does not require its SVH contractor to identify all failures to meet quality standards as deficiencies in its inspections; VA officials said this practice reflects policy and a negotiated position with SVHs. VA officials reiterated that because SVHs are owned and operated by the states, VA is less involved with their oversight than with CLCs. Our review of the VA contractor’s annual summary report showed that for almost 50 percent of SVHs inspected between August 2017 and July 2018 (the contract year completed in 2018), zero deficiencies were identified through inspections. VA officials cited VA’s “collegial approach” and willingness to allow onsite corrections as factors contributing to the decline in cited deficiencies in recent years. Furthermore, while VA and CMS subject SVHs to slightly different standards, our review of a sample of five SVHs’ inspection reports shows that VA identified a total of seven deficiencies and made four recommendations for these homes. In contrast, CMS identified a total of 33 deficiencies for these homes for approximately the same time period.

By not performing observational assessments of SVH inspections, VA does not know whether, or to what extent, VA’s contractor needs to improve its ability to identify SVHs’ compliance with quality standards, which increases the possibility that quality concerns in some SVHs could go overlooked, potentially placing veterans at risk. Further, by not requiring the contractor to cite all failures to meet quality standards as deficiencies on its inspections, VA does not have complete information on deficiencies identified at SVHs and therefore cannot track this information to help identify trends in quality across these homes. This practice is also inconsistent with federal internal control standards that state that management should establish and operate monitoring activities to monitor the internal control system and evaluate the results.
Selected VAMCs Completed Required Annual Reviews, but Conducted Optional CNH Onsite Reviews without the Benefit of Guidance

We found that in 2017 the six selected VAMCs annually reviewed CMS data on the quality of all the CNHs with which they contract, which is a VA requirement. Specifically, the VAMCs reviewed the CMS data to determine whether the CNHs met VA criteria for contract renewal. (See table 4.) The top three criteria that CNHs failed to meet in the annual reviews were 1) whether total registered nursing staff ratios per resident day fell below the state average; 2) whether total nursing staff ratios per resident day fell below the state average; and 3) whether six or more of selected CMS quality measures fell above the state average.

In addition, we found that all six of our selected VAMCs conducted their own onsite CNH reviews—which, according to VA policy, VAMC officials have the option of performing if they have quality concerns about CNHs with which they contract or are determining whether to seek a waiver. The CNH onsite reviews conducted by these VAMCs focused on many of the categories for quality standards, such as food and nutrition services, quality of care, quality of life, and physical environment. While conducting onsite reviews of CNHs is optional under VA policy, officials at many of the VAMCs we interviewed told us that these onsite reviews—which the VAMCs we interviewed referred to as CNH inspections—are valuable in conducting CNH oversight, as they provide important information about a home’s quality that VAMC staff would not have known otherwise. For example, officials from one VAMC shared with us results from an onsite review in which they found moldy and expired food in a CNH’s kitchen—food storage had been identified as an issue during a previous state survey for CMS and was purported to have been corrected 5 months prior.
Furthermore, some VAMC staff said that they would suspend placement of veterans in certain CNHs and may not renew a CNH contract based on their findings from these onsite reviews. However, VA could strengthen its support for the optional onsite reviews by providing guidance to VAMC staff conducting these reviews. Officials at some VAMCs expressed concerns that VA did not provide the guidance they needed to conduct the optional onsite reviews, and that they would like to have more information from VA’s central office. As one VAMC official said, “without training or guidance from VA, it is difficult for VAMC staff, especially new staff, to know how to conduct these inspections.” VAMC officials at the six selected VAMCs told us that in the absence of guidance from VA, they had each independently developed their own tools and processes. Furthermore, officials at these VAMCs had differing understandings of the steps they can take if they identify quality concerns during onsite reviews. For example, staff at some VAMCs required CNHs to write corrective action plans and monitored the CNHs’ implementation until the deficiencies were addressed; in contrast, staff at other VAMCs did not monitor implementation, because they did not think they had the authority to hold CNHs accountable to correct deficiencies they identified.

VA central office officials who oversee the CNH program told us that they do not provide training or guidance because CMS and the states, not VA, are responsible for regulating the quality of care in these nursing homes. However, in the absence of guidance from VA central office on the optional CNH onsite reviews—guidance that could be developed, for example, by collecting and disseminating best practices—VA has missed an opportunity to leverage efficiencies across VA’s network of VAMCs and empower VAMC officials with knowledge about the steps they can take to hold CNHs accountable for correcting problems.
Furthermore, this lack of guidance is inconsistent with federal internal control standards that state that management should design control activities to achieve its objectives—in this case, to ensure that VAMCs contract with CNHs that provide high quality care.

VA Publicly Provides Information on Care Quality for Only Two of Its Three Nursing Home Settings

As part of its efforts to help veterans find placement into a nursing home, VA publicly provides information on care quality for CLCs and CNHs through its Access to Care website, but VA does not provide information on the quality of SVHs. Specifically, the website allows users to enter a location—such as a city and a surrounding distance—to produce a map with a list of CLCs and VA-contracted CNHs in their preferred area (see fig. 3). For each of the homes on the list, VA reports quality information it collects through its own inspections for CLCs and information CMS collects for CNHs.

As previously noted, veterans and their families are responsible for making decisions about the nursing home care that will best meet their needs. Their decision-making can be aided by discussions with VAMC staff and information provided on VA’s Access to Care website, among other sources. The ability for veterans and their families to access information on nursing home quality through the Access to Care website—such as the currently available quality information on CLCs and CNHs—is particularly critical, as VAMC officials do not always discuss quality information in their consultations with veterans and their families. As figure 3 shows, VA’s Access to Care website does not provide any information to the public about the quality of the 148 SVHs that provide nursing home care.
Specifically, VA does not currently provide any information on SVHs on its Access to Care website—including information on the location of SVHs or CMS information on care quality that VA could easily provide for SVHs using information obtained from CMS’s website, Nursing Home Compare, as VA does now for CNHs.

VA has explored activities that could provide veterans and their families with information about SVHs. For example, as stated in VA’s SVH strategic plan for fiscal years 2017 to 2022, VA considered an initiative to create a five-star program for SVHs. Additionally, VA has collaborated with SVHs to produce some data on quality measures. For example, during the course of this review, VA provided to us a quality measures report for SVHs by state that it developed in partnership with the National Association of State Veterans Homes. VA is able to develop this information because it has access to information on SVH quality—in fact, as the only entity that conducts regular inspections, it is the only source for quality information on all SVHs. Specifically, VA collects VA-prescribed inspection, quality measure, and staffing data as part of its survey process that could be used to develop and distribute quality information for each home. Some of this information is available to the public at the local level, but it is not currently provided by VA. For example, SVHs are required to make the results of the most recent VA inspection of the home available for examination in a place accessible to residents.

According to VA officials, there is no requirement to provide information on SVH quality on the Access to Care website, as SVHs are owned and operated by the states. However, the website is an important tool for veterans and their families to help inform their decision making on nursing home placement. VA has stated goals to provide useful and understandable information to veterans.
The VA website could be the only readily accessible source of quality care information publicly available to veterans and their families for certain SVHs. As the SVH strategic plan indicates, VA sees the value in developing SVH ratings that could be used to provide quality information to veterans and their families. Furthermore, officials from three of the SVHs we spoke with told us that they supported having quality information available about their homes that would allow comparisons between SVHs or between SVHs and other homes, such as information contained in Nursing Home Compare. Without information about SVHs on VA’s Access to Care website, veterans and their families are limited in their ability to effectively evaluate all of their options when selecting a nursing home.

Our prior work has shown that effective transparency tools—such as websites that allow consumers to compare the quality of different providers—provide highly relevant information to consumers. However, the limited information VA provides on its Access to Care website is inconsistent with VA’s articulated commitment to veteran-centric care, a component of which is ensuring that veterans are well informed about their options for care. The website’s limited information is also inconsistent with federal internal control standards, which state that management should externally communicate the necessary quality information to achieve an entity’s objective—in this case, providing important information to veterans on the quality of nursing homes. Action to inform veterans about the quality of SVHs would better enable veterans and their families to compare the quality of their nursing home care options across all three settings.

Conclusions

In the coming years, VA projects an increase in the number of veterans receiving nursing home care. This makes it particularly important that VA ensure veterans receive quality care, regardless of the setting—CLC, SVH, or CNH—in which this care is provided.
Inspections are a key oversight tool used to ensure veterans receive quality care. VA relies primarily on annual inspections to oversee the quality of nursing home care at CLCs and SVHs, and our review shows that VA’s two contractors conducted these required inspections during the period we reviewed. However, our review also shows that VA has opportunities to enhance this oversight. First, VA has not regularly monitored the contractors’ performance conducting these inspections by conducting observational assessments as intended and therefore does not know whether the contractors need to improve their ability to determine the homes’ compliance with quality standards. Second, VA does not require inspectors of SVHs to identify all failures to meet quality standards as deficiencies, which limits VA’s ability to track all deficiencies identified at SVHs and identify trends in quality across homes. Third, VA has not provided guidance for VAMC staff for instances in which they may conduct onsite reviews of CNHs directly. As a result, VA has missed an opportunity to leverage efficiencies across VA’s network of VAMCs and empower VAMC officials with knowledge about the steps they can take to hold CNHs accountable for correcting problems. By making enhancements to its oversight of inspections across all three settings, VA would have greater assurance that the inspections are effective in ensuring the quality of care within each setting. VA also seeks to ensure that each veteran chooses a nursing home placement that best meets his or her preferences and needs. To enable veterans to evaluate their care options, VA uses its Access to Care website. However, this website provides no information about SVHs, which is where most veterans are currently receiving VA-funded nursing home care. Since VA is the only entity that inspects and collects quality information on all SVHs, VA possesses quality information that is not available elsewhere. 
However, because VA's website lacks information on the quality of SVHs, veterans and their families are limited in their ability to compare the quality of the available nursing home care options. Recommendations for Executive Action We are making the following four recommendations to the Veterans Health Administration: The Under Secretary of Health should develop a strategy to regularly monitor the contractors' performance in conducting CLC and SVH inspections, ensure that performance results are documented, and ensure that any needed corrective actions are taken. (Recommendation 1) The Under Secretary of Health should require that all failures to meet quality standards are cited as deficiencies on SVH inspections. (Recommendation 2) The Under Secretary of Health should develop guidance for VAMC staff conducting optional onsite CNH reviews. (Recommendation 3) The Under Secretary of Health should provide information on the quality of all SVHs that is comparable to the information provided on the other nursing home settings on its Access to Care website. (Recommendation 4) Agency Comments VA provided written comments on a draft of this report, which are reprinted in appendix III. In its written comments, VA generally concurred with all four recommendations. With respect to our recommendation on regularly monitoring contractor performance in conducting CLC and SVH inspections, VA concurred and stated that it would develop a procedure for observational inspections. VA also concurred with our recommendation to require that all failures to meet quality standards be cited as deficiencies on SVH inspections, stating that "any regulation assessed to be incompliant at the time of the survey will be rated as either provisional or not met, which requires a corrective action plan from the SVH." VA concurred in principle with our other two recommendations and described actions it plans to take to address them. 
Specifically, regarding our recommendation to develop guidance for VAMC staff conducting optional CNH onsite reviews, VA stated that it will issue a memo to clarify and provide guidance related to CNHs. VA also noted that, although we found that the VAMC staff we interviewed discussed and considered these onsite reviews "inspections," VA does not. Based on these technical comments, we adjusted our terminology. Further, we reiterate in the report the value that VAMC officials placed on these reviews for assessing the quality of care veterans receive. Accordingly, we believe that VA has the opportunity, when developing the memo, to clarify and provide guidance related to these optional CNH onsite reviews. With respect to our recommendation that VA provide information on the quality of all SVHs that is comparable to the information provided on the other nursing home settings, VA stated that it plans to evaluate the feasibility of providing SVH data. VA noted challenges with developing its own five-star ratings for SVHs, since VA does not have all the data for SVHs that would be required. We acknowledge that developing comparable information will take time and have adjusted some language in our report to reflect that VA had considered developing an SVH five-star program. VA also stated that we inaccurately portrayed VA's oversight authority because each state oversees its own SVH and VA does not have the authority to regulate the business or clinical practices of the SVH. Both our draft and final reports stated that "VA does not exercise supervision or control over the administration, personnel, maintenance, or operation of any state home." However, as stated in the report, federal law prohibits payments to SVHs that do not meet standards that VA prescribes and authorizes VA to inspect any SVH at such times as VA deems necessary to ensure that such facility meets those standards. 
Further, we reiterate that, as the only entity to conduct inspections for all SVHs, VA uniquely possesses information that is not available elsewhere. Accordingly, we believe that VA has the opportunity to help veterans and their families by providing quality information for SVHs as it does for the other nursing home settings. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of the Department of Veterans Affairs, and other interested parties. In addition, the report will be available at no charge on GAO's website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or silass@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. Appendix I: Types of Deficiencies Identified from Community Living Center (CLC) Inspections, 2017 to 2018 [Table: number (percent) of CLC deficiencies, by deficiency category] Appendix II: Types of Deficiencies Identified from State Veterans Home (SVH) Inspections, 2017 to 2018 [Table: number (percent) of SVH deficiencies, by deficiency category; 192 deficiencies in total] The total number of deficiencies may include deficiencies from one SVH that VA does not consider a skilled nursing facility. 
Appendix III: Comments from the Department of Veterans Affairs Appendix IV: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Karin Wallestad (Assistant Director), Jim Melton (Analyst-in-Charge), Kye Briesath, Krister Friday, and Mandy Pusey made key contributions to this report. Also contributing were Vikki Porter and Jennifer Whitworth.
Why GAO Did This Study VA provides nursing home care for veterans whose health needs are extensive enough to require skilled nursing and personal care in an institutional setting. VA provides or pays for the cost of nursing home care for eligible veterans. GAO was asked to examine VA nursing home care. In this report, GAO 1) describes utilization of and expenditures for VA-funded nursing home care, 2) examines VA's use of inspections to assess the quality of nursing home care and its oversight of the process, and 3) examines the information VA publicly provides through its website on the quality of nursing home care. To perform this work, GAO reviewed VA policies and information on inspections and interviewed VA officials. GAO also selected six VA medical centers based on factors such as their participation with CLCs, SVHs, and CNHs and location. For each, GAO interviewed medical center officials and officials from corresponding VA regional offices, CLCs, SVHs, and CNHs. What GAO Found According to the Department of Veterans Affairs (VA), veterans' use of nursing home care increased 3 percent, from an average daily census of 37,687 to 38,880 veterans, from fiscal years 2012 to 2017. VA projects that use will increase 16 percent from fiscal years 2017 to 2022 with the aging of Vietnam War veterans. VA's nursing home expenditures increased 17 percent (8 percent adjusted for inflation), from $4.9 billion to $5.7 billion, from fiscal years 2012 to 2017. During the contract year completed in 2018, VA contractors conducted required inspections of community living centers (CLC) (VA-owned and -operated) and state veterans homes (SVH) (state-owned and -operated) to ensure they complied with quality standards. Selected VA medical centers also completed required annual reviews of Centers for Medicare & Medicaid Services data and conducted optional onsite reviews for community nursing homes (CNH), with which VA contracts. However, VA has opportunities to enhance its oversight. 
For example, VA did not conduct the quarterly monitoring of contractor performance as stipulated in its contract for CLC inspections from April 2017 to April 2018. VA officials also said they intended to regularly observe contractors conducting inspections to ensure they effectively determine compliance with standards, but have not done so due to competing demands. Officials also said they had performed these observational assessments in the past but were unable to provide documentation that they had occurred. Conducting and documenting the quarterly observational assessments would allow VA to identify areas for improvement and to take any needed corrective actions. VA's Access to Care website provides publicly available information about the quality of CLCs and CNHs based on inspections. Veterans and their families can use the website to help inform their decisions on nursing home placement. However, the website does not include any SVH information. Although VA has access to SVH quality information, according to VA officials, it is not required to publicly report that information. For some SVHs, VA is the only source for quality care information. Some of the quality information is available locally, but the VA website is an important tool for veterans and their families. Providing SVH information on its website could enhance veterans' and their families' ability to evaluate all nursing home options. What GAO Recommends GAO is making four recommendations, including recommendations for VA to enhance its oversight of the quality of care provided to veterans in CLCs, SVHs, and CNHs and include on its website information on the quality of care for SVHs that is comparable to what it provides on CLCs and CNHs. VA concurred with two recommendations and concurred in principle with two recommendations.
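The growth figures cited in this summary can be reproduced with simple arithmetic. The sketch below is illustrative only; the report's percentages are computed from unrounded underlying data, so small rounding differences are expected:

```python
# Worked check of the growth figures cited above. All census and dollar
# figures come from the report; percentages here are approximate.

census_2012, census_2017 = 37_687, 38_880          # average daily census
census_growth = census_2017 / census_2012 - 1      # ~3 percent

spend_2012, spend_2017 = 4.9, 5.7                  # expenditures, $ billions
nominal_growth = spend_2017 / spend_2012 - 1       # ~16-17 percent

# A 17 percent nominal increase reported as an 8 percent real increase
# implies cumulative inflation of roughly (1.17 / 1.08) - 1 over the period:
implied_inflation = 1.17 / 1.08 - 1                # ~8 percent
```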
gao_GAO-20-298
Background U.S. Airport System The United States has more than 19,000 airports, which vary substantially in size and the type of aviation services they support. Of these, roughly 3,300 airports are designated by FAA as part of the national airport system and are therefore eligible for federal assistance for airport capital projects. The national airport system consists of two primary types of airports: "commercial service" airports, which are publicly owned, have scheduled service, and board 2,500 or more passengers per year; and "general aviation" airports, which have no scheduled service and board fewer than 2,500 passengers. Federal law divides commercial service airports into various categories of airports, based on the number of passenger boardings, ranging from large hub airports to commercial service non-primary airports (see fig. 1). Consistent with our prior work, we have grouped airports into two broader categories: larger airports, which includes large and medium hubs, and smaller airports, which includes small hubs, non-hubs (also referred to as "non-hub primary"), and non-primary commercial service airports as well as reliever airports, general aviation airports, and new airports. The majority of passenger traffic is at larger airports, which accounted for 88 percent of all commercial airport enplanements in 2017. From 2013 to 2017, enplanements increased at airports of all hub sizes. Specifically, commercial airport enplanements at larger and smaller airports increased by 16 percent and 15 percent, respectively, during this time period. Federal Grants National system airports are eligible to receive federal funding from AIP grants for infrastructure development. AIP funds are first authorized in FAA reauthorization acts, and Congress then appropriates funds for AIP grants from the Airport and Airway Trust Fund, which is supported by a variety of aviation-related taxes, such as taxes on tickets, cargo, general aviation gasoline, and jet fuel. 
While AIP grants are an important source for airports' infrastructure funding, the amount of funding authorized for the AIP grant program has not changed since 2012. In 2018, Congress passed the FAA Reauthorization Act of 2018, which authorized AIP grant levels at $3.35 billion annually through fiscal year 2023 and authorized additional amounts of supplemental discretionary funding each year from 2019 through 2023, starting at $1.02 billion and increasing each year thereafter. In addition, the Consolidated Appropriations Act of 2018 appropriated $1 billion in supplemental annual funding from the general fund for the AIP discretionary grant program. Subsequently, in February 2019, the Consolidated Appropriations Act of 2019 provided $500 million from the general fund to the AIP discretionary grant program. The distribution of federal AIP grants is complex. It is based on a combination of formula funds—also referred to as entitlement funds—that are available to national system airports, and discretionary funds that FAA awards for selected eligible projects. Entitlement funds are apportioned by formula to airports and may generally be used for any eligible airport improvement or planning project. Discretionary funds are approved by FAA based on FAA selection criteria and a priority system, which FAA uses to rank projects based on the extent to which they reflect FAA's nationally identified priorities. AIP grants must be used for eligible and justified projects, which are planned and prioritized by airports, included in their capital improvement plans, and reviewed and approved by FAA staff and the Secretary of Transportation. Generally, most types of airfield improvements—such as runways, lighting, navigational aids, and land acquisition—are eligible for AIP funding. 
AIP-eligible projects for airport areas serving travelers and the general public—called “landside development”—include entrance roadways, pedestrian walkways and movers, and common space within terminal buildings, such as waiting areas. See figures 2 and 3 for more information about the types of projects eligible for AIP funding. For all AIP-funded projects, the airport must provide a share of matching funds. The federal share is from 75 to 95 percent depending on the size of the airport or type of project. Passenger Charges Revenue from PFCs is another means of support for airport infrastructure projects. PFCs are federally authorized fees which were established in 1990 to help pay for infrastructure at commercial service airports. Although PFCs are local funds subject to the airport’s control, FAA oversees the PFC program and approves applications by airports to collect PFC revenues. PFCs are currently capped at $4.50 per flight segment with a maximum of two PFCs charged on a one-way trip or four PFCs on a round trip, for a maximum of $18 total. On behalf of the airports, airlines collect the PFC at the time of the ticket purchase and remit the PFC, minus an administrative fee, to the airport. To meet future planned infrastructure costs, airports have sought an increase in the cap on PFCs. However, airlines oppose a PFC increase because they believe airports already receive sufficient PFC revenues and that higher ticket prices could reduce passenger demand and airline revenues. We have previously reported that increasing the PFC cap would significantly increase PFC collections available to airports under three scenarios GAO modeled but could also marginally slow passenger growth and growth in revenues to the Airport and Airway Trust Fund (AATF). Airports have more flexibility in using PFCs to fund infrastructure projects as compared to AIP funding. 
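The PFC collection rules just described amount to a simple computation. The sketch below is illustrative only; the function name and itinerary representation are our own, not part of any FAA or airline system:

```python
# Illustrative computation of the PFC rules described above: PFCs are
# capped at $4.50 per flight segment, with at most two PFCs charged on
# a one-way trip and four on a round trip ($18 maximum).
PFC_CAP = 4.50

def pfc_charged(segments: int, round_trip: bool) -> float:
    """Total PFC charged for an itinerary under the statutory caps."""
    max_chargeable = 4 if round_trip else 2
    return PFC_CAP * min(segments, max_chargeable)

# A one-way trip with three segments is charged for only two of them:
assert pfc_charged(3, round_trip=False) == 9.00
# A round trip with five segments hits the $18 maximum:
assert pfc_charged(5, round_trip=True) == 18.00
```

In practice, as noted above, airlines collect this amount at the time of ticket purchase and remit it, minus an administrative fee, to the airport.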
Airport infrastructure projects eligible for PFC funding must meet one or more of the following: preserve or enhance safety, security, or capacity; reduce noise or mitigate noise impacts; or increase air carrier competition. Airports are able to fund projects with PFC revenues that might not be eligible for AIP funding, such as passenger terminal projects and development at gates, airline ticketing areas, and passenger check-in facilities at hub airports. In addition to being applied to FAA-approved eligible projects, PFCs can be used as a match for AIP grants or to finance the debt on approved projects. Airports' Costs for Planned Infrastructure Projects FAA and ACI-NA each produce reports summarizing 5-year estimates of U.S. airports' infrastructure project costs. More specifically, FAA is required to publish a 5-year estimate of AIP-eligible development every 2 years. FAA provides this information in its NPIAS report. FAA relies on airports, through their planning processes, to identify individual AIP-eligible projects for funding consideration and inclusion in the NPIAS. ACI-NA also collects data on all proposed capital development projects at U.S. airports and every 2 years publishes a report of U.S. airports' 5-year infrastructure cost estimates. Airports Received an Average of about $15 Billion Annually for Infrastructure Development from a Variety of Sources, Including Grants and Revenue From fiscal years 2013 through 2017, national system airports received an annual average of about $15 billion in funding from a variety of sources for infrastructure development projects, including: federal AIP grants (about $3.2 billion annually); airport revenue from passenger charges (about $3.1 billion annually) and airport-generated revenue (about $7.7 billion annually); and capital contributions (about $715 million annually). These figures, however, do not represent the full amount of funding that is available to airports for infrastructure development. 
For example, some airports also received funding from state grants and bond proceeds through debt financing to fund airport infrastructure investments. In addition, the proportion of funding that larger and smaller airports received from these sources varies. Federal AIP and State Grant Funding Has Remained Relatively Constant From fiscal years 2013 through 2017, the total amount of AIP grants that national system airports received has generally remained constant. As shown in figure 4 below, the amount of AIP grant funding that airports received ranged from $3.1 billion to $3.3 billion annually for fiscal years 2013 through 2017. Overall, airports received an average of $3.2 billion annually in AIP grants. The total amount of AIP grant funding that FAA allocates to airports may vary slightly year to year for many reasons. For example, according to FAA, each year a small amount of AIP funding is returned from prior-year grants, and FAA is permitted to re-obligate those funds on either existing or new grants. Collectively, smaller airports received more AIP grant funding compared to larger airports during this time period. As shown in figure 4, from fiscal years 2013 through 2017, smaller airports received the largest share of AIP grant funding, approximately 75 percent (an annual average of $2.4 billion), compared to 25 percent received by larger airports (an annual average of $812 million). Larger airports are generally able to rely on other sources, such as airport-generated revenue and PFCs, due to higher enplanements compared to smaller airports. In addition, the amount of AIP grant funding that smaller airports received increased by about 10 percent between fiscal years 2013 and 2017, while AIP funding for larger airports decreased by 3 percent over the same period. However, smaller airports receive less funding per AIP grant compared to larger airports. 
For example, smaller airports received an average of $897,000 per grant, while larger airports received an average of $5 million per AIP grant. Some airports also received state funding, primarily in the form of grants used as matching funds for federal AIP grants. Data for fiscal years 2013 through 2017 on states' grant funding are not available. However, in 2015, we conducted a survey of airports, in collaboration with NASAO, for fiscal years 2009 through 2013, and reported that states provided an annual average of $477 million to national system airports. According to NASAO officials we interviewed for our current work, states' grant-funding levels have remained unchanged. Airport Revenue—the Largest Source of Funding for Larger Airports—Has Gradually Increased From fiscal years 2013 through 2017, airports collected revenue from a variety of sources, including PFC charges and airport-generated revenue (both aeronautical and non-aeronautical), which have both increased during our 5-year time period. Some airports also received funding from capital contributions, but that amount has decreased from fiscal year 2013 through 2017. Airport revenue is the largest source of funding for larger airports. Specifically, larger airports generated an annual average of $10.4 billion in airport revenue (or 90 percent of all airport revenue) during our 5-year time period. Smaller airports generated less airport revenue, with an annual average of $1.2 billion (or 10 percent of all airport revenue), compared to larger airports. Larger airports' ability to generate more airport revenue reflects the fact that PFC collections and airport-generated revenue are driven by the higher levels of passenger enplanements and airline activity associated with current economic conditions. According to FAA officials, while total airport revenue has increased over this time frame, it does not necessarily mean that airports have more revenue available for new capital expenditures. 
For example, airport revenue is also used to pay for existing debt service and operating costs, which according to FAA officials, has also increased during this time period. Passenger Charges Overall, from fiscal years 2013 through 2017, U.S. airports collected an annual average of $3.1 billion in PFC revenue. As shown in figure 5, during this period, the annual average for PFC collections for all airports increased by 9 percent from $3 billion to $3.3 billion. Because PFCs are generated by the number of enplaned passengers, this increase was mostly driven by a 16 percent increase in passenger enplanements during this period for both smaller and larger airports. As shown in figure 5, larger airports collected most (89 percent) of the PFC revenues in fiscal years 2013 through 2017. In addition, although both larger airports and smaller airports experienced an increase in passenger enplanements in fiscal years 2013 through 2017, larger airports experienced a 10 percent increase in PFC revenue while smaller airports experienced an overall decrease in PFC revenue during this period of about 3 percent. According to FAA officials, smaller airports may have experienced an overall decrease in PFC revenues because airports’ PFC collections may cease when they have fully collected the approved amount for a project. According to FAA, this cessation is particularly true for smaller airports that do not have multiple projects for which PFC collections have been approved for a long period of time. In addition, if an airport has approved collections but one or more airlines make significant reductions in activity levels, this factor can also slow the rate of collections at airports. Larger airports hold a larger market share of flights, representing 88 percent of enplanements. Ratings agency representatives said that larger airports rely more on PFCs and bonding to fund infrastructure projects. Airport-Generated Revenue From fiscal years 2013 through 2017, U.S. 
airports generated an annual average of $7.7 billion in airport-generated revenue. During this period, airport-generated revenue increased 18 percent, from $7.1 billion to $8.4 billion for all airports. Overall, both larger and smaller airports generated more income over this time period, with larger airports generating substantially more revenue compared to smaller airports. Specifically, from fiscal years 2013 through 2017, larger airports generated an annual average of $7.1 billion in revenue, and smaller airports generated an annual average of $567 million in revenue. Airport-generated revenue consists of both "airside" aeronautical revenues, derived from the operation and landing of aircraft, passengers, or freight, and "landside" non-aeronautical revenues, derived from terminal concessions and parking fees. Of the $103 billion in airport-generated revenue over our 5-year time period, 54 percent came from aeronautical revenues and 46 percent came from non-aeronautical revenues (see fig. 6). Commercial service airline rates and charges—which include passenger airlines' landing fees and passenger arrival fees, rents, and utilities—made up 75 percent of the total $55.9 billion in aeronautical revenue. The remainder came from a variety of other fees and taxes paid by airlines, general aviation, the military, and other aeronautical sources. Of the non-aeronautical revenues, parking and ground transportation accounted for the greatest portion (41 percent), followed by rental car operations revenue (19 percent). Aeronautical revenues increased by 11 percent and non-aeronautical revenues increased by 16 percent over the time period. Capital Contributions Capital contributions for airport infrastructure projects make up a small amount of funding in comparison to other sources, such as airport-generated revenue and AIP funding. 
These contributions—made on an individual project basis—may be provided by an airport’s sponsor (often a state or municipality) or by other sources such as an airline. According to FAA data on commercial airports’ annual financial reports for fiscal years 2013 through 2017, commercial airports received an annual average of $715 million in capital contributions. Of this amount, $471 million, or 66 percent, went to larger airports, and $244 million, or 34 percent, went to smaller airports. The amount of capital contributions varies by year and by hub size. According to FAA officials, the sources of capital contributions funding (i.e., airport sponsor, state, air carriers, or other airport users) vary depending on the type of project and funds available. Some Airports Also Received Bond Proceeds through Debt Financing for Airport Infrastructure Investments Airports can also obtain financing for airport infrastructure projects by issuing bonds. Airport bonds entail leveraging future funding to pay for projects. This financing mechanism enables airport authorities to borrow money up front to finance infrastructure projects; this money can then be paid back with interest over a longer time period. U.S. airports may qualify for tax-exempt bonds to support airport projects for federal tax purposes because the airports are owned by states, counties, cities, or public authorities. The tax-exempt status enables airports to issue bonds at lower interest rates than taxable bonds, thus reducing a project’s financing costs. FAA officials said that because airports use some PFCs and airport-generated revenue to pay off debt service, not all revenue generated from these two sources is available for additional infrastructure investment. FAA collects data in its financial reporting database of an airport’s total indebtedness. Based on our analysis of this data, from fiscal years 2013 through 2017, airports had averaged $84.6 billion in total bond debt per year. 
The total indebtedness measure provides an overall aggregate of the level of long-term bond debt held by airports for the year. FAA's data do not differentiate indebtedness for each type of bond, nor do its data differentiate between existing, new, or refinanced bonds. As a result, we were not able to analyze how much airports obtained on average for new projects by issuing bonds from fiscal years 2013 through 2017. In addition, we were not able to determine whether U.S. airports borrowed increasing amounts of new bond proceeds from fiscal years 2013 through 2017 to meet infrastructure needs. Moreover, FAA does not collect data on the time frame in which airports anticipate paying back bonds, as FAA officials said that airports have the latitude to determine their own debt-payment schedules. During fiscal years 2013 through 2017, larger airports received the vast majority of bond proceeds, representing 95 percent of the total (see fig. 7). This amount includes debt from all long-term bonds. We previously reported that bond financing has traditionally been an option more commonly exercised by larger rather than smaller airports because they are more likely to have a greater and more certain revenue stream to support debt repayment. We have also reported that when smaller airports issue bonds, they make greater use of general obligation bonds that are backed by tax revenues of the airport sponsor, which is often a state or municipal government. FAA officials added that larger airports tend to issue airport revenue bonds, which are backed solely by airport revenue, while some smaller airports may be able to benefit from bond proceeds issued by the broader county or municipal government and backed by that entity's taxing authority. 
Projected Planned Airport-Infrastructure Costs Have Increased to an Average of $22 Billion Annually and Include More Investments in Terminal Projects We Estimated Average Annual Costs of $22 Billion for Planned Airport-Infrastructure Investments for Fiscal Years 2019 through 2023 Based on our analysis, airports' planned infrastructure costs are projected to average $22 billion annually for fiscal years 2019 through 2023. To arrive at this estimate, we combined FAA's $7 billion estimate of AIP-eligible planned infrastructure costs and ACI-NA's $15 billion estimate of planned infrastructure costs for projects that are not eligible for AIP grants. Our $22 billion estimate would represent an increase of 19 percent from FAA's and ACI-NA's fiscal years 2017 through 2021 infrastructure cost estimates. This increase is largely driven by an increase in ACI-NA's estimate of AIP-ineligible planned projects. Specifically, ACI-NA's annual average of about $15 billion in planned AIP-ineligible costs reflects an increase of $3.3 billion or 28 percent when compared to the annual average estimate of AIP-ineligible projects from ACI-NA's fiscal year 2017–2021 estimates. Similarly, FAA's annual average of $7 billion in planned AIP-eligible costs reflects an increase of $289 million or 4 percent from FAA's fiscal year 2017–2021 estimates. A variety of factors may be contributing to the increase in FAA's and ACI-NA's cost estimates, factors that we will discuss later in the report. Overall, larger airports (large and medium hub) accounted for 75 percent of the $22 billion annual cost estimate and make up a greater percentage of the estimated increase in planned development costs when comparing the fiscal years 2017 through 2021 and fiscal years 2019 through 2023 estimates. 
For example:

Among planned AIP-eligible projects, estimated annual planned-development costs increased from $1.4 billion to $1.7 billion (an 18 percent increase) for large hub airports and from $641 million to $735 million (a 15 percent increase) for medium hub airports, according to FAA's cost estimates. By comparison, estimated planned development costs for small hub and non-hub airports decreased by 3 and 2 percent, respectively, over the same time period.

Among AIP-ineligible projects, ACI-NA estimates show that annual planned development costs increased more significantly for medium hub airports. Specifically, ACI-NA's report shows that annual planned development costs for AIP-ineligible projects increased by 22 percent for large hub airports, 71 percent for medium hub airports, and 29 percent for small hub airports.

ACI-NA representatives stated that the increase in medium hub airports' planned development (for both AIP-eligible and AIP-ineligible projects) is due to underinvestment at medium hub airports in prior years. Specifically, ACI-NA representatives stated that in response to the loss of air service immediately following the 2007–2009 recession, some medium hub airports scaled back their capital investments. ACI-NA representatives stated that as passenger traffic has recovered with economic growth, medium hub airports are now investing in previously deferred improvements. According to ACI-NA's report on airports' capital development needs for 2019–2023, medium hub airports–such as Austin-Bergstrom International Airport (Austin airport), Norman Y. Mineta San Jose International Airport, and Dallas Love Field Airport–are undertaking major infrastructure improvement programs. According to officials from Austin airport, the airport recently completed a 10-year plan for its capital development program, with an estimated cost of $3.5 billion, for a new terminal, concourse, airfield improvements, runway improvements, and improved landside access.
Austin airport officials stated that the airport is 20 years old and nearing the end of its lifecycle, and airport officials are trying to manage aggressive growth while rebuilding the airport. The sources of funding and types of infrastructure projects that smaller and larger airports have planned also differ. For example, smaller airports have more AIP-eligible planned costs compared to larger airports, according to FAA cost estimates. Specifically, smaller airports accounted for about $4.6 billion (or 66 percent) of AIP-eligible project costs for all airports but, according to ACI-NA cost estimates, only $878 million (6 percent) of AIP-ineligible projects. In addition, among AIP-eligible projects, while the top four types of infrastructure projects that larger and smaller airports have planned are similar (see table 1), estimated costs are more concentrated among the top two project-type categories for smaller airports. Specifically, reconstruction projects, which are projects to replace or rehabilitate airport facilities such as runways, and projects to meet FAA standards for airport design represented about 79 percent of smaller airports’ AIP-eligible estimated project costs. ACI-NA’s data do not break out AIP-ineligible project costs by project type. As a result, we were not able to determine what types of projects constitute the largest shares for AIP-ineligible project costs. However, ACI-NA does provide information about project type across all the projects in its cost estimate. According to ACI-NA’s representatives, the types of projects that are generally not funded with AIP grants that airports need to fund include landside projects, such as terminal projects; rental car and parking facility projects; concession redesign projects; and airport access projects. 
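As a rough consistency check, the arithmetic behind the combined cost estimate and the airport-size shares discussed above can be reproduced from the rounded dollar figures as reported (a minimal sketch; small discrepancies reflect rounding in the published numbers):

```python
# Rounded annual-average figures as reported, in billions of dollars.
faa_aip_eligible = 7.0      # FAA estimate of AIP-eligible costs, FY2019-2023
aci_aip_ineligible = 15.0   # ACI-NA estimate of AIP-ineligible costs, FY2019-2023

# Combined planned-infrastructure cost estimate (~$22B annually).
combined = faa_aip_eligible + aci_aip_ineligible

# Back out the prior (FY2017-2021) combined estimate from the reported
# increases: $289 million for FAA's estimate, $3.3 billion for ACI-NA's.
prior_combined = (faa_aip_eligible - 0.289) + (aci_aip_ineligible - 3.3)
pct_increase = (combined - prior_combined) / prior_combined * 100  # ~19 percent

# Smaller airports' shares of each estimate: $4.6B of AIP-eligible costs
# (~66 percent) and $878 million of AIP-ineligible costs (~6 percent).
smaller_eligible_share = 4.6 / faa_aip_eligible * 100
smaller_ineligible_share = 0.878 / aci_aip_ineligible * 100
```

The back-calculation implies a prior combined estimate of roughly $18.4 billion annually, consistent with the reported 19 percent increase.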
Total Planned Infrastructure-Project Costs Have Increased in Part due to Terminal Projects

The increase in planned infrastructure costs for fiscal years 2019 through 2023 can be attributed in part to an increase in planned terminal projects during this 5-year time period. Specifically, both FAA's and ACI-NA's cost estimates show an increase in planned terminal projects. For example, according to FAA's estimates of planned projects funded by AIP grants, terminal projects now represent the third largest share of total estimated costs from fiscal years 2019 through 2023 and experienced the greatest percentage increase over the previous 5-year period. As shown in table 2, overall annual average cost estimates for terminal projects increased by 51 percent between the two periods. The environmental category was the only other project type with a significant increase (about 38 percent), while estimated costs for many other types of projects decreased. According to FAA officials, the increase in environmental projects is due to increases in environment-related NPIAS costs (such as mitigation of development impacts and costs for environmental studies) at large and medium hub airports and additional noise mitigation at hub airports. Similarly, according to ACI-NA's analysis, for fiscal years 2019 through 2023, terminal projects represented 53 percent of the total infrastructure-development costs among both AIP-eligible and AIP-ineligible projects. Terminal projects included terminal building projects (37 percent) and projects to provide access to the terminal (16 percent). FAA and ACI-NA representatives stated that terminal projects can be more expensive than other types of projects because of the scale of these improvements.
For example, terminal projects may involve complex vertical construction, an array of special systems such as baggage and passenger screening systems, and integration of security and access control systems, all of which can contribute to the overall higher cost of these projects. In contrast, runway and airfield infrastructure generally rely on common design standards and standard construction methods, according to ACI-NA representatives. Additionally, officials from most (16 out of 19) of the airports that we spoke to stated that they are planning terminal improvement projects over the next 5 years. Officials from these airports told us they are focused on making terminal improvements because existing terminals are aging and in need of repairs and to accommodate an increase in passenger enplanements due in part to airlines using larger aircraft that hold more passengers. Examples of planned terminal projects at selected airports and factors contributing to these investments are below.

Large hub airport terminal project. Officials from a large hub airport that we spoke to stated that they have two ongoing major terminal projects. The first project will expand and renovate the airport's north terminal. The 468,000-square-foot facility will include a new upper-level mezzanine, seismic upgrades, and an upgraded baggage-handling system, among other improvements. According to airport officials, capacity constraints and the age of the terminal were factors for renovating the terminal. Phase 1 of the project began in February 2017 and was completed in mid-2019. As of July 2019, nine gates are operational. The second phase of construction is expected to be completed in mid-2021. The estimated cost of the project is $658 million. The airport is also developing a new international arrivals facility at its airport.
According to airport officials, this facility is intended to significantly enhance the international passenger experience and improve the arrival process for international passengers without adding new gates. Airport officials stated that the current facility is not able to accommodate the city's growing demand for international travel. The facility is estimated to cost about $968 million and is expected to open in the fall of 2020.

Medium hub airport terminal project. According to officials from a medium hub airport, growth in passenger traffic is driving the need for a new terminal at that airport. International traffic at the airport tripled between 2012 and 2017, with airlines adding three new service destinations to Europe. According to airport officials, the existing terminal will soon reach its capacity to handle international arrivals. The first phase of the terminal project was substantially completed in 2019 and cost about $350 million.

Small hub airport terminal project. Officials from a small hub airport stated that airlines have started replacing existing aircraft with larger aircraft, and this process has placed capacity constraints on their terminal. The terminal was built in 1948, and the passenger waiting area was built in the 1960s when airlines providing service to the airport were using aircraft with 100 seats. Now, however, airlines are using larger aircraft, which can accommodate up to 180 seats. Airport officials stated that they are beginning construction of a new terminal, which will expand passenger capacity at the airport. The overall estimated cost of the terminal project is $513 million, and the project is expected to be completed in 2028, pending additional funding. FAA officials and ACI-NA representatives agreed that the increased focus on terminal projects is due in part to airlines changing their business models and aircraft fleets and an increase in passenger traffic.
The officials stated that as part of the industry's fleet rationalization efforts, airlines are eliminating some smaller aircraft and replacing them with larger aircraft to increase passenger-seating capacity. FAA officials added that passenger growth at large and medium hub airports is also contributing to the increase of AIP-eligible terminal costs, as airports need to expand terminals to add capacity. According to FAA, terminal projects at large and medium hubs are generally funded through PFCs and other funding sources rather than through AIP funding. For its 2019–2023 NPIAS report, however, FAA officials said they asked airports to provide information about AIP-eligible projects regardless of whether they were planning to apply for AIP funding for the projects. According to FAA officials, this factor may also have contributed to the apparent increase in AIP-eligible terminal costs. According to FAA, another factor driving the increase in terminal costs is that seven airports have planned major terminal projects over the next 5 years. The costs of these projects are reflected in FAA's AIP-eligible cost estimate. In addition to an increased focus on terminal projects, FAA officials, ACI-NA representatives, and selected airports cited other factors that are contributing to an increase in infrastructure cost estimates, such as increased construction costs, an overall healthier economy, increased traffic, airline consolidation, and airlines' strategic shift to focus on hub operations. For example, according to Nashville International Airport officials, a growing economy has resulted in more competition for construction materials and skilled workers, competition that has increased construction costs in the Nashville area and has resulted in higher airport development costs. According to ACI-NA representatives, other larger cities such as Salt Lake City, Los Angeles, and Seattle have also reported cost escalation in their construction markets.
Selected Airports Cited Challenges Related to Funding Sources, AIP Eligibility Criteria, and Competing Airport and Airline Priorities

Selected Airports Stated That Insufficient Funding Is a Challenge and That They Are Taking Steps to Address These Challenges

Selected Airports Stated That Planned Infrastructure Costs Exceed Current Funding

Officials from most (18 out of 19) selected airports we interviewed stated that the funding and revenue available to them from existing funding sources—such as AIP grants and PFC revenues—may not be sufficient to cover the costs of future and planned infrastructure projects. For example, officials from 14 airports we spoke to stated that the amount of funding that they have received in the past and that they anticipate receiving in the future from AIP formula or discretionary grants will not be sufficient to cover the costs of their future planned AIP-eligible projects. Airports may use a variety of other funding sources to pay for AIP-eligible projects. As such, differences between available AIP funding and AIP-eligible cost estimates do not necessarily reflect a funding shortfall. In addition, the NPIAS estimates represent planned AIP-eligible project costs and do not reflect actual expenditures. Below are some examples of AIP-eligible projects that airport officials stated will be a challenge to complete without additional funding:

Airfield safety projects. Officials from a small hub airport stated that they have two major airfield-safety projects planned that are intended to align their airport's current runway and taxiway with FAA safety standards. According to airport officials, their airport has been on FAA's top-10 list of airports with the highest "incursions" for 4 consecutive years, and officials stated these airfield improvements would help them mitigate runway incursions at their airport.
According to airport officials, these projects are expected to cost about $230 million, which they stated is a significant cost for an airport of their size. Their primary sources of funding are AIP funding and PFC revenues; however, their current AIP formula funding and PFC revenues are not sufficient to cover the cost of the projects. Without additional funding, officials said that they will need to complete the project in phases, which could lead to a multi-year project taking 4 to 12 years to complete. Airport officials stated that a multi-year project of this length would significantly affect their airport operations and increase overall costs. They also stated that, ideally, it would be most efficient to execute the project in fewer phases to reduce costs and to benefit airport users, as construction may negatively affect airport operations.

Runway rehabilitation project. Similarly, officials from another small hub airport said their airport receives about $5 million annually in AIP formula funding, which they said is not sufficient to cover the costs of their planned runway pavement rehabilitation and reconstruction project. The total cost of the project is about $20 million. According to airport officials, if they are unable to find alternate sources of funding for the project, they will have to postpone the runway project, and such a postponement would have a significant effect on their airport operations.

Runway replacement project. Officials from a medium hub airport are planning to invest in a new runway project that is expected to cost about $350 million. The existing runway is nearing the end of its useful life and needs to be replaced. They anticipate receiving approximately $4.5 million annually in AIP formula funding and plan to apply for discretionary AIP funding as well. They stated that this airport's PFC revenues are currently obligated until 2032 and that, therefore, they are not able to use this funding source to pay for the runway.
According to airport officials, without these funding sources the airport will be required to use its existing bonding capacity to pay for this critical infrastructure, reducing the bonding capacity available for future critical infrastructure improvements. Officials from 14 airports also stated that revenue generated from PFCs is not sufficient to cover the costs of planned infrastructure. For example, officials from one large hub airport stated that they have been successful in keeping up with the pace of growth at their airport, but based on their forecasts, they anticipate that they would be unable to meet infrastructure demands without an increase in PFC funding. Officials from six airports stated that because the PFC cap has remained at $4.50 since 2000 and has not been adjusted for inflation, the value of the PFC has decreased. In 2015, we reported that an inflation-adjusted PFC cap would be $6.46. Representatives from eight airlines that we spoke to, however, disagreed that the PFC cap should be increased, citing increases in passenger traffic, increases in PFC revenues, and the availability of other adequate sources of funding. According to FAA officials, increases in passenger traffic and other changes have also increased the need for capital facility investments.

About Half of the Selected Airports We Spoke to Identified Challenges with Taking on Additional Debt for Infrastructure Investments

Officials from about half of the airports (nine out of 19) that we spoke to—including a mix of smaller and larger airports—stated that the revenue their airports generate from PFCs is already obligated toward current infrastructure projects, which they stated could affect their ability to use debt financing for future infrastructure projects.
An additional three airports we spoke to stated that they plan to use PFC revenues to finance planned infrastructure projects and that they anticipate that these revenues will be obligated over a long-term period of about 30 years, limiting their ability to use debt financing for other projects. FAA's financial data show that airports committed a significant share of their PFCs to debt service during fiscal years 2013 through 2017. Specifically, of the $16 billion in PFC revenues (or an annual average of $3.1 billion) collected in fiscal years 2013 through 2017, airports paid a total of $12 billion for debt service (or an annual average of $2.5 billion)—about 78 percent of total PFC revenues generated during this time period. The debt service includes payments on new bonds, existing bonds, and refinanced bonds, which, as previously noted, are collectively tracked in FAA's database. As shown in figure 8, over our 5-year time period, larger airports accounted for the vast majority (over 90 percent) of the PFCs dedicated to debt service. According to ACI-NA's report on airports' capital development needs for 2019–2023 and some selected airport officials, because airports have already committed a significant portion of their current and future PFCs to servicing debt on current or completed projects, airports will have less PFC funding available for future projects. According to the same ACI-NA report, the entire national airport system is carrying a combined debt of $91.6 billion from past projects and may be unable to pay for future needed projects unless the existing cap on PFCs is increased. Officials from three small hub airports stated that they are currently facing challenges obtaining financing for infrastructure projects, because they are already fully leveraged and have pledged their PFCs over the mid- to long-term.
For example, officials from a small hub airport said that they obtained $120 million in financing, which will be carried until 2040, to build a parking garage and concourse. They said that because the airport is at capacity for debt issuance, they cannot take on any new debt for additional infrastructure projects. FAA data show that as of August 2019, 117 airports (about 30 percent) have obligated their PFCs past 2030 and that 30 airports (about 8 percent) have obligated their PFCs past 2040. One airport has obligated its PFCs through 2070. While some airports we spoke to raised concerns about being able to use debt financing for future airport-infrastructure projects, representatives from two rating agencies that we spoke to stated that for the airports they rate, the bond market is currently favorable, allowing for easier and economical access to financing. Rating agency representatives stated that currently, the outlook for domestic airports is either stable or positive because airport passenger traffic growth has exceeded growth in the gross domestic product, and airport ratings have remained consistent. For example, according to one rating agency, since 2012, its airport ratings have remained consistent and the annual airport outlook in those years has been "stable" or "stable to positive." FAA officials added that while the perspectives of rating agencies, bond insurers, and underwriters are important, a favorable credit rating does not mean that an airport should make the decision to take on additional debt. Moreover, according to FAA officials, for airports that need airline approval to issue debt, a favorable credit rating may not be sufficient to persuade the airlines of the need for the additional investment.

Selected Airports Are Taking Steps to Address Funding Challenges

Officials from 13 airports we spoke to stated that they are taking several actions to address funding challenges.
These airport officials stated that they have deferred or delayed infrastructure investments, completed projects in phases in order to be able to fund projects in stages, or are looking for other ways to generate airport revenues from passenger services or leases. For example, officials from one airport we spoke to stated that their airport has developed a strategy of breaking up infrastructure projects into phases so as to utilize available FAA funding. According to these airport officials, this strategy lengthens the construction time and results in higher construction costs, but helps the airport align its project needs with available FAA funding. Another airport official we spoke with said that the airport is introducing a dynamic-pricing parking program to generate additional parking revenue and that the program is expected to bring in an additional 5 to 15 percent in parking revenue.

Several Airports Said Eligibility Criteria for AIP Grants Do Not Always Align with an Airport's Priorities

Officials from about half (11 out of 19) of our selected airports stated that AIP's funding eligibility criteria are too narrow and do not allow airports to fund the infrastructure projects that they currently need, such as terminal projects. FAA's AIP handbook provides guidance on the criteria used to determine which components of a project are eligible for AIP funding. AIP-eligible projects, outlined in statute, include airport planning, airport development, noise compatibility planning, and noise compatibility projects. Certain airport projects, such as revenue-producing parking facilities, hangars, revenue portions of terminals, off-airport roads, and off-airport transit facilities, are not eligible for AIP funding. Some terminal projects, however, are eligible for AIP funding, such as development of a terminal structure's shell and development of public use areas directly related to the movement of passengers and baggage in terminal facilities within an airport.
This eligibility includes public use spaces that passengers may need to occupy as part of their air travel or utility support space needed to make the public space operational, including mechanical and electrical rooms. Officials from four airports we spoke to stated that they have infrastructure projects planned that are eligible for AIP discretionary funding, but that due to FAA's criteria for AIP discretionary funding and FAA's process for prioritizing projects, it is difficult for airports to receive discretionary funding for these projects. According to FAA officials, the eligibility criteria for AIP projects funded through entitlement and discretionary funding are the same. Discretionary funding, however, has some additional restrictions. For example, large, medium, and small hub airports are not eligible to use discretionary funding for terminal building projects. General aviation airports, however, may use discretionary funding for some airport terminal projects. In addition, unlike entitlement funding, discretionary funding is not reimbursable, and airports cannot apply for discretionary funding for projects that have already begun construction. Moreover, unlike entitlement funding, discretionary funding does not go to all airports. Airports must compete for the limited amount of discretionary funding available each year based on FAA's AIP prioritization. According to FAA officials, while discretionary funding criteria do not change year to year, FAA may fund projects with discretionary funding one year, but a similar project may not receive discretionary funding in a different year due to the project mix that year. FAA officials also stated that in September 2019, FAA updated its Formulation of the NPIAS and Airports Capital Improvement Plan order, which lays out the criteria and prioritization process for selecting projects for discretionary funding.
According to FAA officials, projects with the highest priority include safety- and runway-related projects, such as runway signage or resolving complex geometry causing runway incursions. FAA officials stated that other projects have lower priority and ranking in the AIP discretionary-funding prioritization process. Below are examples from airport officials who stated they have certain projects planned that are eligible for AIP discretionary funding but that they believe will likely not rank high in FAA's prioritization:

Non-airfield projects. According to officials from a large hub airport we interviewed, the airport has made several investments in its airfield in the last few years and does not have any major airfield projects planned. These officials stated that they do have several non-airfield projects planned that are AIP-eligible, such as renovating gate holding areas in the terminal. However, airport officials stated that non-airfield projects do not compete well for AIP discretionary funding based on FAA's prioritization process. As a result, they do not anticipate that they will receive AIP funding for these projects.

Airfield projects. Similarly, airport officials from one medium hub airport explained that some of the airfield projects that they have planned are eligible for AIP discretionary funding but are not considered "high priority" projects according to FAA prioritization criteria. For example, they currently have a taxiway and apron upgrade project planned, but this project may not compete well against other projects when considering FAA's AIP prioritization process. According to this airport official, runways are the highest priority and almost always get AIP funding. The official added, however, that the farther away a project is from the runway, the less likely it is to receive AIP funding.
In addition, officials from five airports noted that while overall AIP grant-funding levels have remained relatively constant in recent years, demand for discretionary AIP grant funding has increased, thereby increasing competition for this funding. According to FAA officials, the amount of funding that FAA has available for discretionary grants changes year to year. For example, the amount of discretionary funding allocated to airports annually can vary based on an airport's decisions to carry entitlement funding over multiple years, as entitlement funding that is carried over becomes discretionary. According to FAA officials, because a very high percentage of discretionary funding comes from funding that has been carried over, it is difficult for airports to plan for or count on this funding being available in any given year. During fiscal years 2013 through 2017, the amount of discretionary funding that was awarded averaged $1.6 billion annually. Of this amount, "pure" discretionary funding averaged $56 million annually, or about 4 percent of total AIP discretionary funding. Pure discretionary funding refers to the amount remaining after discretionary set-asides have been funded. FAA distributes pure discretionary funding to eligible projects at any airport on a competitive basis. As previously discussed, an additional $1 billion in supplemental discretionary AIP funding was appropriated in 2018, and an additional $500 million in discretionary AIP funding was appropriated in 2019. However, according to FAA officials, the number of applications they received for this funding exceeded the amount of funding that was available. Specifically, according to officials, FAA received more than 2,500 funding requests totaling more than $10 billion in 2018 for the $1 billion authorized as supplemental discretionary AIP grant funding.
As of May 2019, FAA has awarded or anticipates awarding $985 million in supplemental discretionary AIP grant funding to 164 airports in 50 states, the District of Columbia, and Puerto Rico. The supplemental grants fund projects ranging from runway reconstruction and rehabilitation to the construction of taxiways, aprons, and terminals.

Competing Airport and Airline Priorities May Affect Airport Infrastructure Investments

About half (12 out of 19) of the airport officials we spoke to stated that competing airport and airline priorities for capital infrastructure investments can pose challenges to funding infrastructure projects and can delay projects. For example, some of these officials stated that if an airline does not agree that there is a business case or that an infrastructure project is justified, then that lack of agreement can affect the airport's ability to fund the project or delay the project altogether. The extent to which airlines are involved in the decision-making of airport infrastructure investments varies by airport and depends on the type of "use-and-lease" agreement between the airport and the airline. These agreements set forth the terms and conditions for establishing airline rates and charges and investing in capital improvements. Some agreements have a "majority-in-interest" (MII) provision, which requires airports to obtain airlines' approval for certain infrastructure investments. Officials from one large hub airport stated that they have an MII agreement, requiring airlines' approval of certain projects and project financing strategies. They further explained that debt financing would affect their airline rates and charges and would therefore require the airport to obtain approval from airlines before using general airport-revenue bonds on a project.
While airport officials would like to add more gates to the airport and finance that project with general airport revenue bonds, these officials stated that some airlines may not support unassigned gate additions because it could increase competition. According to FAA officials we spoke with, some airports have been able to move toward shorter-term agreements with greater flexibility to adapt to changing needs; however, many agreements still include some form of MII provisions. According to officials from four smaller airports, airlines are less likely to support infrastructure-related increases in airline rates and charges at smaller airports than at larger airports. For example, officials from a non-hub airport stated that smaller airports have a more difficult time negotiating higher rates and charges with airlines because of competition from other nearby airports. ACI-NA representatives also stated that medium hub airports that are not connecting hubs for the three large U.S. network airlines have less of an opportunity to receive capital investments from network airlines compared to larger airports. Representatives from all eight airlines that we spoke to stated that the types of airport infrastructure projects they see a need for are demand-driven development projects that expand airfield capacity, increase the number of gates at an airport, or address safety. Of these airlines, six also stated that they see a need for infrastructure development at larger airports in particular. For example, representatives from one airline stated that they want to collaborate with airports on capital development projects that are scalable and where passenger enplanements are increasing. In addition, representatives from five airlines that we spoke to said that they would like to have more input on airport infrastructure investment decisions.
In addition, representatives from five airlines raised concerns that airlines do not have a role in decisions on how airports can invest PFC revenues. According to our prior work, PFCs provide airports a source of funding for airport development over which airports have greater local control, and airlines a more limited say, than over revenue from airport terminal rents or landing fees. In addition, representatives from two airlines we spoke to said that FAA exercises limited oversight of infrastructure projects funded by PFCs, and that this limited oversight results in airports' using PFC funding for projects that airlines do not see a need for. The representatives stated that FAA approves most PFC applications for projects and that they believe FAA should do more to ensure that airports are not using PFC revenues for unnecessary capital development not supported by airlines. For example, one airline objected to the use of over $1.5 billion of PFC funds for the multi-phase construction of the Phoenix Sky Train linking light rail, parking, and terminals, as representatives believed that there was not an adequate business case to justify the construction of the Sky Train. According to these airline officials, because the airport used PFC revenues for the project, other necessary terminal improvements have been largely debt funded. According to FAA officials, when reviewing PFC applications, they assess the extent to which the airport has demonstrated a need for the project. FAA officials stated that airports are familiar with FAA criteria and will generally not submit projects that will not meet the criteria and that could be denied. In addition, FAA officials stated that while it is unusual for FAA to deny an application, they have denied individual projects.
Agency Comments
We provided a draft of this report to the Department of Transportation (DOT) for review and comment. DOT provided technical comments, which we incorporated as appropriate.
We are sending copies of this report to the appropriate congressional committees, DOT, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-2834 or krauseh@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.
Appendix I: Ownership and Infrastructure Funding and Financing of Foreign Airports
Foreign Airports' Ownership Models and Primary Infrastructure-Funding Sources Differ from U.S. Airports
More Foreign Airports Are Privately Owned or Operated Compared to U.S. Airports
Traditionally, airports around the world were primarily owned and managed by national governments, but that has changed over time. Beginning in the 1980s and through the 1990s, governments outside of the United States began shifting toward privatization and deregulation of airports. According to the 2016 Airports Council International - World's (ACI-World) inventory of privatized airports worldwide, 614 commercial service airports (14 percent) have private sector participation. Although ACI-World estimates that a majority (86 percent) of the 4,300 airports with scheduled traffic around the world are publicly owned by a government or government entity, airports with private sector participation handle over 40 percent of all global air traffic. Today, there is a range of airports' ownership and operating models.
Through a literature review of reports and other documents from ACI-World, Airports Council International - EUROPE (ACI-EUROPE), the International Civil Aviation Organization, and the International Air Transport Association, we identified five general types of airport ownership structures outside of the United States:
Government owned and operated: The airport is fully owned and operated by a public authority or by a mixture of public authorities at a local, regional, national, or transnational level.
Government owned and privately operated: The airport is government owned, but the airport operator—considered as the entity that is responsible for the day-to-day operation of airport services and facilities—is a private company.
Partially privatized: The airport is partially privatized (e.g., mixed public-private ownership), meaning the airport's shares are owned by a combination of private investor(s) and public authorities of the country where the airport is located.
Fully privatized: The airport is fully owned and operated by a commercial company wholly owned by private individuals or enterprises.
Not-for-profit, private corporation: The airport has been transferred to or leased by a not-for-profit corporation. The not-for-profit corporation is expected to be financially self-sufficient and fully responsible for funding all operating and infrastructure costs.
While U.S. airports are predominantly publicly owned and operated, private participation, such as private ownership or private operation contracts, is more common at airports in other countries.
Airport Ownership in the United States
In the United States, nearly all of the 3,330 commercial-service or general-aviation airports designated as part of the national airport system are publicly owned by local and state governments, regional airport authorities, or port authorities.
Airport ownership in the United States has evolved under a public model since the 1920s as a way to promote the development of the U.S. aviation industry. In 1996, the Federal Aviation Reauthorization Act of 1996 established the Airport Privatization Pilot Program, which reduced some of the barriers to privatizing airports and allowed for commercial service airports to be leased and for general aviation airports to be sold or leased. However, as we have previously reported, in the 18 years following the program's inception, only two airports were privatized, and one of those has since reverted to public control. While participation in the Airport Privatization Pilot Program has been very limited, some airports have entered into public-private partnerships with private entities through management contracts for terminals, which may be leased or outsourced to airlines or other contractors, or for food, rental car, and other concession agreements. For example, the Paine Field Snohomish County Airport in Washington, previously a general aviation airport, entered into a ground-lease agreement with a private airport developer—Propeller Airports—to build and operate a small passenger terminal for commercial service. The terminal opened for commercial service in March 2019, and is depicted in figure 9. Propeller Airports is responsible for the landside infrastructure investments and terminal maintenance. Snohomish County is responsible for maintaining and operating the airside infrastructure, which includes the runways and taxiways, but leases the aprons and the terminal land to Propeller Airports.
Airport Ownership in Other Countries
Privatized airports are more prevalent in foreign countries. According to a 2016 report by ACI-EUROPE, which examined ownership structures of airports across Europe, about 41 percent of European airports are fully privately owned or partially privatized.
According to ACI-World, 75 percent of airports with passenger traffic in Europe have private sector involvement through fully privatized airports or public-private partnerships. Latin America-Caribbean airports (60 percent) and Asian airports (45 percent) have the second and third highest private sector involvement, respectively. Industry stakeholders we interviewed said that in some Asian countries, such as Japan and Singapore, airports that were previously government owned have already been privatized or are transitioning to privatization. In addition, while ownership models can vary by country, they can also vary within a country. For example, according to ACI-EUROPE's 2016 report, the United Kingdom's airports are 53 percent fully private, 26 percent partially privatized, and 21 percent fully public. As we have previously reported, different airport ownership structures, motivations, and financing have driven airport privatization in other countries. For example, in several countries, the national government built, owned, and operated the country's airports prior to privatization. We previously reported that national ownership enables a central government to direct the sale of its airports and can make for a more streamlined privatization transaction, reducing transaction costs for both the public-sector owner and private-sector bidders. Foreign governments may also be more motivated to privatize their airports than U.S. public-sector airport owners. According to the International Civil Aviation Organization, foreign governments' reasons for privatizing their airports vary, including an identified need for private-sector capital investments in existing or new airports and a national move toward privatization of public assets or companies. We have previously reported that airports in other countries often have less access to public funds or tax-exempt bonds than publicly owned and operated U.S. airports, making them more reliant on private financing for airport improvements.
Our prior work found that a key factor that can hinder U.S. airport privatization is the loss of some federal AIP funds and the loss of easy access to tax-exempt financing.
Selected Foreign Airports Generally Do Not Rely on Government Funding for Infrastructure Projects
Most of the five foreign airports we selected for our review do not receive government funding. We selected and reviewed five airports in other countries that represent each type of ownership structure previously discussed. Representatives from our five selected foreign airports all said that they rely on aeronautical revenue, which includes revenue from passenger charges and airline rates and charges, as the primary source of funding for capital development. Representatives from four of our selected foreign airports said that they rely on debt financing for infrastructure funding as well. Representatives from only one selected airport, Changi Airport in Singapore, said that the airport has received government funding for infrastructure projects. Table 3 below summarizes the main sources of infrastructure funding available to these selected airports.
Aeronautical Revenue
Airline Rates and Charges
Representatives from the selected airports we interviewed said that they generate infrastructure funding from various sources of aeronautical revenue, including airline rates and charges. Some foreign airport representatives told us that revenue from airline rates and charges is not required to be used for aeronautical-related costs or infrastructure, or within the airport. Some airports, such as Helsinki Airport, may operate within a consortium network, where revenue is shared among all airports in the network to cover costs. Additionally, some airports have regulations for setting airline rates and charges. For example, the Civil Aviation Authority in the United Kingdom regulates Heathrow Airport's airline rates and charges.
Selected airport representatives we spoke with said that they consult airlines when adjusting airline rates and charges. For example, the Helsinki Airport official said that the airport updates its airline charges once a year and that airlines have an opportunity to appeal the change. Representatives from the International Air Transport Association and the Steer Group Inc. said that some foreign airports may have higher airline rates and charges compared to some airports in the United States due to several factors, including the need to generate returns for private financing and flexibility in setting rates and charges, as outlined below.
Generating returns for private financing. Foreign airports with private investment or financing may have higher rates because they need to generate returns to pay back private financing. Privately owned airports may also be under pressure to generate returns for investors and may therefore divert additional revenue away from infrastructure funding.
Flexibility in setting rates and charges. Foreign airports generally have greater flexibility to set airline charges to meet airport needs, a flexibility that may result in higher rates and charges. For example, Canadian airports are generally able to set and adjust airline and passenger charges as needed, and charges vary by airport. In Singapore, Changi Airport has a passenger charge and a pre-funding levy for its new terminal project. Airports in the United Kingdom, including Heathrow Airport, have a regulator that sets the airline and passenger charge cap and adjusts it every 2 years. In addition, foreign airports allow limited airline input on determining airport capital investments and fees charged to airlines. For example, according to ACI-World, airports consult airlines on airport charges and on capital developments, but airport proposals can usually be implemented even if airlines do not support them, as long as a due and proper consultation process is held.
An international airline stakeholder said that the extent of airline input on airport capital investment and fees charged to airlines is dependent on the country's specific regulatory model and the willingness of the airport operator to consult with airlines, but that in some countries, airline consultation is limited. Representatives from our selected foreign airports said they generally keep airlines informed. For example, the Toronto Pearson International Airport has a consultative committee approach with airlines on larger projects costing over $50 million. If the airlines do not approve a project through the consultative committee, the project must be put on hold for one year before it can proceed.
Passenger Charges
Other sources of aeronautical revenue include passenger charges. As of October 2019, for the foreign airports we reviewed, passenger charges ranged from the U.S. dollar equivalent of $9.65 to $58.58 per local traffic passenger (see table 4). Industry stakeholders and international airport association stakeholders said that U.S. airports have a unique ownership and funding model compared to foreign airports. U.S. airports have an element of public control of funding through the federal Airport Improvement Program (AIP) grants and passenger facility charges (PFC), as projects funded through these sources must receive approval from the Federal Aviation Administration. According to these stakeholders, U.S. airports are subject to different regulations related to setting passenger charges. As a result, we have determined that the comparability of these charges is limited. In addition, differences in ownership models, private investment, and funding between U.S. and foreign airports also limited the comparability of these charges. Table 4 provides an overview of passenger charges and levies at selected airports in other countries.
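As a rough illustration of the kind of normalization behind such cross-airport comparisons, the following sketch converts charges levied in local currencies into U.S.-dollar equivalents. The exchange rates and the sample charges below are illustrative assumptions for demonstration only; they are not the figures used to produce table 4.

```python
# Illustrative normalization of per-passenger charges to U.S. dollars.
# All rates and charge amounts here are assumed values, not report data.
ASSUMED_USD_PER_UNIT = {"EUR": 1.12, "GBP": 1.27, "CAD": 0.76, "SGD": 0.73}

def to_usd(amount: float, currency: str) -> float:
    """Convert a local-currency charge to a U.S.-dollar equivalent."""
    return round(amount * ASSUMED_USD_PER_UNIT[currency], 2)

# Hypothetical per-passenger charges at four unnamed airports.
sample_charges = [("A", 8.60, "EUR"), ("B", 30.00, "GBP"),
                  ("C", 25.00, "CAD"), ("D", 13.30, "SGD")]
usd_equivalents = {name: to_usd(amt, cur) for name, amt, cur in sample_charges}
```

Because exchange rates move over time, such dollar equivalents are sensitive to the conversion date, which is one reason the comparability of these charges is limited.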
Selected foreign airports adjust passenger charges based on the airport's building and infrastructure needs and the cost imposed by passengers on the airport system. How and when these airports make adjustments varies. For example, one of our selected airports has a government entity that regulates passenger charges. More specifically, the Civil Aviation Authority in the United Kingdom regulates Heathrow Airport's passenger charges. Every 5 years, the Civil Aviation Authority determines the maximum amount that the airport can charge based on the costs incurred by the airport. Other selected airports consider adjustments on an "as needed" basis, including the Toronto Pearson International Airport. Representatives from the Toronto Pearson International Airport said that they set and adjust passenger charges as needed to fund infrastructure investments. The airport assesses charges annually and adjusts its passenger charges only if there is a material imbalance between required cost recoveries and the charges collected. Airport officials also stated the airport increases airline rates and passenger charges only when needed to generate sufficient revenue to cover the costs of planned infrastructure. Similar to airline rates and charges, selected foreign airport representatives told us that there generally are no restrictions on how the airports use revenue from passenger charges for infrastructure or operational costs. Industry stakeholders said that some airports, such as Heathrow Airport, do not have revenue diversion limitations, so revenue generated from passenger charges at the airport is not required to be reinvested back into the airport. By comparison, airport revenue in the United States is regulated, and, generally speaking, revenue generated by an airport must go toward certain costs at that airport. Most of our selected foreign airport representatives (4 out of 5) also said that they rely on debt financing, through private bonds or commercial loans.
Industry stakeholders said that airport debt financing internationally is similar to that in the United States, but foreign airports generally do not have access to the municipal bond market. In the United States, airport bonds are generally tax exempt. Representatives of our selected foreign airports said that they use various types of debt financing, including commercial loans from financial institutions; equity or debt financing, such as bonds in commercial capital markets; or loans from private investors. Most of our selected foreign airports (3 out of 5) do not receive government funding. International airport associations said that the extent to which an airport receives government funding may depend on whether the government owns the airport or has a role in operating the airport. For example, Changi Airport officials said that the Singaporean government is providing an unspecified amount of government funding for the new Terminal 5 project at Changi Airport, which is government owned and privately operated. In another example, Toronto Pearson International Airport does not receive government funding; however, in Canada, small or rural airports can receive some funding from the Canadian Airports Capital Assistance Program. Similarly, Finavia officials said that although the Helsinki Airport is publicly owned and operated, it does not receive any government funding. To provide information about how each of the five selected foreign airports funds and finances infrastructure projects, we developed the following case studies. These airports were selected based on ownership model and passenger traffic. The case studies provide information on main sources of funding and financing for the airports' infrastructure developments, factors considered when setting airline and passenger charges, coordination with airlines on capital development, and recent and planned infrastructure investments for each selected foreign airport.
The majority of airports in Finland are owned and operated by the government-owned company Finavia Corporation (Finavia), a limited liability company wholly owned by the Finnish government. Specifically, Finavia operates a network of 21 Finnish airports, of which 19 offer commercial service and two are military airports. Of the remaining three airports in Finland, two are owned by local municipalities, and one is privately held.
Background
Helsinki Airport is owned and operated by a government-owned company, Finavia. Of Finavia's airports, according to the Finavia representative, Helsinki Airport has the most connecting international flights and passenger boardings. For example, Helsinki Airport provides direct service to 162 international destinations, including 22 direct flights to Asia. Helsinki Airport has experienced strong passenger growth in recent years. In 2018, Helsinki Airport had 21 million passenger boardings, an increase from the prior year of about 10 percent. Most of this increase was attributable to international traffic. The Finavia representative said that Finavia anticipates passenger traffic to slow in 2020, due to an anticipated slowdown in Europe's economic growth.
Main Sources of Funding and Financing for Airport Infrastructure Investments
According to the Finavia representative, Helsinki Airport's main sources of funding for infrastructure improvements are (1) airline rates and charges, (2) passenger charges, (3) other airport-generated revenue, and (4) debt financing. Helsinki Airport collects aeronautical revenue from airline rates and charges and passenger charges directly from the airlines. Helsinki Airport does not receive any public or government funding, despite being government owned, and the airport does not have any public-private partnerships. In 2010, Finavia began operating as a limited liability company, rather than a government agency.
The Finnish government corporatized Finavia to align with the European Union (EU) principles on EU services, movement of services, and competition. The Finavia representative said that the change in corporate structure helps ensure that the government is not subsidizing or promoting unfair competition practices.
Airline rates and charges: Helsinki Airport generates revenue from air carrier and other aircraft operator rates and charges such as landing, aircraft parking, and electricity charges. In 2019, Finavia raised airline charges by 2.1 percent from 2018 levels, prompted by higher service costs resulting from airport investments. The Finavia representative said that airline rates and charges make up approximately 40 percent of the airport's total aeronautical revenue. Under the airport network approach, Finavia can offset losses at one airport with revenue from a more successful airport. The Finavia representative said that some airports in the network are self-sustaining and generate sufficient revenue to cover the costs of airport operations; other network airports do not. According to the Finavia representative, Finavia applies uniform airport charges within the airport network to recover operational and infrastructure costs across the airport network and to comply with EU directives on airport charges.
Passenger charges: Helsinki Airport collects a passenger charge from airlines in order to fund infrastructure used for servicing passengers. As of January 2019, Helsinki Airport has a euro (€) 8.60 (U.S. dollar (USD) $9.65) fee per departing passenger and a €4.10 (USD $4.60) fee per transferring passenger. The Finavia representative said that passenger fees make up approximately 60 percent of the airport's total aeronautical revenue, which includes both airline and passenger fees. According to the Finavia representative, Helsinki Airport does not designate revenue from airline and passenger charges for a specific use.
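Two back-of-the-envelope checks can be made on the Helsinki Airport figures above: the euro-to-dollar rate implied by the quoted departing-passenger fee, and the reported split of aeronautical revenue (roughly 60 percent passenger fees, 40 percent airline rates and charges). The sketch below uses those reported figures; the total-revenue amount passed in at the end is a hypothetical illustration, not a Finavia figure.

```python
# Implied conversion rate behind the quoted departing-passenger fee.
DEPARTING_FEE_EUR = 8.60
DEPARTING_FEE_USD = 9.65
implied_usd_per_eur = DEPARTING_FEE_USD / DEPARTING_FEE_EUR  # about 1.12

def split_aeronautical_revenue(total, passenger_share=0.60):
    """Split a total aeronautical revenue amount into the passenger-fee
    and airline-charge portions, using the reported 60/40 split."""
    passenger_fees = total * passenger_share
    airline_charges = total - passenger_fees
    return passenger_fees, airline_charges

# Hypothetical total of EUR 100 million, for illustration only.
pax, air = split_aeronautical_revenue(100.0)
```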
Revenue from airline and passenger charges has been used to cover costs from providing services and operations within the Finavia network. According to the Finavia representative, aeronautical charges, including airline rates and charges and passenger charges, are evaluated and updated once a year, and Finavia sets the same charges for all airports in the Finavia airport network.
Other airport-generated revenue: Helsinki Airport also generates non-aeronautical revenues from sources such as concessions, commercial services at terminals, parking services, security control, and rental income from real estate.
Debt financing: Helsinki Airport uses debt financing from a variety of sources, including private banks, financial institutions, and public sector sources such as the European Investment Bank, the financing institution of the European Union, and the Nordic Investment Bank. The financing that Helsinki Airport has obtained is similar to traditional debt financing. According to the Finavia representative, Helsinki Airport does not have any restrictions or legal requirements on the types of loans that the airport can take on, nor does Finavia pledge revenue from any specific source toward the repayment of loans. However, the Finavia representative stated that Finavia does not issue bonds. The representative said that, generally, the airport has relied on traditional lending because it is easier to obtain and repay a bank loan as compared to other types of debt.
Factors Considered when Setting Airline and Passenger Charges
The representative said Finavia considers several factors when setting airline and passenger charges. The Finnish Act on the Airport Network and Airport Charges requires that the pricing of airport charges within the airport network be uniform, common, and transparent, based on the service level offered, and applied on non-discriminatory and equal grounds.
Finavia therefore considers the Finavia airport network revenue; the cost of providing aeronautical services (including operational and electricity costs); and the costs of capital for infrastructure investment when setting the airport's airline rates and charges. According to the Finavia representative, Helsinki Airport also considers the airport market to ensure that its airline and passenger fees are competitive with similar airports in other European countries. When Finavia makes changes to its airline or passenger charges, the Finavia representative said that airlines have an opportunity to appeal the change. The Finnish Transport and Communications Agency acts as an independent supervisory authority to process disagreements on airport charges.
Coordination with Airlines on Capital Development
As part of the capital development process, Finavia must consult with airlines to seek input on planned capital investments at the airport before the airport carries out any major new infrastructure projects. Finavia organizes these discussions to assist with negotiations, but the Finavia representative said these discussions are specific to the individual airport rather than the overall Finavia network. In addition, according to the Finavia representative, when setting airline and passenger charges, Finavia consults with airlines and provides information about how airport charges relate to the facilities and services at the airport. According to the Finavia representative, the Helsinki Airport development program, initiated in 2014 with an anticipated completion date of 2030, is the largest expansion project in the airport's history. It will expand Helsinki Airport's capacity and increase the number of gates. For example, the airport has planned a terminal building project that will expand the terminal by 45 percent and double the number of gates for wide-body aircraft from eight to 16 gates.
In 2016, as seen in figure 10, Helsinki Airport opened one of the passenger terminal expansions, which added 12 new departure gates to the airport. On the airside, the airport will also renovate the apron area to accommodate large aircraft. Additionally, Helsinki Airport is working on a project to improve baggage handling capabilities to accommodate the anticipated increase in baggage volume expected from airlines' use of larger aircraft. According to the Finavia representative, Helsinki Airport planned these capital improvements in response to expected passenger traffic growth. The representative anticipates that between 2025 and 2030, annual passenger boardings at Helsinki Airport will reach 30 million. A rendering of the entrance to Helsinki Airport's completed terminal expansion is shown in figure 11, below. Finavia will use airport cash flows from aeronautical revenue, including passenger fees, and non-aeronautical revenue to fund the infrastructure projects. Finavia estimates that the total cost of the Helsinki Airport infrastructure expansion will be €1 billion (USD $1.1 billion).
Airports in Singapore
Passenger traffic: 67 million
Singapore has two airports that provide commercial service—Changi International Airport (Changi Airport) and Seletar Airport, which is a smaller airport that provides commercial and general aviation service.
Background
Changi Airport is the primary commercial airport in Singapore, located off the eastern coast of the country. Changi Airport was built in 1981, and according to ACI-World, was the world's 19th busiest airport in terms of passenger boardings in 2018. While Changi Airport is government owned, the airport is operated by the Changi Airport Group—a private limited company. The Changi Airport Group is responsible for the airport's operations and management, air hub development, commercial activities, and airport emergency services.
It is also responsible for maintaining and investing in airport infrastructure and ensuring the airport is financially self-sustaining. Both airports are owned by the Singapore Ministry of Finance and operated by the Changi Airport Group. The Singapore Ministry of Finance does not have a role in the daily operations and management of the airports but reviews the types of planned airport infrastructure investments. The Changi Airport Group's board of directors is made up of two representatives from the Singapore Ministry of Finance and other board members from the private sector. The board has discretion to design, budget, and build infrastructure projects. Changi Airport is a major hub for the region, and according to the Changi Airport Group representative, passenger boardings have been increasing steadily. For example, from 2005 to 2018, boardings increased by 30 percent. In 2018, the airport had 66.6 million boardings, an increase of about 5.5 percent from the prior year. The Changi Airport representative said that the airport is currently operating at 85 percent of capacity for passenger boardings but anticipates reaching 100 percent of capacity by approximately 2026–2027. The airport has made significant investments to enhance the passenger experience at the airport. For example, the airport has enhanced terminal features for passengers, including a butterfly garden, indoor waterfalls, a four-story slide, 19 airport lounges, and luxury shopping (see fig. 12). The 2019 World Airport Awards named Changi Airport the World's Best Airport for the seventh consecutive year. From 1984 until 2009, Singapore's airports were owned by the Singaporean government and operated by the Civil Aviation Authority of Singapore, under the Ministry of Transport. In 2009, the airports were corporatized, and the Changi Airport Group took over airport operations and management.
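A capacity projection like the one the Changi representative described (85 percent utilization today, 100 percent expected around 2026-2027) can be sanity-checked by solving utilization × (1 + g)^t = 1 for t under an assumed constant annual growth rate g. The 2 percent rate in the sketch below is an assumption for illustration only; it is not Changi's own traffic forecast, which would rest on the airport's internal models.

```python
import math

def years_to_full_capacity(utilization: float, annual_growth: float) -> float:
    """Years until traffic growing at annual_growth fills remaining capacity.

    Solves utilization * (1 + annual_growth)**t = 1.0 for t.
    """
    return math.log(1.0 / utilization) / math.log(1.0 + annual_growth)

# At an assumed 2 percent annual growth, an airport at 85 percent of
# capacity would reach 100 percent in roughly 8 years.
t = years_to_full_capacity(0.85, 0.02)
```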
Through two companies, Temasek Holdings and GIC Private Limited, the Ministry of Finance owns and invests in companies that serve strategic national interests, such as infrastructure. For example, according to the Changi Airport representative, Temasek has a 50 percent stake in much of Singapore's major infrastructure, including a 54 percent stake in Singapore Airlines, the country's national carrier. The Civil Aviation Authority of Singapore continues to economically regulate the Changi Airport, promote the growth of the air hub and aviation industry in Singapore, oversee and promote aviation safety, and provide air navigation services.
Frankfurt Airport's Case Study
Passenger traffic: 222 million
Number of airports: 36 commercial airports, which include 16 international airports and 20 regional airports with scheduled passenger service
Background
Frankfurt Airport began operations in 1936. According to Fraport AG's 2018 annual report, in fiscal year 2018, Frankfurt Airport was the largest commercial service airport in Germany and the fourth largest commercial service airport in Europe. The airport is partially privatized and is owned and operated by Fraport AG. Frankfurt Airport was previously jointly owned by the federal government, the State of Hesse, and the City of Frankfurt. In June 2001, Frankfurt Airport was partially privatized, with private entities acquiring a minority ownership stake in the airport. Currently, the State of Hesse and City of Frankfurt own about 51 percent of the airport, with the remaining approximately 49 percent held by private entities. Until the 1980s, airports in Germany were traditionally owned and operated by the government. Following the 1982 creation of a federal program to privatize airports, several airports were partially privatized. According to an Airports Council International-EUROPE survey conducted in 2015, there are now two different airport ownership structures in Germany.
Passenger traffic at Frankfurt Airport has increased over the last few years. According to Fraport AG’s 2018 annual report, Frankfurt Airport reached 69.5 million passengers in 2018—an increase of 5 million passengers, or about 8 percent, over the prior year.
Partially privatized: about 47 percent of airports in Germany are partially owned by local, regional, or federal governments.
Fully government owned: about 53 percent of airports in Germany are owned by a public authority, or by a mixture of public authorities, at a local, regional, national, or transnational level.
Main Sources of Funding and Financing for Airport Infrastructure Investments
Frankfurt Airport’s main sources of funding for capital improvements are (1) airline rates and charges, (2) passenger charges, (3) other airport-generated revenue, and (4) debt financing.
Airline rates and charges: Frankfurt Airport collects revenue from airline rates and charges paid by airlines servicing Frankfurt Airport. These charges include airline takeoff and landing, noise, parking, and other charges. Under German law, airports must obtain approval for certain airline rates and charges from the regional aviation authority, including airline takeoff and landing charges, noise charges, aircraft movement area charges, and parking charges. The only airport charges not subject to approval are charges for central ground-service infrastructure facilities and ground service charges. The regional aviation authority responsible for Frankfurt Airport is the Ministry of Economics, Energy, Transport and Regional Development, State of Hesse. In addition, Airports Council International (ACI)-EUROPE representatives said that the majority of airports in Europe with commercial service, including Frankfurt Airport, offer discount incentives to airlines in exchange for delivering higher volumes of passengers.
Passenger charges: Frankfurt Airport has passenger charges that vary depending on the destination of the passenger’s flight. As with airline rates and charges, airports must also obtain approval for passenger charges from the regional aviation authority. For example, as of January 1, 2019, these charges range from euro (€)12.93 (U.S. dollar (USD) $14.51) for transfer flights to all destinations to €25.16 (USD $28.23) for international flights initiating from Frankfurt Airport.
Other airport-generated revenue: Frankfurt Airport also generates revenue from airport concessions, real estate leases, parking, and other sources.
Debt financing: Frankfurt Airport also relies on debt financing to fund infrastructure projects. However, we were unable to obtain data from Fraport AG on how much debt financing Frankfurt Airport used for capital development projects in 2018.
We were not able to confirm financial information with Fraport AG about how much total revenue Frankfurt Airport generated from each of the individual sources described above. Therefore, we are not able to provide information on the total revenue generated by Frankfurt Airport in 2018. However, information is available on the total revenue for all airports in the Fraport AG network. Specifically, according to Fraport AG’s 2018 annual report, the total revenue generated from approved airline rates and charges, passenger charges, and passenger services combined for the full Fraport AG group was €1,006 million (USD $1.2 billion). In addition, the total revenue generated from other airport-generated revenue for the full Fraport AG group was €507 million (USD $599 million) in 2018. Fraport AG is in the process of building a new terminal—Terminal 3—at Frankfurt Airport to provide sufficient capacity and accommodate growing air traffic at Frankfurt Airport. Construction for the project began in 2015 and is estimated to be completed in 2023.
The first phase of the project involves construction of the main terminal building, which will include the arrival and departure levels, lounges, concession area, and a baggage handling system. This phase of the project is expected to provide capacity for about 14-million passengers a year. The second phase of the project will expand the airport facility and is expected to increase passenger capacity by up to 5-million additional passengers when completed in 2021. According to Fraport AG’s current plans, the new terminal is expected to increase capacity by up to 21 million more passengers. Fraport Ausbau Süd GmbH, a wholly owned subsidiary of Fraport AG, is responsible for managing, supervising, and monitoring the construction project. The project is being privately financed, and the estimated budget of the project is about €3.5 billion to €4 billion (USD $4.1 billion to $4.7 billion). According to Fraport AG, this project is Fraport’s largest single investment at Frankfurt Airport. We were unable to confirm information with Fraport AG representatives about factors they consider when setting airline and passenger fees or how they coordinate with airlines on the airport’s infrastructure development.
Background
Heathrow Airport is Europe’s busiest airport with the highest passenger boardings, and is the United Kingdom’s hub airport. Heathrow Airport has undergone a transformation from a government-owned airport to a privately owned airport. Heathrow Airport was privatized in 1987 as part of the privatization of the British Airports Authority. Currently, Heathrow Airport Holdings Limited owns and operates Heathrow Airport. In 1965, the Airports Authority Act established the British Airports Authority, an independent government agency, which assumed ownership and management of airports in the United Kingdom.
Between 1966 and 1987, the British Airports Authority acquired ownership and operation of seven of the 22 government airports—Heathrow, Stansted, Prestwick, Gatwick, Edinburgh, Aberdeen, and Glasgow airports. Although Heathrow is privatized, any airline and passenger charges the airport collects are subject to economic regulation by the U.K.’s Civil Aviation Authority. The Civil Aviation Authority—a government agency—regulates airport charges for U.K. airports with more than 5-million annual passengers. Airports Council International (ACI)-EUROPE representatives said that the Civil Aviation Authority regulates Heathrow on the basis that Heathrow is likely to possess significant market power for aeronautical services. In 1987, the United Kingdom privatized the British Airports Authority due to limited government funding and a need for significant capital development at large airports, according to Heathrow Airport representatives and industry stakeholders. All seven airports owned by the authority were privatized. The authority was subsequently acquired in 2006 by an international consortium led by Ferrovial Aeropuertos S.A. of Spain (Ferrovial S.A.) and named BAA Ltd. This entity was later renamed Heathrow Airport Holdings Limited. The United Kingdom became the first country to privatize its major airports. According to an Airports Council International-EUROPE survey conducted in 2015, airports in the United Kingdom have one of the following three ownership structures:
Government owned: about 21 percent of airports in the United Kingdom are owned by local, regional, or national governments.
Heathrow has experienced increased passenger numbers as a result of airlines’ use of larger aircraft that have more seats per aircraft.
Main Sources of Funding and Financing for Airport Infrastructure Investments
Heathrow Airport’s main sources of funding for capital improvements are (1) airline rates and charges, (2) passenger charges, (3) other airport-generated revenue, and (4) debt financing.
Fully privatized: about 53 percent of airports in the United Kingdom are owned by private entities.
Airline rates and charges: Heathrow Airport collects revenue from charges that it imposes on airlines that fly to and from Heathrow Airport. These charges include landing, parking, and emissions charges. Under the authority of the Civil Aviation Act of 2012, the Civil Aviation Authority establishes a pricing formula known as the “maximum revenue yield,” which sets limits on the airline and passenger charges on a per-passenger basis. In 2018, Heathrow Airport generated pounds (£) 549 million (U.S. dollar (USD) $734 million) in landing and parking charges, according to Heathrow Airport’s 2018 financial statements.
Passenger charges: Heathrow Airport has several categories of passenger charges, which vary in rates depending upon the time of year of travel; whether the passenger is on a departing, transfer, or transit flight; or whether the flight destination is inside or outside of the European Union. For example, under the 2019 charges for Heathrow Airport, the passenger service charge would range from £19.84 to £46.02 (USD $25.25 to USD $58.58). In 2018, Heathrow Airport generated £1.2 billion (USD $1.6 billion) in revenue from passenger charges, according to Heathrow Airport’s 2018 financial statements.
Other airport-generated revenue: Heathrow Airport also generates other revenue from retail airport concessions, parking, and other sources. Heathrow Airport generated £656 million (USD $876 million) from these sources in 2018, according to Heathrow Airport representatives.
Debt financing: Heathrow Airport also relies on debt financing to fund infrastructure projects.
In 2018, Heathrow (SP) Limited raised approximately £2.3 billion (USD $3.1 billion) of debt financing to fund infrastructure projects. According to Heathrow Airport representatives, as of 2018, the airport has a total debt of £12 billion (USD $16 billion), which includes shareholders’ indebted equity. According to Heathrow Airport representatives, Heathrow Airport’s largest source of funding is from airline rates and charges and passenger charges, and in 2018 the airport generated £1.7 billion (USD $2.3 billion) from airline and passenger charges combined.
Factors Considered when Setting Airline and Passenger Charges
As previously discussed, the Civil Aviation Authority is responsible for economic regulation of Heathrow and other airports in the United Kingdom. Specifically, it regulates airline and passenger charges and determines the maximum amount in fees that Heathrow Airport can charge airlines and passengers on a 5-year basis, with adjustments every 2 years as needed. The level of airport charges that Heathrow levies each year is in accordance with the aviation authority’s pricing formula. Each year, Heathrow Airport publishes Conditions of Use that describes its airport charges. According to Heathrow Airport representatives, they have flexibility in how they categorize charges, but the charges must align with the European Union’s and United Kingdom’s non-discrimination principle standards and with the Civil Aviation Authority’s regulations. According to Heathrow Airport representatives, they consider several factors, such as the infrastructure needs at the airport and the real cost of providing services, when setting airport charges. They also set charges to influence and incentivize airline behavior. For example, to incentivize airlines to replace aircraft with newer, less polluting models, the airport charges airlines a higher fee per landing when they use older aircraft.
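The per-passenger "maximum revenue yield" cap described above can be sketched in simplified form. This is only an illustration, not the Civil Aviation Authority's actual formula: the inflation-minus-efficiency ("RPI minus X") structure is an assumption for the sketch, and the yield, RPI, X, and passenger values are all hypothetical.

```python
def max_allowed_revenue(prior_yield_gbp, rpi, x, passengers):
    """Simplified per-passenger price cap of the 'maximum revenue yield'
    kind: the permitted charge per passenger moves with inflation (RPI)
    less an efficiency factor X, and total regulated revenue is that
    yield times passenger numbers. A sketch only; not the CAA's actual
    determination, and all inputs are hypothetical."""
    allowed_yield = prior_yield_gbp * (1 + rpi - x)
    return allowed_yield * passengers

# Hypothetical inputs: £22 prior yield per passenger, 3 percent RPI,
# 1 percent efficiency factor, 80 million passengers.
cap = max_allowed_revenue(22.0, 0.03, 0.01, 80_000_000)
```

Under these made-up inputs, the permitted yield rises to about £22.44 per passenger, so total regulated revenue would be capped near £1.8 billion.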
In addition, Heathrow’s passenger fees vary depending on the passenger’s anticipated airport use and the costs imposed on the airport system. For example, passengers on domestic flights pay lower charges than passengers traveling on international flights, because domestic passengers do not use the same facilities, such as baggage facilities, as international passengers, and those facilities cost more than the facilities serving domestic passengers.
Coordination with Airlines on Capital Development
Heathrow Airport coordinates with airlines on capital development. For example, the airport organized an Airport Consultative Committee structure to obtain input on its most recent capital development plan from the 93 airlines operating at the airport. According to representatives from the International Air Transport Association, which is an association that represents airlines, the airport used this committee to reach agreement with these airlines on a capital expenditure plan related to development at multiple terminals at the airport.
Recent and Planned Infrastructure Investments at Heathrow Airport
According to Heathrow Airport representatives, within the last 15 years, Heathrow Airport has completed two large capital-development projects, and the airport is currently in a planning phase. In 2008, Heathrow Airport opened Terminal 5, which had a total project cost of £4.3 billion (USD $8 billion). Subsequently, in 2014, Heathrow Airport renovated its passenger terminal—Terminal 2—which cost approximately £2.5 billion (USD $4.1 billion) to complete. Planning and design is now under way for the construction of a third lateral runway and an associated new terminal facility at Heathrow Airport, according to Heathrow Airport representatives (see fig. 13). The new runway is intended to alleviate constraints on the number of available slots for landing and takeoff.
According to Heathrow Airport representatives, the new runway is expected to add capacity for at least an additional 260,000 flights per year, and the overall project will expand the airport’s surface space by 50 percent. Representatives said that according to current plans, construction of the runway and associated terminal is expected to begin in 2022 and operations are expected to start in 2027. The runway project is estimated to cost £14 billion (USD $18 billion) and will be funded through cash flows from operations, equity, and debt, according to Heathrow Airport representatives.
Airports in Canada
Passenger traffic: 159 million
Ownership Structure of Airports in Canada
Background
The Greater Toronto Airports Authority manages and operates the Toronto Pearson International Airport (Toronto Pearson). According to Statistics Canada passenger traffic data, Toronto Pearson is Canada’s busiest airport in terms of total passenger traffic. In addition, it is North America’s second busiest airport in terms of international traffic, according to Toronto Pearson’s 2018 annual report. The Greater Toronto Airports Authority is a not-for-profit corporation without share capital, meaning it does not have any shareholders and any profits earned are invested back into the airport. Until the early 1990s, the Canadian federal government owned, operated, and maintained most airports and air navigation facilities in Canada. In 1994, the Canadian federal government issued the National Airports Policy, which created different ownership structures for National Airports System (NAS) and non-NAS airports. The Greater Toronto Airports Authority assumed operations and management of Toronto Pearson in 1996 through a lease arrangement with the federal government. According to representatives from the airports authority, because Toronto Pearson generates the most revenues among Canadian airports, the authority pays the highest ground lease rate for Toronto Pearson among Canadian airports.
For every Canadian dollar (CAD) $1 (U.S. dollar (USD) $0.75) that the airport authority earns in revenue over CAD $250 million (USD $188 million), it pays CAD $0.12 (USD $0.09) for the ground lease. For NAS airports, the National Airports Policy devolved responsibility for the operations, management, and expenditures of NAS airports from the federal government to Canadian Airport Authorities, which were set up as not-for-profit and non-share corporations. The Canadian government, however, still owns these airports. Under the law, Canadian Airport Authorities pay lease payments to the government under 60-year leases that include an option to renew for 20 years. These airport authorities are required to invest airport-generated revenues in airport operation and capital development. Passenger traffic at Toronto Pearson has increased in recent years, and representatives from the Greater Toronto Airports Authority stated that according to their projections, passenger traffic is expected to continue to increase. In 2018, about 48-million passengers traveled through Toronto Pearson—an increase of 2.4 million, or 5 percent, over the prior year. According to these representatives, about 70 percent of this traffic is from origin and destination passengers and 30 percent from connecting passengers. According to the airports authority’s forecasts, passenger traffic at Toronto Pearson is expected to increase to 85 million in 2037. By contrast, for non-NAS airports, the National Airports Policy transferred ownership of these airports from the federal government to regional or local entities, such as local municipalities. The government continues to support remote and Arctic non-NAS airports that service isolated communities.
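The ground lease rate described above, 12 cents on every revenue dollar over CAD $250 million, can be worked through in a minimal Python sketch. Only the top tier described in this section is modeled; the National Airports Policy rent formula also includes lower tiers that are omitted here, and the revenue input below is illustrative.

```python
def top_tier_ground_lease(revenue_cad):
    """Lease owed under the top tier described above: CAD $0.12 for every
    dollar of revenue above CAD $250 million. Lower rent tiers that apply
    to revenue below the threshold are omitted from this sketch."""
    THRESHOLD = 250_000_000  # CAD $250 million
    RATE = 0.12              # 12 cents per dollar above the threshold
    return max(0, revenue_cad - THRESHOLD) * RATE

# At roughly CAD $1.47 billion in revenue (about the sum of Toronto
# Pearson's 2018 airline, passenger, and other revenue figures reported
# later in this section), the top-tier portion of the lease would be
# about CAD $146 million.
payment = top_tier_ground_lease(1_470_000_000)
```

An airport earning at or below the CAD $250 million threshold would owe nothing under this tier.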
Main Sources of Funding and Financing for Airport Infrastructure Investments
Toronto Pearson’s main sources of funding for capital improvements are (1) airline rates and charges, (2) passenger charges, (3) other airport-generated revenues, and (4) debt financing. Toronto Pearson does not receive any government funding, although some limited government funding is available to smaller airports through Canada’s Airports Capital Assistance Program.
Airline rates and charges: Toronto Pearson collects revenue from airline rates and charges, which include landing fees, terminal fees for general use of the terminal space, apron fees, deicing facility fees, and other airline charges. According to representatives from the Greater Toronto Airports Authority, airline rates and charges at Toronto Pearson have not been increased since 2012. Toronto Pearson generated about CAD $510 million (USD $393 million) in airline rates and charges in 2018, according to Toronto Pearson’s 2018 annual report.
Passenger charges: Passenger charges, called Airport Improvement Fees, are fees charged at every major Canadian airport and currently range from CAD $5 to CAD $40 (USD $3.76 to USD $30.12) per passenger. Each airport authority sets its own passenger fees, and there is no cap on how much each airport can charge. According to an international industry stakeholder, airport authorities, such as the Greater Toronto Airports Authority, set their respective fees based on their analysis of what the market can bear. Toronto Pearson’s passenger fee is CAD $25 (USD $18.82) for departing passengers and CAD $4 (USD $3.01) for passengers connecting through the airport as of January 1, 2019. The airport can only use this revenue for aeronautical-related expenses, such as capital development.
The Greater Toronto Airports Authority has an agreement with each air carrier that takes off from and lands at Toronto Pearson whereby air carriers agree to collect passenger fees from each of their enplaned passengers on behalf of the authority. The airports authority commits in these agreements to use passenger-fee revenues for capital programs, including associated debt service. According to representatives from the Greater Toronto Airports Authority, the airport has not increased its passenger fees since 2012, as the increased volume of passengers has generated sufficient revenue for the airport. In 2018, Toronto Pearson generated CAD $460 million (USD $355 million) from passenger fees, in the form of Airport Improvement Fees, according to Toronto Pearson’s 2018 annual report.
Other airport-generated revenues: Toronto Pearson also generates revenue from other sources such as airport concessions, rental properties, car rentals, parking, and advertising. The Greater Toronto Airports Authority has more flexibility in how it can use this category of revenue, including for operating costs and for capital needs. According to the Greater Toronto Airports Authority’s 2018 annual report, the long-term objective is to increase the proportion of total revenues generated through commercial streams at the airport—from non-aeronautical sources such as parking, retail, and dining concessions—to over 40 percent. In recent years, commercial revenues have been the fastest growing component of the airport authority’s revenues. In 2018, Toronto Pearson generated about CAD $502 million (USD $387 million) in other airport-generated revenue, according to Toronto Pearson’s 2018 annual report.
Debt financing: Canadian airports can generally use equity or raise debt in capital markets. In 2018, Toronto Pearson obtained CAD $500 million (USD $386 million) in bond financing.
According to representatives from the Greater Toronto Airports Authority, the authority issues bonds to fund existing bond maturities and capital programs that exceed cash from operations. Revenue from passenger fees, in the form of Airport Improvement Fees, is used to service debt for infrastructure projects. Projects that cost less than CAD $400 million (USD $301 million) are funded with passenger-fee revenues, airline rates and charges, and other airport-generated revenues, according to these representatives.
Factors Considered when Setting Airline and Passenger Charges
Representatives from the Greater Toronto Airports Authority stated that the structure that Toronto Pearson has in place allows the airport to increase airline rates and passenger charges only when needed to generate sufficient revenue to cover the costs of planned infrastructure. According to these representatives, charges are assessed annually but change only if there is a material imbalance between required cost recoveries and charges. To establish airline rates and charges and passenger fees, Toronto Pearson uses the “dual till” model, whereby airline and passenger charges are set to recover aeronautical costs only. This contrasts with the “single till” model, where all airport activities (including aeronautical and non-aeronautical) are taken into consideration when determining the level of airport charges. Representatives from the Greater Toronto Airports Authority stated that Toronto Pearson is unique among Canadian airports in doing so.
Coordination with Airlines on Capital Development
As part of Toronto Pearson’s passenger-fee agreements with airlines, the Greater Toronto Airports Authority must consult with airlines and obtain approval for certain capital projects in excess of CAD $50 million (USD $38 million).
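The difference between the dual till and single till models described above can be illustrated with a short Python sketch. The cost, profit, and passenger figures below are hypothetical, not Toronto Pearson's actual financials.

```python
def required_charge_per_passenger(aero_costs, commercial_profit, passengers, till="dual"):
    """Per-passenger charge needed to recover costs under each till model.
    Dual till: charges are set to recover aeronautical costs alone.
    Single till: commercial (non-aeronautical) profit offsets aeronautical
    costs first, so the regulated charge can be lower. Inputs hypothetical."""
    cost_base = aero_costs if till == "dual" else aero_costs - commercial_profit
    return max(cost_base, 0) / passengers

# Hypothetical airport: CAD $900 million in aeronautical costs,
# CAD $300 million in commercial profit, 45 million passengers.
dual_charge = required_charge_per_passenger(900e6, 300e6, 45e6)              # 20.0
single_charge = required_charge_per_passenger(900e6, 300e6, 45e6, "single")  # about 13.33
```

Under the same hypothetical inputs, the single till charge is lower because commercial profit subsidizes the aeronautical cost base.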
Approval is sought through an airline consultation committee that the airport authority established to include representatives from airlines that provide service at Toronto Pearson. If the consultative committee does not approve a project, the airport must put the project on hold for 1 year. After the 1-year hold, the project may be initiated. According to representatives from the Greater Toronto Airports Authority, if the airport has a major capital project planned, the authority keeps the airline community informed. In particular, the airport communicates regularly with the two major Canadian airlines, which make up 70 percent of the airport’s service volume, to keep them informed of planned infrastructure improvements.
Recent and Planned Infrastructure Investments at Toronto Pearson Airport
In 2018, the Greater Toronto Airports Authority completed several infrastructure improvements at Toronto Pearson, according to Toronto Pearson’s 2018 annual report (see fig. 14). Some of these improvements relate to ongoing projects that the airport initiated in prior years. For example, the airports authority is upgrading and expanding its capacity at Terminal 1 to accommodate narrow-body aircraft operations in response to increased passenger traffic. During 2018, the authority expended CAD $16 million (USD $12 million) for this project. In addition, the airport expended about CAD $13 million (USD $10 million) in 2018 to make improvements at Terminal 3, which is intended to enhance passenger experience and improve passenger flow. The Greater Toronto Airports Authority also expended about CAD $23 million (USD $18 million) on Phase 1 of its baggage-handling improvement project, which will add baggage-handling capacity and is intended to improve system reliability.
According to representatives from the Greater Toronto Airports Authority, the authority has developed a 5-year capital plan that includes several projects intended to increase capacity and improve passenger flow at the airport. For example, the airports authority has begun the design phase for construction of a new concourse at Terminal 1 and an expansion project at that terminal. The airports authority is also in the design phase for constructing an integrated Regional Transit and Passenger Centre, and replacement of the baggage systems. The airport also plans to add more retail space and provide U.S. Customs and Border Protection space in the terminal to reduce international passengers’ connecting time by improving passenger flow. According to representatives from the Greater Toronto Airports Authority, the estimated cost of its 5-year capital plan is CAD $3.46 billion (USD $2.61 billion), which will allow the airport authority to handle 65 million passengers.
Appendix II: Objectives, Scope, and Methodology
This report discusses (1) levels of federal and other funding that U.S. airports received from fiscal years 2013 through 2017 for infrastructure investments, (2) projected costs of planned infrastructure investments at U.S. airports from fiscal years 2019 through 2023, and (3) any challenges selected airports face in obtaining infrastructure funding and financing. We also examined how selected airports in other countries fund and finance airport infrastructure investments. This information is presented in appendix I. To obtain information for all objectives, we reviewed relevant literature, including academic and industry literature on airport funding and financing in the United States and in other countries. We also reviewed laws, regulations, agency guidance, and prior GAO reports related to this topic. To determine what federal and other funding U.S.
airports received from fiscal years 2013 through 2017 for infrastructure investments, we obtained and analyzed information on the main sources of airport funding which included: funding from federal Airport Improvement Program (AIP) grants and state grants, revenue from passenger facility charges (PFC), airport-generated revenue, capital contributions, and amounts of financing airports received from bond proceeds and other debt financing. Because comprehensive data on airport capital spending is not available, we framed our research objective to examine funding received rather than how much airports expended on infrastructure projects. We selected fiscal years 2013 through 2017 because it was the most recent 5-year period where complete data were available. For each funding source, we determined average annual funding amounts for fiscal years 2013 through 2017 for all U.S. national system airports, as well as separately for larger airports and smaller airports. We defined larger airports to include large and medium hubs, and smaller airports to include small hubs, non-hubs, non-primary commercial service, reliever, and general aviation airports. We also analyzed how the amounts of funding received have changed from fiscal years 2013 through 2017. We presented all funding amounts in 2017 dollars. We obtained funding data from various sources, as follows:
AIP funding: To determine how much funding airports received from federal AIP grants, we obtained and analyzed data from the Federal Aviation Administration’s (FAA) System of Airports Reporting (SOAR) database on AIP grants awarded by FAA during our study period. This database includes detailed information about AIP grants and PFC applications, approvals, and collections. We analyzed the AIP grant data to determine total annual funding by airport type for fiscal years 2013 through 2017, as well as average annual funding by airport type and project type over the same time period.
State grants: Data on state funding for fiscal years 2013 through 2017 are available but are not complete, and we were not able to obtain additional information to verify the data’s reliability. As part of our 2015 review of airports’ infrastructure funding, we conducted a survey in 2014 with the assistance of the National Association of State Aviation Officials (NASAO), to determine how much funding airports received from state grants for fiscal years 2009 through 2013. Results from this survey were reported in our 2015 report and in NASAO’s August 2015 report, NASAO State Aviation Funding and Organizational Data Report. For this review, we interviewed NASAO officials and they confirmed that the level of state funding has largely remained unchanged since the 2015 study. Therefore, we incorporated information from the 2015 survey into our current report.
PFCs: To determine how much funding airports received from PFCs, we obtained and analyzed data from the SOAR database on PFC collection amounts at all airports that collected PFCs during fiscal years 2013 through 2017. Because we were unable to obtain data on airports’ expenditures of PFC revenues by project type from fiscal years 2013 through 2017, we instead obtained data on airports’ FAA-approved applications from 1992 through February 2019 showing the types of projects on which airports intended to spend their PFC revenue.
Airport-generated revenue: For airport-generated revenue, which we defined as revenue available for capital development, we obtained and analyzed airport financial data from FAA’s Certification Activity Tracking System (CATS). Examples of airport-generated revenue include aeronautical revenue (including revenue earned from leases with airlines and landing fees) and non-aeronautical revenue (such as earnings from airport terminal concessions and vehicle parking fees).
We analyzed the financial data to determine the amount of airport-generated revenue that airports had available for infrastructure investments, as well as amounts by airport type, for each fiscal year 2013 through 2017. We calculated airport-generated revenue as an airport’s total operating revenue minus its operating expenses (measured before depreciation), plus interest income. For data precision, we used a different methodology to calculate airport-generated revenue than that of our 2015 report on airport finance by not subtracting an estimated amount of PFCs used to pay for interest expense. As a result, airport-generated revenue reported here is not comparable to airport-generated revenue in our 2015 report.
Airport capital contributions: To determine how much funding airports received from capital contributions, we analyzed the same set of airport financial data from CATS that we used for airport-generated revenue, discussed above. We used the line item for capital contributions (8.5 Capital Contributions) in CATS for our analysis.
Airport bonds: In addition to the sources of airport funding listed above, this report also separately discusses information on airport bonding—a common financing mechanism for some airports. We analyzed FAA financial data from the CATS database on the amounts of financing that airports received from bond proceeds (line item 14.1). We also interviewed representatives at two ratings agencies—Fitch Ratings and Moody’s Investors Service—and a representative from Piper Sandler (formerly Piper Jaffray) to obtain their perspectives on the availability of airport bond financing. We assessed the reliability of FAA’s CATS data on airport financial information and SOAR data by reviewing documentation about the data and the systems that produced these data. We also interviewed FAA officials knowledgeable about the collection, maintenance, and security of these data.
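The airport-generated revenue calculation described above can be expressed as a short Python sketch. The dollar figures are hypothetical, and the parameter names are illustrative rather than actual CATS line items.

```python
def airport_generated_revenue(operating_revenue, operating_expenses_before_depreciation, interest_income):
    """The measure described above: total operating revenue minus operating
    expenses taken before depreciation, plus interest income. Unlike the
    2015 report's methodology, no estimated PFC-funded interest expense
    is subtracted."""
    return operating_revenue - operating_expenses_before_depreciation + interest_income

# Hypothetical airport: $500M operating revenue, $380M operating expenses
# before depreciation, and $5M interest income.
available = airport_generated_revenue(500_000_000, 380_000_000, 5_000_000)
```

With these made-up inputs, the airport would have $125 million of airport-generated revenue available for capital development.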
We also reviewed documentation that relied on the FAA’s CATS and SOAR data and that was collected for our prior review of airport infrastructure funding and financing for a similar purpose. We determined that these data were sufficiently reliable to report funding and financing that airports received from AIP, PFCs, airport-generated revenue, capital contributions, and bond revenue for fiscal years 2013 through 2017. To determine the projected cost of airports’ planned capital development from fiscal years 2019 through 2023, we combined (1) FAA’s most recent estimate for AIP-eligible development from its Report to Congress National Plan of Integrated Airport Systems (NPIAS) 2019-2023, released in September 2018, and (2) Airports Council International – North America’s (ACI-NA) most recent estimate for AIP-ineligible development for the same time period, as reported in its February 2019 report, Terminally Challenged: Addressing the Infrastructure Funding Shortfall of America’s Airports. We developed estimates of infrastructure development costs for all national system airports, as well as by airport type. We also presented estimates of AIP-eligible development costs by project type; these estimates were based on estimates in the NPIAS report. We did not, however, present estimates of AIP-ineligible data by project type because ACI-NA’s data do not readily support such a presentation. We presented all dollar amounts in 2017 dollars. To identify changes in airports’ project costs of planned infrastructure investments, we also reviewed FAA’s NPIAS report for fiscal years 2017–2021 and ACI-NA’s report on airports’ capital development needs for fiscal years 2017–2021, and we compared the estimates in those reports to the fiscal years 2019–2023 estimates. ACI-NA’s estimates of U.S. airports’ infrastructure project costs differ from those of FAA’s due to scope, methodology, and other reasons.
For example, the ACI-NA cost estimate includes estimates for AIP-eligible and AIP-ineligible projects, while FAA only includes AIP-eligible projects as required by statute. ACI-NA's estimate also includes projects that have already identified funding sources as well as those that have not. By comparison, FAA only includes projects without identified funding. The methodologies that FAA and ACI-NA use to develop their estimates also differ. For example, FAA developed its estimates for the fiscal year 2019 through 2023 time period by reviewing information from airport plans that were available through 2017. According to ACI-NA's report on airports' capital development needs for 2019–2023, its cost estimates for fiscal years 2019–2023 are based on a survey of 86 airports completed in 2018; these airports accounted for 90 percent of all enplanements in 2017. ACI-NA survey respondents were asked to report all infrastructure costs, including interest, construction and management costs, architectural and engineering costs, and contingency costs. FAA's estimate does not include interest and contingency costs. We reviewed FAA documentation describing the methodology for producing the NPIAS cost estimate from airport-planning documents, and interviewed FAA officials. We determined FAA's estimate of AIP-eligible planned infrastructure costs to be reliable for the purposes of our report. Similarly, we reviewed ACI-NA's methodology for developing its report on airports' capital development needs for 2019–2023 and interviewed ACI-NA representatives about their methodology for developing this estimate. We determined that ACI-NA's response rates, the shares of enplanements represented by the airports that responded, and ACI-NA's estimation methodology were sufficiently reliable for the purposes of presenting an estimate of planned infrastructure costs for AIP-ineligible projects. 
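As noted above, all dollar amounts were presented in 2017 dollars. Restating a nominal amount in constant 2017 dollars is a simple deflator calculation; the sketch below uses hypothetical price-index values, not the actual deflator series underlying the report's figures.

```python
# Sketch of restating nominal dollars in constant 2017 dollars using a
# price index. The index values below are hypothetical placeholders,
# not the actual deflator series used in the report.

PRICE_INDEX = {2017: 100.0, 2019: 104.0}  # hypothetical index levels

def to_2017_dollars(nominal_amount, year):
    """Deflate an amount in `year` dollars to constant 2017 dollars."""
    return nominal_amount * PRICE_INDEX[2017] / PRICE_INDEX[year]

# Hypothetical: $22 billion in 2019 dollars restated in 2017 dollars
print(round(to_2017_dollars(22.0, 2019), 2))  # 21.15
```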
To obtain information about any challenges airports face in obtaining airport funding and financing, we reviewed documents from and conducted interviews with representatives from ACI-NA and airport officials from 19 selected U.S. airports. We also interviewed representatives from the American Association of Airport Executives. Through our document review and interviews, we obtained information about the sources of funding and financing that airports currently receive, planned infrastructure projects, and challenges to obtaining funding and financing for these projects. We selected airports representing different hub sizes; airports with the highest planned development costs as reported in FAA's NPIAS fiscal years 2019–2023 report; airports with increasing and decreasing enplanements in calendar years 2013 through 2017; and airports that were mentioned in our literature review and that were recommended by FAA and other stakeholders whom we interviewed. We also considered each airport's geographic location. We also visited three locations from our selected airports to discuss and view examples of airports' planned infrastructure projects. The airports we visited included Seattle-Tacoma International Airport, Spokane International Airport, and Paine Field Airport. See table 4 for a list of all the airports where we conducted interviews. We also interviewed representatives from Airlines for America (A4A)—the U.S. airline association—and representatives from eight selected U.S. airlines to obtain their views on airport infrastructure funding and financing issues. We selected airlines with the highest passenger traffic, as measured by revenue passenger miles. In addition, we selected airlines representing legacy and low-cost carriers, and airlines that provide service outside the United States. 
Selected airlines that we interviewed were: Alaska Airlines, American Airlines, Delta Air Lines, Frontier Airlines, JetBlue Airways, Southwest Airlines, Spirit Airlines, and United Airlines. Collectively, the selected airlines transported about 90 percent of total U.S. passenger traffic in 2018. Because we used a nonprobability sample of airports and airlines to interview, our interviews are not generalizable. Last, to obtain information about how foreign airports fund and finance infrastructure development, we reviewed documents from and conducted interviews with international airport associations, international aviation-industry stakeholders, and representatives from four of the five foreign airports that we selected as case studies. These airports included: Toronto Pearson International Airport (Canada); Frankfurt Airport (Germany); Heathrow Airport (United Kingdom); Helsinki Airport (Finland); and Changi Airport (Singapore). Representatives from Frankfurt Airport provided us with written responses and documents for our review. See table 5 for a list of international organizations and foreign airports where we conducted interviews. For each of the five selected foreign airports, we collected information about airport infrastructure funding at the airports, including the sources of funding and financing the airports use, types of projects the airport has planned, and factors they consider when setting airport charges, among other topics. In addition, for each of our case studies, we presented financial information in the appropriate foreign currency as well as in U.S. Dollars (USD) in parentheses. We converted foreign currency information to U.S. Dollars using Federal Reserve data on foreign exchange rates. For 2018 data, we used the Federal Reserve 2018 annual rate. For 2019 data, we calculated a Federal Reserve 2019 annual rate. The primary criterion that we used to select foreign airports as case studies was the ownership model of the airport. 
To ensure our selection included a mix of ownership models, we selected airports that fit each of the following ownership models:

Government owned and operated

Government owned and privately operated

Partially privatized

Not-for-profit, private corporation

As secondary criteria, we selected foreign airports with the highest passenger traffic among international airports, airports that had service by U.S. carriers, and airports located in regions where it would be feasible to obtain information and interview officials. Because we used a nonprobability sample of foreign airports to interview, our interviews are not generalizable. While our case studies of foreign airports and their experiences with funding and financing airport infrastructure are not generalizable to all foreign airports, they provide a range of examples of how foreign airports fund and finance airport infrastructure. We conducted this performance audit from September 2018 to February 2020 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Jean Cook and Susan Zimmerman (Assistant Directors); Maria Mercado (Analyst-in-Charge); Pin-En Annie Chou; Jessica Du; Sharon Dyer; David Hooper; Delwen Jones; Grant Mallie; Josh Ormond; Pam Snedden; Kelly Rubin; and Rebecca Rygg made key contributions to this report.
Why GAO Did This Study U.S. airports are important contributors to the U.S. economy, providing mobility for people and goods, both domestically and internationally. About 3,300 airports in the United States are part of the national airport system and eligible to receive federal AIP grants to fund infrastructure projects. To help fund these projects, certain categories of airports are also authorized by federal law to collect PFCs, which passengers pay when buying tickets. GAO was asked to examine airport-funding sources and planned infrastructure projects. This report examines, among other issues: (1) levels of federal and other funding that U.S. airports received from fiscal years 2013 through 2017 for infrastructure projects, (2) projected costs of planned infrastructure investments at U.S. airports from fiscal years 2019 through 2023, and (3) any challenges selected airports identified in obtaining projects' funding and financing. GAO analyzed airport-funding data for AIP grants, PFCs, airport-generated revenue, and other sources for fiscal years 2013–2017—the most recent years for which data were available—and FAA's and Airports Council International – North America's cost estimates of airports' planned infrastructure projects for fiscal years 2019–2023. GAO also interviewed FAA officials; representatives from airline and airport associations, and bond-rating agencies; officials from 19 selected airports representing airports of different sizes and with the highest planned development costs, among other things; and representatives from eight selected airlines, selected based on factors such as passenger traffic. What GAO Found From fiscal years 2013 through 2017, U.S. airports received an average of over $14 billion annually for infrastructure projects. The three largest funding sources are below: Funding from federal Airport Improvement Program (AIP) grants has remained relatively constant, at an annual average of $3.2 billion. 
Smaller airports (small hub, non-hub, and general aviation) collectively received more AIP funding than larger airports (large and medium hub). Revenue from federally authorized passenger-facility charges (PFC), a per-passenger fee charged at the ticket's point of purchase, increased by 9 percent, with an annual average of $3.1 billion. Increases in passengers and PFC revenue at larger airports contributed to this increase. Airport-generated revenue (e.g., concessions and airline landing fees) increased by 18 percent, with an annual average of $7.7 billion. While both larger and smaller airports experienced increases in these revenues, the larger airports made up 92 percent ($7.1 billion) of these revenues. In addition to these sources, some airports obtained financing by issuing bonds, secured by airport revenue or PFCs. According to Federal Aviation Administration (FAA) data, larger airports were able to generate more bond proceeds than smaller airports in part because larger airports are more likely to have a greater, more certain revenue stream to repay debt. Airports' planned infrastructure costs for fiscal years 2019 through 2023 are estimated to average $22 billion annually (in 2017 dollars)—a 19 percent increase over prior estimates for fiscal years 2017 through 2021. These costs are expected to increase in part because airports are planning to invest in more terminal projects. For example, cost estimates for AIP-eligible terminal projects increased about 51 percent when compared to FAA's prior 5-year estimate. FAA and airport association representatives stated that terminal projects can be more expensive than other projects because of the scale of the improvements, which can include renovating terminals to repair aging facilities and accommodate larger aircraft and growth in passengers. Officials from GAO's 19 selected airports cited several challenges to funding infrastructure projects. 
For example, officials stated that the funding and revenue they receive from combined sources may not be sufficient to cover the costs of planned infrastructure projects. The officials also raised concerns about being able to finance future airport-infrastructure projects because they have already obligated their current and future PFCs to service debt on completed and ongoing infrastructure projects. According to FAA data, in fiscal years 2013 through 2017, airports paid a total of $12 billion—or 78 percent of total PFC revenues collected—for debt service. Bond-rating agencies, however, continue to give airports high or stable ratings, and rating agencies' representatives stated that airports' access to capital markets continues to remain favorable. Some airport officials stated that to address funding challenges, they have deferred some needed infrastructure investments or completed projects in phases, steps that increased construction times and costs.
Background Whistleblower Protections OSC is an independent federal investigative and prosecutorial agency. Its primary mission is to safeguard the merit system in federal employment by protecting employees and applicants for federal employment from prohibited personnel practices, especially reprisal for whistleblowing. OSC reviews disclosures of wrongdoing within the federal government from current federal employees, former employees, and applicants for federal employment. These individuals, known as whistleblowers, make disclosures of alleged wrongdoing to OSC that the employee reasonably believes evidences either (1) a violation of law, rule, or regulation; (2) gross mismanagement; (3) gross waste of funds; (4) abuse of authority; (5) a substantial and specific danger to public health or safety; or (6) censorship related to research, analysis, or technical information. If a whistleblower believes his or her agency took, threatened to take, or did not take a personnel action because of a protected disclosure, the whistleblower may file a retaliation complaint with OSC. An employee may file a retaliation complaint with OSC even if the protected disclosure was made to another body such as an Inspector General’s office rather than OSC. Various statutory provisions have established protections for federal employee whistleblowers over the years. The Civil Service Reform Act of 1978 provided the first statutory whistleblower protections for disclosures of violations of laws, mismanagement, or gross waste of funds for federal employees, former employees, and applicants for employment. The 1978 act established both the Merit Systems Protection Board (MSPB) and OSC and placed OSC within MSPB. 
Under the act, OSC was authorized to review allegations of wrongdoing within federal agencies, to investigate and obtain corrective action over allegations of prohibited personnel practices, including whistleblower retaliation, and to initiate disciplinary actions against employees who commit prohibited personnel practices, among other things. Later, to strengthen protections for those who claim whistleblower retaliation, Congress passed the Whistleblower Protection Act of 1989. The 1989 act separated OSC from MSPB, making OSC an independent agency. The act also created the individual right of action, allowing whistleblowers to bring their appeals to MSPB after exhausting remedies at OSC. In 2012, the Whistleblower Protection Enhancement Act clarified the scope of protected whistleblowing under the Whistleblower Protection Act and mandated broader outreach to inform federal employees of their whistleblower rights, among other things. Further, the Dr. Chris Kirkpatrick Whistleblower Protection Act of 2017, among other items, enhanced disciplinary penalties for supervisors who retaliate against whistleblowers. Probationary Status Employees Federal employees in the civil service are required to serve a period of probation when they begin serving initial appointments. These periods are typically for 1 to 2 years, and they allow an agency to evaluate the employee before the appointment becomes final. Our prior work notes that the probationary period provides a way for agencies to dismiss poorly performing employees or those engaging in misconduct before the process to do so becomes more complex and lengthy. In particular, we concluded that the probationary period could be more effectively used by agencies, which in turn could help agencies deal with poor performers more effectively. 
According to MSPB, the probationary period, if used fully, is one of the most helpful assessment tools available for supervisors to determine an individual's potential to fulfill the requirements of the specific position. During the probationary period, the employee is still technically considered an applicant for employment. As such, probationary employees do not have the same protections against adverse personnel actions as other employees. Prior to firing a probationary employee for poor job performance or misconduct, an agency does not need to afford the same procedural protections required before removing a non-probationary employee. Therefore, it is reasonable to expect that probationary employees will be terminated at higher rates than permanent employees. Probationary employees also lack the same rights to appeal adverse actions, such as demotions or removals, to the MSPB that other federal employees have. However, probationary employees do have some legal protections. For example, probationary employees may file a complaint with OSC if they believe a personnel action such as reassignment, demotion, or removal was retaliation for whistleblowing. If OSC determines there are reasonable grounds to believe that retaliation has occurred, it may seek corrective action, including filing a petition with the MSPB. Additionally, a probationary employee who has filed a complaint with OSC may subsequently file an individual right of action with MSPB. Probationary employees also may appeal to MSPB if they believe they have been fired for partisan political reasons or because of discrimination based on their marital status. 
Probationary employees also have the right to file a complaint of discrimination with their agencies and subsequently file an appeal of a final agency decision with the Equal Employment Opportunity Commission or a civil action in federal district court if they believe that they have been discriminated against based on their race, color, religion, sex, national origin, age, disability, or genetic information. Existing Data are Insufficient to Determine if the Rate of Filing Whistleblower Disclosures or Retaliation Complaints Varies by Probationary Status The average annual total of probationary and permanent federal employees from fiscal years 2014 through 2018 was approximately 1.9 million. During the same time period, 14,043 federal employees filed whistleblower disclosures, whistleblower retaliation complaints, or both. That is, an average of roughly 2,800 employees—about 0.15 percent of the federal workforce—filed complaints each year. For whistleblower disclosure complaints, whistleblower retaliation complaints, or both over this 5-year period, we estimate that probationary employees filed between 6.6 percent and 18.2 percent of complaints, while permanent employees filed between 76.8 percent and 93.4 percent of complaints. Because existing data are insufficient to determine probationary status of employees for more than 18 percent of each year’s complaints, it is not possible to determine whether probationary employees file at lower, comparable, or higher rates than their prevalence (about 13.5 percent, on average, across this time period) in the overall employee population. Figure 1 shows how many employees we could determine through matching were in probationary and permanent status when they filed whistleblower disclosure or retaliation complaints, along with the numbers of unmatched complaints for fiscal year 2018. The pattern is similar for the other years we examined; estimates for each year are available in appendix II. 
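The ranges above reflect uncertainty from complaints whose filers could not be matched to personnel records. One common way to express such uncertainty is to compute a lower bound that assigns none of the unmatched complaints to a group and an upper bound that assigns all of them to it. The sketch below illustrates this bounding logic with hypothetical counts; it is a stylized illustration, not GAO's exact estimation method.

```python
def share_bounds(matched_in_group, matched_total, unmatched):
    """Bounds on a group's share of all complaints when some complaints
    cannot be matched to a probationary or permanent status.

    Lower bound: none of the unmatched complaints belong to the group.
    Upper bound: all of the unmatched complaints belong to the group.
    """
    total = matched_total + unmatched
    low = matched_in_group / total
    high = (matched_in_group + unmatched) / total
    return low, high

# Hypothetical: 100 matched probationary filers, 800 matched filers in
# total, and 200 unmatched complaints.
low, high = share_bounds(100, 800, 200)
print(f"{low:.1%} to {high:.1%}")  # 10.0% to 30.0%
```

An interval estimate of this kind can still support firm conclusions: for example, when one group's lower bound exceeds another group's upper bound, the ordering holds under every possible allocation of the unmatched records.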
Estimates Suggest Probationary Employees Who Filed Complaints Were Consistently Terminated at Higher Rates than Permanent Employees Who Filed, and at Higher Rates than Employees Government-wide Overall, probationary employees—whether or not they have filed a complaint with OSC—are terminated at a higher rate than permanent employees, which is consistent with expectations that determining the suitability of employees for the particular position is a major purpose of the probationary period. In fiscal year 2018, 1.1 percent of probationary employees were terminated, regardless of whether they filed a whistleblower disclosure or retaliation complaint. In the same year, 0.3 percent of permanent employees were terminated, regardless of filing status. These percentages were consistent across the years we studied. As discussed below, estimated termination rates for permanent and probationary employees who filed either or both types of complaints we examined consistently exceeded these government-wide rates. Specifically, among permanent employees who filed, estimated termination rates could be anywhere from 1.7 to 17.1 percentage points higher than the 0.4 percent average for all permanent employees over this period. Among probationary employees who filed, estimated termination rates could be from 5.3 to 72.6 percentage points higher than the 1.3 percent average for these employees government-wide. Whistleblower disclosures. Estimated termination rates among employees who filed whistleblower disclosures from fiscal years 2014 to 2018 were higher than termination rates among all federal employees. This applies to both probationary and permanent employees. Specifically, estimated termination rates for probationary employees who filed were higher than estimated termination rates for permanent employees who filed. 
For example, as shown in table 1, in fiscal year 2018: The lowest estimated rate (minimum) of termination among probationary employees who filed whistleblower disclosures was 10.1 percent, compared to the overall 1.1 percent termination rate for all probationary employees. The lowest estimated rate (minimum) of termination among permanent employees who filed whistleblower disclosures was 2.9 percent, compared to the overall 0.3 percent termination rate for all permanent employees. Taking unmatched complaints into account, we estimated that the termination rate for probationary employees who filed whistleblower disclosures could be any percentage from 10.1 to 46.9 percent. Taking unmatched complaints into account, we estimated that the termination rate for permanent employees who filed whistleblower disclosures could be any percentage from 2.9 to 5.2 percent. The minimum estimated termination rate for probationary employees who filed whistleblower disclosures (10.1 percent) exceeds the maximum estimated rate for permanent employees who filed whistleblower disclosures (5.2 percent). Whistleblower retaliation complaints. We found that the lowest possible rates (minimums) of termination for employees who filed whistleblower retaliation complaints were higher than termination rates among all federal employees, both for probationary and permanent employees. Specifically, estimated termination rates for probationary employees who filed were higher than estimated termination rates for permanent employees who filed. For example, as shown in table 2, in fiscal year 2018: The lowest estimated rate (minimum) of termination for probationary employees who filed retaliation complaints was 17.4 percent, compared to the overall 1.1 percent termination rate for all probationary employees. 
The lowest estimated rate (minimum) of termination for permanent employees who filed retaliation complaints was 5.5 percent, compared to the overall 0.3 percent termination rate for all permanent employees. Taking unmatched complaints into account, we estimated that the termination rate for probationary employees who filed whistleblower retaliation complaints could be any percentage from 17.4 to 69.4 percent. Taking unmatched complaints into account, we estimated that the termination rate for permanent employees who filed retaliation complaints could be any percentage from 5.5 to 9.9 percent. The minimum estimated termination rate for probationary employees who filed retaliation complaints (17.4 percent) exceeds the maximum estimated rate for permanent employees who filed retaliation complaints (9.9 percent). Both whistleblower disclosures and retaliation complaints. For the category of employees who filed both whistleblower disclosures and retaliation complaints, termination rates were higher than termination rates among all federal employees, both for probationary and permanent employees. Specifically, estimated termination rates for probationary employees who filed were higher than estimated termination rates for permanent employees who filed. For example, as shown in table 3, in fiscal year 2018: The lowest estimated rate (minimum) of terminations among probationary employees who filed both whistleblower disclosures and retaliation complaints was 14.1 percent, compared to the overall 1.1 percent termination rate for all probationary employees. The lowest estimated rate (minimum) of terminations among permanent employees who filed both types of complaints was 7.8 percent, compared to the overall 0.3 percent termination rate for all permanent employees. Taking unmatched complaints into account, we estimated that the termination rate for probationary employees who filed both types of complaints could be any percentage from 14.1 to 56.3 percent. 
Taking unmatched complaints into account, we estimated that the termination rate for permanent employees who filed both types of complaints could be any percentage from 7.8 to 13.2 percent. The minimum estimated termination rate for probationary employees who filed both a whistleblower disclosure and a retaliation complaint (14.1 percent) exceeds the maximum estimated rate for permanent employees who filed both types of complaints (13.2 percent). As previously discussed, probationary employees being terminated at a higher rate than permanent employees is consistent with expectations, given that determining the suitability of employees for the particular position is a major purpose of the probationary period. However, the higher rate of termination for filers generally, and the higher estimated rates for probationary employees specifically, suggests a potential relationship between filing and terminations that may disproportionately impact probationary employees. As stated earlier, we did not determine whether the disclosures and complaints filed had merit, whether termination actions were justified, or whether the terminations occurred before or after the filing of the whistleblower disclosure or retaliation complaint. As such, further examination would be needed to fully understand these relationships. OSC Does Not Require Filers to Identify Probationary Status OSC requires federal employees to use OSC Form-14 to submit a complaint alleging a prohibited personnel practice or a disclosure. Complainants begin the process by selecting a checkbox based on their particular complaint or disclosure. Depending on their selections, complainants are asked to provide additional information. Data fields on the form that are marked with an asterisk are mandatory. OSC instructions state that the agency cannot process forms lacking necessary information. OSC Form-14 includes a non-mandatory data field that asks whether the complainant is currently a probationary employee. 
Because it is not a required field, complainants may choose not to provide that information. According to OSC, it has designated only a limited amount of requested information as mandatory. OSC officials said that to avoid creating impediments for employees to file complaints, mandatory fields are limited to the information that is necessary for processing a complaint. In August 2019, according to OSC officials, OSC transitioned to a new electronic Case Management System (eCMS). This new system's electronic version of the complaint form includes a data field as part of the question about employee status, where employees can check off probationary status for OSC to capture. According to OSC, when complainants provide this information, the agency is able to track the information in eCMS. OSC officials estimated that some filers voluntarily provide information on probationary status; however, the officials could not specify to what extent filers provide that information in their initial filings, or the extent to which these data are collected during processing of the case. OSC's mission is to "safeguard the merit system by protecting federal employees and applicants from prohibited personnel practices, especially reprisal for whistleblowing." Additionally, OSC's 2017-2022 strategic plan includes an objective to ensure agencies provide timely and appropriate outcomes for referred whistleblower disclosures. One of the agency's strategies to help achieve that objective is to monitor all whistleblower disclosures and referrals to agencies to identify trends or systemic challenges. Further, Standards for Internal Control in the Federal Government states that management should use quality information to achieve the entity's objectives. OSC officials stated that OSC's routine administration of disclosures and complaints allows them to identify trends. 
However, this process does not consistently use standard, structured data to identify trends, but rather relies on the personal experience of investigators. Without consistent quality information, including information on probationary status, OSC cannot have reasonable assurance that it is adequately identifying trends and challenges. OSC told us that because of limited resources it currently has no plans to conduct data studies or analyses of employees in their probationary period who file whistleblower claims. As previously discussed, the higher rates of termination we found for complainants, and in particular for probationary employees, suggest a potential relationship that warrants further examination. However, without consistent identification of probationary employees who file whistleblower claims, OSC will continue to lack complete data that would enable this analysis and support OSC's goal of identifying trends and systemic challenges. Collecting and maintaining such information on every claimant, which could now be more easily done under eCMS, would provide OSC or other entities the ability to analyze termination rates or other issues related to a whistleblower's probationary status. Having more complete information on trends and challenges could help OSC to ensure that its resources are being distributed to support its mission. Conclusions Probationary employees, by definition, are relatively new to their positions and are thus uniquely vulnerable to retaliation from employers due to the limited protections afforded them. Our estimates demonstrate that employees who file whistleblower disclosures and complaints of retaliation are terminated at higher rates than employees government-wide, and suggest that these differences may be more pronounced for probationary employees. 
OSC has roles and responsibilities related to understanding key trends and challenges for whistleblowers, and could potentially further investigate whether these differences indicate a particular risk for probationary employees. However, OSC is not collecting data on probationary status that would enable it to do so. Without consistent information on probationary status, OSC is unable to properly analyze the effect of that status on those who file whistleblower disclosures, retaliation complaints, or both; and thus, cannot have reasonable assurance there is equal treatment of probationary employees. Recommendation for Executive Action The Office of Special Counsel should require federal employees who are filing whistleblower disclosures or retaliation complaints to identify on their complaint forms their status as a permanent or probationary employee. Agency Comments and Our Evaluation We provided a draft of this report to OSC for review and comment. In its written comments, reproduced in appendix III, OSC disagreed with our conclusions and recommendation. While we continue to believe that our conclusions and recommendation are fully supported by the evidence—as discussed below—we made minor clarifications to our report to more clearly state the nature of our findings in response to OSC's comments. OSC also provided technical comments, which we incorporated as appropriate. In its written comments, OSC expressed a concern that our report overreaches. OSC stated that our report appears to draw its conclusions based on correlative instead of causative data. Specifically, OSC stated that our report appears to connect the expected greater rate of termination of probationary employees to whistleblower retaliation, based on correlative data and without taking into account key factors such as justification for the termination, timing in relation to the disclosure or the filing of a complaint, or the merit of the individual's complaint. 
Absent this type of crucial, detailed analysis that could help determine causation, OSC stated that few, if any, conclusions can be drawn regarding alleged retaliation experienced by probationary employees. As stated in our draft report, and noted by OSC, our estimates demonstrate that employees who file whistleblower disclosures and complaints of retaliation are terminated at higher rates than employees government-wide, and the estimates suggest that these differences may be more pronounced for probationary employees. Our draft report acknowledged that we did not assess certain factors: (1) whether the disclosures and complaints filed had merit, (2) whether the termination actions were justified, or (3) whether the termination actions occurred before or after the filing of the whistleblower disclosure or retaliation complaint. Because we did not control for these factors, we did not speculate about what caused these differences to occur or make causal claims about the relationship between probationary status and whistleblower retaliation. Instead, we stated that further examination and analysis would be needed to fully understand this indicator of potential risk. As we noted in the report, such analysis would require complete and accurate data on probationary status—data which OSC does not currently collect. Therefore, we recommended that OSC collect more complete data so that OSC could, if it chose, do exactly the type of crucial, detailed analysis that it says could help determine causation. Accordingly, we continue to believe that our recommendation for OSC to collect complete and accurate data on probationary status is warranted as such analysis is not possible without it. OSC also expressed a concern that our report appears to suggest that it perhaps may not be doing enough to protect probationary employees. OSC asserted that it already has reasonable assurance that it is appropriately protecting probationary employees from unlawful retaliation. 
We did not assess OSC’s review of the filed disclosures and complaints, and we made no claims or implications about whether OSC’s protection of whistleblowers is adequate or appropriate. Our report uses one specific outcome (terminations) as an example of an adverse employment action that could potentially signal retaliation. We did not present any findings about whether terminations were warranted, whether employees were appropriately protected, or any other information related to OSC’s handling of cases. We continue to believe, however, that OSC’s ability to run relevant data reports is constrained when the necessary data are not collected for the total population of filers. Without consistent quality information, including information on probationary status of all filers, OSC cannot have reasonable assurance that it is adequately identifying trends and challenges. Lastly, OSC stated that making employment status fields mandatory is onerous and unnecessary and that singling out probationary status from the list seems arbitrary and incomplete. The agency stated that the form includes the option for the individual to self-identify as a probationary employee, which OSC believes is sufficient. We do not believe that changing a field from optional to mandatory would place an undue burden on filers or OSC. We are sending copies of this report to relevant congressional committees, the Special Counsel and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2717 or jonesy@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. 
Appendix I: Objectives, Scope, and Methodology Our objectives were to (1) analyze the extent to which employees who filed whistleblower disclosures and retaliation complaints were in a probationary status, (2) analyze the extent to which these filings were associated with differences in termination rates, and (3) examine Office of Special Counsel (OSC) procedures related to probationary employees. We reviewed the Office of Special Counsel's OSC 2000 database design documentation and submitted questions to OSC officials to determine what data were available. OSC does not collect or maintain data that identify whistleblowers and retaliation complaints filed by employees in probationary status in OSC 2000. OSC officials stated that in late August 2019 OSC launched a new system called the electronic Case Management System (eCMS) to replace OSC 2000. We submitted a series of questions pertaining to how OSC will collect and maintain probationary status information of employees filing complaints in eCMS. These questions pertained to the functionality and reporting capability of eCMS, in addition to OSC's ability to conduct analysis of complainants who are in probationary status using eCMS. We obtained all closed whistleblower disclosure case data and closed prohibited personnel practices complaint data with allegations related to whistleblower retaliation from 2014 to 2018 from OSC's previous electronic case management system (OSC 2000). We also requested and obtained 2014 to 2018 OPM Enterprise Human Resources Integration (EHRI) data. OSC 2000 is a case management system, so it was necessary to use combinations of variables associated with complaints filed, such as first name, last name, agency, email address, and job series to identify individual employees. We analyzed employees from federal agencies that submit human resources information to OPM.
Factors such as complaints filed anonymously, name changes, and spelling variations could affect the precision of these counts of employees. However, because we are presenting these data in broad ranges throughout the report, these limitations do not likely affect our overall findings and message. After identifying employees in the OSC 2000 data, we then matched OSC 2000 data to OPM's EHRI data. This was necessary because the OSC 2000 database does not include the probationary status of people filing complaints with OSC. We started by matching unique name and agency combinations. If that was not sufficient, we attempted to match using variables such as state, job series, and employee work email address. We matched OSC 2000 data to EHRI data using case data from OSC 2000 and federal probationary status as of the end of the fiscal year date from EHRI. We acknowledge that matching using these dates may not be precise, but because we present our results in ranges, we do not believe a more precise matching of dates would have resulted in substantive differences in the results overall. We matched 82 percent of the complaints in OSC 2000 to employees in EHRI. Because it is not possible to determine the probationary status for unmatched cases, the rates of filing among matched cases may not precisely reflect the overall rates for all probationary employees. To account for this uncertainty, we estimated minimum and maximum rates of filing for permanent and probationary employees, and present these ranges in addition to the specific matched rates. Further, we calculated the number of instances in which matched employees who filed either a whistleblower disclosure or a retaliation complaint were terminated from federal employment. As we did with filing rates, we also estimated minimum and maximum termination rates to account for the uncertainty introduced by unmatched cases.
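The tiered matching described above can be sketched in a few lines of code. This is an illustrative sketch only: the field names (name, agency, state, job_series, email) and the exact tier order are assumptions for illustration, not the actual OSC 2000 or EHRI schema.

```python
def match_complaints(complaints, ehri):
    """Link complaint records to workforce records by trying successively
    narrower key combinations, as in the tiered matching described above."""
    key_tiers = [
        ("name", "agency"),                # first pass: name + agency
        ("name", "agency", "state"),       # tie-breakers, added only when
        ("name", "agency", "job_series"),  # the broader key is ambiguous
        ("name", "agency", "email"),
    ]
    matched, unmatched = {}, []
    for c in complaints:
        pool = ehri
        for tier in key_tiers:
            if any(c.get(f) is None for f in tier):
                continue  # cannot use a tier whose fields are missing
            key = tuple(c[f] for f in tier)
            hits = [e for e in pool if tuple(e.get(f) for f in tier) == key]
            if len(hits) == 1:  # unique match found at this tier
                matched[c["case_id"]] = hits[0]
                break
            if hits:
                pool = hits  # multiple hits: narrow the pool and refine
        else:
            unmatched.append(c["case_id"])  # probationary status unknown
    return matched, unmatched
```

The unmatched cases produced by a process like this are what motivate the report's use of estimated ranges rather than point estimates.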
Terminations were used because they represent adverse consequences for employees that could indicate retaliation. While other indicators, such as transfers, could represent a potential retaliatory action, we focus on terminations because this is the most serious adverse action against which probationary employees have little protection, and because OSC officials indicated that complaints with termination are prioritized. We did not determine (1) whether the disclosures or complaints had merit, (2) whether the termination actions were justified, or (3) whether the termination actions were before or after the filing of the whistleblower disclosure or retaliation complaint. Because these estimates do not consider the timing or merit of terminations, or other factors potentially associated with terminations, they do not represent proof of a causal relationship between filing and terminations, but rather one indicator of potential risk. To produce reasonably conservative estimates, we made certain assumptions in estimating the minimum and maximum rates in our ranges. Specifically, for unmatched cases we assumed that unknown characteristics, including probationary status and termination rate, could be as much as 3.5 times their observed rate in known data. We believe these assumptions are reasonably conservative. While it is not impossible for this small group of unmatched complaints to be even more skewed, there is no evidence to suggest such an extreme assumption would be warranted. We assessed the reliability of the OSC 2000 and EHRI databases for the purposes of using limited data from these databases for our own analysis. We reviewed agency documents, electronically tested data for missing data and outliers, and submitted questions to agency officials about these databases. These two databases are the only sources of data that can be compared to determine the probationary status of individuals filing complaints with OSC.
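The bounding logic described above can be illustrated with a small helper. The report does not spell out the exact formula used, so this sketch makes two assumptions: the minimum treats unmatched cases as having none of the characteristic, and the maximum allows the unmatched rate to reach 3.5 times the observed rate (capped at 100 percent). Function and argument names are hypothetical.

```python
def rate_bounds(matched_with, matched_total, unmatched_total, factor=3.5):
    """Bound a rate (e.g., a termination rate) when some cases are unmatched.

    matched_with:    matched cases that have the characteristic
    matched_total:   all matched cases
    unmatched_total: cases whose characteristic is unknown
    """
    observed = matched_with / matched_total
    n = matched_total + unmatched_total
    # Minimum: assume no unmatched case has the characteristic.
    low = matched_with / n
    # Maximum: assume the unmatched rate is up to `factor` times the
    # observed rate, but never more than all unmatched cases.
    extra = min(unmatched_total, factor * observed * unmatched_total)
    high = (matched_with + extra) / n
    return low, high
```

For example, 10 terminations among 100 matched cases with 20 unmatched cases yields a range of roughly 8.3 to 14.2 percent around the 10 percent observed rate, which is why the report presents ranges rather than single figures.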
We determined that OSC’s data were sufficiently reliable to present the number of complaints filed by type. With regard to probationary status, the data were not available in OSC 2000. As a result, probationary status and termination rates were drawn from EHRI, which we found to be sufficiently reliable for this purpose. We conducted this performance audit from January 2019 to May 2020 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Employees Filing Whistleblower Disclosures and Retaliation Complaints, Fiscal Years 2014-2018 The figure shown below details the distribution of probationary matched, permanent matched,and unmatched complaints for fiscal years 2014- 2018. Appendix III: Comments of the Office of Special Counsel Appendix IV: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Clifton G. Douglas Jr. (Assistant Director), Katherine Wulff (Analyst-In-Charge), Michael Bechetti, Karin Fangman, Steven Flint, Robert Gebhart, Steven Putansu and Wesley Sholtes made key contributions to this report.
Why GAO Did This Study Federal employee whistleblowers—individuals who report allegations of wrongdoing—potentially help to safeguard the government from fraud, waste, and abuse. OSC was created to help protect whistleblowers. Probationary employees—generally those with less than 1 or 2 years of federal service—can be especially vulnerable to reprisal because they have fewer protections from adverse personnel actions, including termination. A 2017 law included a provision for GAO to examine retaliation against whistleblowers in their probationary period. This report examines (1) the extent to which probationary employees filed whistleblower disclosures or reprisal complaints, (2) termination rates of complainants, and (3) OSC procedures related to probationary employees. GAO used complaint data and workforce data to identify the probationary status of employees who filed claims with OSC from fiscal year 2014 to 2018 (the most recent full years of available data); estimated the number of instances where claimants were terminated; and reviewed OSC procedures. What GAO Found GAO found that existing data are not sufficient to determine if the rates of filing whistleblower disclosures, retaliation complaints, or both vary by probationary status. The average annual number of probationary and permanent federal employees from fiscal years 2014 to 2018 was approximately 1.9 million employees. Over this time frame, an average of approximately 2,800 employees—about 0.15 percent—filed complaints each year. Existing data were not sufficient to determine probationary status of employees for over 18 percent of each year's complaints. Therefore, it is not possible to determine whether probationary employees file at lower, comparable, or higher rates than their prevalence in the overall employee population. 
Specifically, probationary employees represented about 13.5 percent, on average, of the federal workforce, and GAO estimates that they filed from 6.6 percent to 18.2 percent of complaints. GAO estimates suggest that both permanent and probationary employees who filed complaints were consistently terminated at higher rates than federal employees government-wide. For example, in fiscal year 2018, the termination rate for probationary employees government-wide was 1.1 percent, while the lowest estimated rate of termination among probationary employees who filed a complaint was 10.1 percent. For permanent employees, the overall termination rate was 0.3 percent, while the lowest estimated rate for filers was 2.9 percent. GAO estimates also suggest that probationary employees who filed complaints were terminated at higher rates than permanent employees who did the same. For example, in fiscal year 2018: The lowest estimated termination rate for probationary employees who filed whistleblower disclosures (10.1 percent) exceeded the maximum estimated rate for permanent employees who did the same (5.2 percent). The lowest estimated termination rate for probationary employees who filed retaliation complaints (17.4 percent) exceeded the maximum estimated rate for permanent employees who did the same (9.9 percent). The lowest estimated termination rate for probationary employees who filed both types (14.1 percent) exceeded the maximum estimated rate for permanent employees who did the same (13.2 percent). The Office of Special Counsel's (OSC) complaint form allows but does not require complainants to identify whether they are probationary or permanent employees when filing a whistleblower disclosure or retaliation complaint. OSC officials said they try to limit mandatory data fields to the information that is necessary for processing a case, and that they have no plans to do any analysis of employees in their probationary period who file claims. 
However, the higher rates of termination GAO found for filers generally, and probationary employees specifically, suggest that there could be a risk of unequal treatment. Without first identifying probationary employees who file whistleblower claims, OSC would lack complete data should it decide at some point to analyze the effect of probationary status on filers. Collecting and maintaining such data on every claimant would provide OSC or other entities the ability to analyze termination rates or other issues related to a whistleblower's probationary status. What GAO Recommends GAO recommends that OSC require claimants to identify their status as permanent or probationary employees. OSC disagreed with GAO's recommendation. GAO continues to believe the recommendation is valid, as discussed in the report.
Background Globalization of Drug Manufacturing Drugs sold in the United States—including active pharmaceutical ingredients (APIs) and finished dosage forms—are manufactured throughout the world. According to FDA, as of August 2019 about 70 percent of establishments manufacturing APIs and more than 50 percent of establishments manufacturing finished drugs for the U.S. market were located overseas. As of March 2019, FDA data showed that India and China had the most manufacturing establishments shipping drugs to the United States, with about 40 percent of all foreign establishments in these two countries. (See fig. 1.) Types of Inspections FDA is responsible for overseeing the safety and effectiveness of all drugs marketed in the United States, regardless of where they are manufactured. Drugs manufactured overseas must meet the same statutory and regulatory requirements as those manufactured in the United States. FDA’s Center for Drug Evaluation and Research (CDER) establishes standards for the safety, quality, and effectiveness of, and manufacturing processes for, over-the-counter and prescription drugs. CDER requests that FDA’s Office of Regulatory Affairs (ORA) inspect both domestic and foreign establishments to ensure that drugs are produced in conformance with applicable laws of the United States, including current good manufacturing practice (CGMP) regulations. FDA investigators generally conduct three main types of drug manufacturing establishment inspections: preapproval inspections, surveillance inspections, and for-cause inspections, as described in table 1. At times, FDA may conduct an inspection that combines both preapproval and surveillance inspection components in a single visit to an establishment. FDA uses multiple databases to select foreign and domestic establishments for surveillance inspections, including its registration database and inspection database. 
Because the establishments are continuously changing as they begin, stop, or resume marketing products in the United States, CDER creates a monthly catalog of establishments. The establishments in the catalog are prioritized for inspection twice each year. In our 2008 report we found that, because of inaccurate information in FDA’s databases, the agency did not know how many foreign drug establishments were subject to inspection. For example, some establishments included in FDA’s registration database may have gone out of business and did not inform FDA that they had done so, or they did not actually manufacture drugs for the U.S. market. In our report, we noted that some foreign establishments may register because, in foreign markets, registration may erroneously convey an “approval” or endorsement by FDA, when in fact the establishment may never have been inspected by FDA. We recommended that FDA take steps to improve the accuracy of this registration information. In our 2010 and 2016 reports we found that FDA had taken steps to improve the accuracy and completeness of information in its catalog of drug establishments subject to inspection, such as using contractors to conduct site visits to verify the existence of registered foreign establishments and confirm that they manufacture the products that are recorded in U.S. import records. To prioritize establishments for surveillance inspections, CDER applies a risk-based site selection model to its catalog of establishments to identify those establishments (both domestic and foreign) that, based on the characteristics of the drugs being manufactured, pose the greatest potential public health risk should they experience a manufacturing defect. This model analyzes several factors, including inherent product risk, establishment type, inspection history, and time since last inspection, to develop a list of establishments that FDA considers to be a priority for inspection. 
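The ranking step of such a model can be sketched as a weighted scoring over the cited factors. This is an illustrative sketch only: the factor names and weights below are assumptions for illustration, and FDA's actual site selection model is not described at this level of detail.

```python
# Hypothetical factor weights; FDA's actual model and weights differ.
WEIGHTS = {
    "product_risk": 0.4,            # inherent risk of the drugs produced
    "establishment_type": 0.1,
    "inspection_history": 0.2,      # prior deficiencies raise the score
    "years_since_inspection": 0.3,  # normalized time since last inspection
}

def prioritize(establishments):
    """Rank establishments by weighted risk score, highest risk first."""
    def score(est):
        return sum(w * est.get(factor, 0.0) for factor, w in WEIGHTS.items())
    return sorted(establishments, key=score, reverse=True)
```

Under a scheme like this, an establishment with moderate product risk but a long gap since its last inspection and a history of deficiencies can outrank one making higher-risk products that was inspected recently.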
Through this process, CDER develops a ranked list of foreign and domestic establishments selected for inspection that is submitted to ORA. To be efficient with its resources, ORA staff may shift the order of establishments to be inspected on CDER’s prioritized list based on geographic proximity to other planned inspection trips, according to FDA officials. FDA Inspection Workforce Investigators from ORA and, as needed, ORA laboratory analysts with certain expertise are responsible for inspecting drug manufacturing establishments. FDA primarily relies on three groups of investigators to conduct foreign inspections: ORA investigators based in the United States, who primarily conduct domestic drug establishment inspections but may sometimes conduct foreign inspections. Members of ORA’s dedicated foreign drug cadre, a group of domestically based investigators, who exclusively conduct foreign inspections. Investigators assigned to and living in the countries where FDA has foreign offices, who include both staff based in the foreign offices full time and those on temporary duty assignment to the foreign offices. FDA began opening offices around the world in 2008 to obtain better information on the increasing number of products coming into the United States from overseas, to build relationships with foreign stakeholders, and to perform inspections. FDA full-time foreign office staff are posted overseas for 2-year assignments. FDA staff can also be assigned to the foreign offices on temporary duty assignments for up to 120 days. In fiscal year 2019, there were full-time and temporary duty drug investigators assigned to FDA foreign offices in China and India. Post-Inspection Activities FDA’s process for determining whether a foreign establishment complies with CGMPs involves both CDER and ORA. During an inspection, ORA investigators are responsible for identifying any significant objectionable conditions and practices and reporting these to the establishment’s management. 
Investigators suggest that the establishment respond to FDA in writing concerning all actions taken to address the issues identified during the inspection. Once ORA investigators complete an inspection, they are responsible for preparing an establishment inspection report to document their inspection findings. Inspection reports describe the manufacturing operations observed during the inspection and any conditions that may violate U.S. statutes and regulations. Based on their inspection findings, ORA investigators make an initial recommendation regarding whether regulatory actions are needed to address identified deficiencies using one of three classifications: no action indicated (NAI); voluntary action indicated (VAI); or official action indicated (OAI). Inspection reports and initial classification recommendations for regulatory action are to be reviewed within ORA. For inspections classified as OAI—where ORA identified serious deficiencies—such inspection reports and classification recommendations are to be reviewed within CDER. CDER is to review the ORA recommendations and determine whether regulatory action is necessary. CDER also is to review inspection reports and initial classification recommendations for all for-cause inspections, regardless of whether regulatory action is recommended by ORA. According to FDA policy, inspections classified as OAI may result in regulatory action, such as the issuance of a warning letter. FDA issues warning letters to those establishments manufacturing drugs for the U.S. market that are in violation of applicable U.S. laws and regulations and may be subject to enforcement action if the violations are not promptly and adequately corrected. In addition, warning letters may notify foreign establishments that FDA may refuse entry of their drugs at the border or recommend disapproval of any new drug applications listing the establishment until sufficient corrections are made. 
FDA may take other regulatory actions if it identifies serious deficiencies during the inspection of a foreign establishment. For example, FDA may issue an import alert, which instructs FDA staff that they may detain drugs manufactured by the violative establishment that have been offered for entry into the United States. In addition, FDA may conduct regulatory meetings with the violative establishment. Regulatory meetings may be held in a variety of situations, such as a follow-up to the issuance of a warning letter to emphasize the significance of the deficiencies or to communicate documented deficiencies that do not warrant the issuance of a warning letter. The Number of Foreign Inspections Declined in Recent Years, and the Majority of Such Inspections Identified Deficiencies Total Number of FDA Foreign Drug Inspections Has Decreased Since Fiscal Year 2016 after Several Years of Increases In December 2019, we found that from fiscal year 2012 through fiscal year 2016, the number of FDA foreign drug manufacturing establishment inspections increased but then began to decline after fiscal year 2016. In fiscal year 2015, the total number of foreign inspections surpassed the number of domestic inspections for the first time. However, from fiscal year 2016 through 2018, both foreign and domestic inspections decreased—by about 10 percent and 13 percent, respectively. FDA officials attributed this decrease to vacancies in the number of investigators available to conduct inspections (which we discuss later in this testimony statement) and to inaccurate data used to select establishments for inspection in fiscal years 2017 and 2018. Despite steps taken to improve the accuracy and completeness of FDA data on foreign establishments, in December 2019, we found that the data challenges we identified in our 2008 report continue to make it difficult for FDA to accurately identify establishments subject to inspection. 
Specifically, since 2017, FDA had pursued an initiative to inspect approximately 1,000 foreign establishments that lacked an inspection history. As of November 2019, officials said all of these establishments had either been inspected or were determined not to be subject to inspection because they did not actually manufacture drugs for the U.S. market or had not recently shipped drugs to the United States. However, officials told us that this effort contributed to the decline in the number of foreign inspections conducted because of how data inaccuracies affected the process for selecting establishments for inspection. Specifically, after selecting uninspected foreign establishments for inspection, FDA determined that a sizeable percentage of these establishments were not actually subject to inspection (e.g., about 40 percent of those assigned to the China Office in fiscal years 2017 and 2018). These foreign establishments were thus removed from the list for inspection for the given year. FDA officials told us that the next highest priority establishments identified through the risk-based model to replace those establishments were domestic establishments. As a result, the number of foreign establishments actually inspected decreased. As part of our ongoing work, we plan to examine the accuracy and completeness of information FDA maintains about foreign establishments and the application of its risk-based site selection process. We further found that FDA continued to conduct the largest number of foreign inspections in India and China, with inspections in these two countries representing about 40 percent of all foreign drug inspections from fiscal year 2016 through 2018. (See table 2.) In addition to India and China, the rest of the countries in which FDA most frequently conducted inspections have generally been the same since our 2008 report.
Since we last reported on this issue, FDA announced in March 2020 that, due to COVID-19, it was postponing most inspections of foreign manufacturing establishments. Only inspections deemed mission-critical would still be considered on a case-by-case basis. According to the announcement, while the pandemic has added new complexities, FDA has other tools to ensure the safety of the U.S. drug supply. For example, FDA announced that it was evaluating additional ways to conduct its inspectional work that would not jeopardize public safety and would protect both the establishments and the FDA staff. Such ways, according to FDA, could include reviewing the compliance histories of establishments, using information shared by foreign regulatory partners, and evaluating establishment records in lieu of an onsite inspection. In addition, in a May 11, 2020 press statement, the FDA Commissioner stated that while FDA's regulatory oversight is vital to the long-term health of America, product safety and quality are ultimately the establishment's responsibility. Most firms, according to FDA, strive to reliably provide quality products and maintain the integrity of the supply chain. However, the lack of foreign inspections removes a critical source of information about the quality of drugs manufactured for the U.S. market. It is not clear when FDA will resume regular inspections. The agency originally announced the postponement would last through April 2020. However, on May 11, 2020, it stated that the postponement would continue. According to FDA, the agency continues to closely monitor the global situation. FDA stated that it remains in contact with its foreign regulatory counterparts and would work with the Centers for Disease Control and Prevention to develop a process that would govern how and where to return to on-site facility inspections as conditions improve.
Most Foreign Inspections Were for Surveillance In December 2019, we found that each year from fiscal year 2012 through 2018 at least 50 percent of FDA’s foreign inspections were surveillance inspections. In contrast to preapproval inspections, surveillance inspections are used to ensure drugs already on the market are manufactured in compliance with FDA regulations. In recent years, the proportion of foreign surveillance inspections has increased. As figure 2 shows, in fiscal year 2012, 56 percent of foreign inspections were surveillance-only inspections; in contrast, from fiscal year 2016 through 2018, about 70 percent of foreign inspections were surveillance-only, which was comparable to the percentage for domestic inspections during that period. This is a significant increase from the 13 percent of foreign inspections that were surveillance-only when we made our 2008 recommendation that FDA inspect foreign establishments at a comparable frequency to their domestic counterparts (85 percent of which were surveillance-only at that time). In our December 2019 testimony, we also reported that FDA implemented changes to its foreign drug inspection program since our 2008 report that may have contributed to the increase in surveillance inspections. Prior to 2012, FDA was required to inspect domestic establishments that manufacture drugs marketed in the United States every 2 years, but there was no similar requirement for foreign establishments. As a result, and as we reported in 2008, foreign inspections were often preapproval inspections driven by pending applications for new drugs. FDA thus conducted relatively few surveillance-only inspections to monitor the ongoing compliance of establishments manufacturing drugs that were already on the market, with just 13 percent of foreign inspections conducted for surveillance purposes at the time of our 2008 report. 
However, in 2012, the Food and Drug Administration Safety and Innovation Act eliminated the 2-year requirement for domestic inspections, directing FDA to inspect both domestic and foreign establishments on a risk-based schedule determined by an establishment’s known safety risks, which was consistent with our 2008 recommendation. FDA Identified Deficiencies during the Majority of Foreign Inspections In December 2019, we found that from fiscal year 2012 through 2018, FDA identified deficiencies in approximately 64 percent of foreign drug manufacturing establishment inspections (3,742 of 5,844 inspections). This includes deficiencies necessitating a classification of VAI, or the more serious OAI, as described in the text box. Based on their inspection findings, FDA investigators make an initial recommendation regarding the classification of each inspection: No action indicated (NAI) means that insignificant or no deficiencies were identified during the inspection. Voluntary action indicated (VAI) means that deficiencies were identified during the inspection, but the agency is not prepared to take regulatory action, so any corrective actions are left to the establishment to take voluntarily. Official action indicated (OAI) means that serious deficiencies were found that warrant regulatory action. About 59 percent of domestic inspections (3,702 out of 6,291) identified deficiencies during this time period. (See fig. 3.) This proportion is similar to what we found when we last looked at this issue in 2008, when FDA identified deficiencies in about 62 percent of foreign inspections and 51 percent of domestic inspections from fiscal years 2002 through 2006. 
Our December 2019 analysis showed that serious deficiencies identified during foreign drug inspections classified as OAI—which represented 8 percent of inspections from fiscal year 2012 through 2018—include CGMP violations such as those related to production and process controls, equipment, records and reports, and buildings and facilities. For example:
Failure to maintain the sanitation of the buildings used in the manufacture, processing, packing, or holding of a drug product (21 C.F.R. § 211.56(a) (2019)). At an establishment in India producing finished drug products, the investigator reported observing a live moth floating in raw material used in the drug production, and that the facility staff continued to manufacture the drug products using the raw material contaminated by the moth, despite the investigator pointing out its presence.
Failure to perform operations relating to the manufacture, processing, and packing of penicillin in facilities separate from those used for other drug products (21 C.F.R. § 211.42(d) (2019)). At an establishment in Turkey that manufactured penicillin and other drugs, the investigator reported that the manufacturer had detected penicillin outside the penicillin manufacturing area of the establishment multiple times. According to FDA, penicillin contamination of other drugs presents great risk to patient safety, including potential anaphylaxis (even at extremely low levels of exposure) and death.
Some investigators who conduct foreign inspections expressed concern with instances in which ORA or CDER reviewers reclassified the investigator’s initial inspection classification recommendations of OAI to the less serious classification of VAI. FDA Continued to Face Challenges Filling Vacancies among Staff Conducting Foreign Inspections In December 2019, we found that FDA’s foreign inspection workforce had staff vacancies, which FDA officials said contributed to the recent decline in inspections.
As previously mentioned, FDA used multiple types of staff resources to conduct foreign drug inspections—including ORA investigators based in the United States, members of ORA’s dedicated foreign drug cadre based in the United States, and investigators assigned to FDA’s foreign offices. However, we found that each of these groups had current vacancies. At the time of our December testimony, FDA officials told us that the agency was trying to fill vacancies in each of these groups, but the lower staff numbers may limit FDA’s ability to conduct more foreign inspections. ORA investigators based in the United States. This group of investigators conducted the majority of foreign inspections; about 76 percent of foreign inspections in fiscal year 2018 involved an ORA investigator based in the United States who conducts both foreign and domestic inspections. FDA officials said that the more experienced investigators from this group are expected to conduct three to six foreign inspections per year, and investigators hired using generic drug user fees are expected to inspect nine to 12 foreign establishments per year. As of June 2019, there were 190 investigators eligible to conduct foreign drug inspections, but officials said that as of November 2019, the agency had an additional 58 vacancies in this group. At the time of our December 2019 testimony, officials said that the agency was in the process of hiring 26 ORA investigators based in the United States to fill these vacancies, with 32 vacancies remaining. FDA officials attributed the vacancies to multiple factors: investigator retirements, investigator movement to other parts of FDA, and the need to fill additional investigator positions funded by generic drug user fees. Officials also said that a reorganization within ORA led to a reduced number of investigators who conduct drug manufacturing establishment inspections.
While FDA had recently filled several of the vacancies, officials told us that new investigators are not typically used for foreign inspections until they have been with the agency for 2 to 3 years. ORA dedicated foreign drug cadre. About 15 percent of foreign inspections in fiscal year 2018 involved an investigator from ORA’s dedicated foreign drug cadre—a group of ORA investigators based in the United States who exclusively conduct foreign inspections. FDA officials said that members of the cadre are expected to conduct 16 to 18 foreign inspections each year. According to FDA, the cadre had 20 investigators in 2012 and 15 investigators in 2016. However, the cadre had only 12 investigators as of November 2019, out of 20 available slots. At the time of our December 2019 testimony, FDA officials told us that the agency was attempting to fill these positions from the current ORA investigator pool, but officials were not confident that all 20 slots would be filled. Investigators assigned to FDA’s foreign offices. Approximately 7 percent of foreign inspections in fiscal year 2018 involved investigators from FDA’s foreign offices. The investigators conducting these inspections were those based in the China and India foreign offices—the countries where most drug inspections occur—and also included those investigators on temporary duty assignment to these offices. According to FDA officials, these investigators are expected to conduct 15 foreign inspections each year. We have noted high vacancy rates for these foreign offices in past reports. While these vacancy rates have decreased over time, vacancies persist. As of November 2019, FDA’s China office had three of 10 drug investigator positions vacant (a 30 percent vacancy rate), while FDA’s India office had two of six drug investigator positions vacant (a 33 percent vacancy rate). 
In our December 2019 testimony, we reported that FDA had taken steps to address vacancies in the foreign offices but continued to face challenges. In our 2010 report, we recommended that FDA develop a strategic workforce plan to help recruit and retain foreign office staff. FDA agreed with our recommendation and released such a plan in March 2016, but the long-standing vacancies in the foreign offices raise questions about its implementation. FDA officials told us that one challenge in recruiting investigators for the foreign offices is that well-qualified investigators for those positions need foreign inspection experience. For example, an official in FDA’s India office told us that foreign inspections can be challenging, and the India office does not have the resources to develop or train new investigators. Therefore, it is important to recruit investigators who have experience conducting foreign inspections, and such investigators are recruited from ORA. Thus, vacancies in the other two groups of investigators can influence the number of staff available to apply for positions in the foreign offices. Further, according to FDA officials, after employees have accepted an in-country position, the agency can experience significant delays before they are staffed in the office due to delays in processing assignments. For example, an official in FDA’s India office said that investigators need to complete a week-long security training program and must obtain the security clearance needed to work at the U.S. Embassy, which is where FDA’s foreign office is located. However, the official told us that there is limited availability for that training, and background checks for security clearances can take time. According to this official, FDA investigators did not usually receive first priority for the training. FDA estimated that it can take from as little as 1 month to over 2 years for an investigator to clear background and medical checks and arrive at a foreign office.
For example, an investigator in FDA’s China office told us that as a result of these requirements and other issues, it took nearly 2 years for the investigator to arrive at the office after FDA had accepted the investigator’s application. According to FDA’s own strategic workforce plan for the foreign offices, these types of delays have resulted in staff changing their decision after accepting a position in the foreign offices. Persistent Challenges Unique to Foreign Inspections Raised Questions about Their Equivalence to Domestic Inspections In December 2019, we found that FDA continues to face unique challenges when inspecting foreign drug establishments that raise questions about whether these inspections are equivalent to domestic inspections. Specifically, based on our interviews with drug investigators in the dedicated foreign drug cadre and in FDA’s foreign offices in China and India, we identified four challenge areas related to conducting foreign inspections, which are described below. Of the four challenge areas identified, three areas—preannouncing inspections, language barriers, and lack of flexibility—were also raised in our 2008 report. Preannouncing Inspections. As we reported in 2008, the amount of notice FDA generally gives to foreign drug establishments in advance of an inspection is different than for domestic establishments. Drug establishment inspections performed in the United States are almost always unannounced, whereas foreign establishments generally receive advance notice of an FDA inspection. According to FDA officials, FDA is not required to preannounce foreign inspections. However, they said the agency generally does so to avoid wasting agency resources, obtain the establishment’s assistance to make travel arrangements, and ensure the safety of investigators when traveling in country. 
In our December 2019 testimony, we found that FDA does conduct some unannounced foreign inspections, particularly if the investigators conducting the inspection are based in FDA’s foreign offices. However, FDA officials told us that FDA does not have data on the frequency with which foreign drug inspections are unannounced, or on the extent to which the amount of notice provided to foreign establishments varies. According to FDA officials, this is because FDA does not have a data field in its database to systematically track this information. However, the officials estimated that, when investigators travel from the United States, the agency generally gives establishments 12 weeks of notice that investigators are coming. While investigators in FDA’s China and India offices do conduct unannounced or short-notice inspections, these staff do not perform most of the inspections in these countries. (See table 3.) Our work indicated that preannouncing foreign inspections can create challenges and raises questions about the equivalence to domestic inspections. Of the 18 investigators we interviewed, 14 said that there are downsides to preannouncing foreign inspections, particularly that providing advance notice gives foreign establishments the opportunity to fix problems before the investigator arrives. For example, when an inspection is preannounced, it gives establishments time to clean up their facility and update or generate new operating procedures ahead of the inspection. However, establishments are expected to be in a constant state of compliance and always ready for an FDA inspection, and several investigators told us seeing the true day-to-day operating environment for an establishment is more likely during an unannounced inspection. Of the 18 investigators we interviewed for our December 2019 testimony, 12 said that unannounced inspections are generally preferable to preannounced inspections.
One investigator told us that, although they believed the best way to ensure industry compliance with CGMPs was for establishments to not know when FDA is coming for an inspection, there was no data that would allow the agency to evaluate whether unannounced inspections were better than preannounced inspections. In addition, some investigators told us that it was still possible to identify serious deficiencies during preannounced inspections. For example, investigators could still identify issues by looking at the firm’s electronic records, including time-stamped data relating to the creation, modification, or deletion of a record. Three investigators also told us that in some cases there could be benefits to announcing inspections in advance. For example, for preapproval inspections, announcing the inspection in advance gives the establishment time to organize the documentation and staff needed to conduct the inspection. Language Barriers. Work for our December 2019 testimony indicated that language barriers—which we first reported as a challenge to conducting foreign inspections in our 2008 report—can add time to inspections and raise questions about the accuracy of information FDA investigators collect and thus about the equivalence to domestic inspections. FDA generally does not send translators on inspections in foreign countries. Rather, investigators rely on the drug establishment to provide translation services, which can be an English-speaking employee of the establishment being inspected, an external translator hired by the establishment, or an English-speaking consultant hired by the establishment. Of the 18 investigators that we interviewed, 14 said that language barriers can be a challenge to conducting foreign inspections and were especially challenging in parts of Asia, including China and Japan.
Seven investigators told us this issue was less of a challenge for inspections conducted in other foreign countries, including India and countries in Europe, because workers at establishments in these countries were more likely to speak English, and documentation was also more likely to be in English. Investigators told us that compared to domestic inspections, it can be more challenging and take longer to complete typical inspection- related activities, such as reviewing documentation or interviewing employees, if the investigator needed to rely on translation. Fourteen of the 18 investigators we interviewed said that there can be concerns related to relying on establishment staff and independent translators. Specifically, 11 investigators told us there can be uncertainties regarding the accuracy of the information being translated, particularly when investigators rely on the translation provided by an employee of the establishment being inspected. For instance, one investigator said that there was more risk of conflict of interest if the establishment used its own employees to translate. Another investigator said that they went to a drug establishment in China that told FDA it had English-speaking employees to translate the inspection, but that was not the case, and the investigator had to use an application on their phone to translate the interviews. In addition, the firm representative providing the translation may be someone who does not have the technical language needed, which can make it harder to communicate with firm staff and facilitate the inspection. One investigator told us that the independent translators hired by firms were sometimes consultants and, in those instances, it can seem like the consultants are coaching the firm during the inspection. FDA officials told us that when they conduct unannounced for-cause inspections in China, investigators bring locally employed staff who work in FDA’s China office to act as translators. 
The investigators we interviewed said that in such instances, they valued knowing that the translation they were getting was accurate. However, FDA does not have the resources to provide locally employed staff on every inspection, according to an FDA official. Lack of Flexibility. Work for our December 2019 testimony indicated that, as we first reported in 2008, the overseas travel schedule can present unique challenges for FDA’s domestically based investigators—including both ORA investigators and members of the dedicated foreign drug cadre—who conduct the majority of foreign inspections. Eight of the 12 dedicated foreign drug cadre investigators that we interviewed for our December 2019 testimony told us that there is little flexibility to extend foreign inspections conducted by domestically based investigators, because the inspections they conduct on an overseas trip are scheduled back-to-back in 3-week trips that may involve three different countries. This raises questions about their equivalence to domestic inspections. For instance, extending one inspection would limit the amount of time the investigator has to complete their other scheduled inspections, some investigators told us. In addition, eight investigators told us that domestically based staff are generally unable to extend the total amount of time spent on an overseas trip—one investigator told us that an investigator would have to find something really bad to justify an extension. In contrast, FDA officials told us that inspections conducted by in-country investigators in China or India, and domestic inspections in the United States, are generally scheduled one at a time and can thus more easily be extended if the investigator needs additional time to pursue potential deficiencies. However, in-country investigators are not involved in the majority of inspections conducted in China or India.
Three investigators from the dedicated foreign drug cadre told us that when they travel overseas, they adjust their inspection approach to help ensure they finish foreign inspections on time. For example, one investigator told us that an investigator may start the inspection in an area of the establishment that was noted as having issues during the last inspection. However, one investigator said that sometimes it is not possible to cover everything in depth during a foreign inspection. Another investigator told us that they focus on identifying the most serious issues during a foreign inspection, and that less serious issues can be identified in the establishment inspection report for reference in the next inspection. Five investigators also noted that they work long hours during their inspection to ensure they can complete the needed work. While FDA may assign more than one investigator to an inspection to complete needed work, one investigator said that FDA does not usually assign more than one person to an inspection because investigators are expected to have the experience to conduct inspections by themselves. FDA data show that from fiscal years 2012 through 2018, the majority of both foreign and domestic inspections were conducted by one person—77 percent and 66 percent, respectively. Post-Inspection Classification Process. According to FDA officials, starting in fiscal year 2018, FDA implemented a new post-inspection classification process: when an ORA investigator recommends an OAI classification following an inspection, ORA compliance is required to send that inspection report to CDER for review within 45 calendar days from the inspection closeout. Among other things, the process was intended to help ensure FDA can communicate inspection results to domestic and foreign establishments within 90 days of the inspection closeout, as committed to under the Generic Drug User Fee Amendments of 2017 (GDUFA II).
FDA officials told us that the changes also required an additional ORA review for foreign inspection reports to align that process with the process for domestic inspection reports. Although the 45-day reporting time frame for potential OAI classifications is a requirement for both domestic and foreign inspections, adding the additional level of review within ORA effectively shortened the amount of time investigators have to document findings for foreign inspections. Our work indicated that the post-inspection reporting time frames can create challenges for domestic investigators who conduct foreign inspections and raise questions about the equivalence to domestic inspections. Eight of the 18 investigators we interviewed for our December 2019 testimony said shortening the time for completing reports and adding a level of review has made it more challenging to meet reporting requirements, especially if serious deficiencies are identified during the inspection. Investigators told us that for a potential OAI inspection, they now need to send the inspection report to their supervisor for endorsement within 10 days of the closeout of a foreign inspection, regardless of when the investigator’s next inspection is scheduled, or whether the investigator has to travel from overseas back to the United States after the inspection. For example, if a domestic investigator finds serious deficiencies on the first inspection of an overseas trip—thus indicating an initial OAI classification—the investigator needs to write and send the related inspection report to the ORA supervisor for endorsement before returning home from the 3-week overseas trip to meet the required time frame. One investigator told us that, as a result of the time pressures, post-inspection reports may be less thorough, and that some inspection observations could be better supported if investigators had more time to write the reports.
In conclusion, foreign manufacturing establishments continue to be a critical source of drugs for millions of Americans, and FDA inspections are a key tool to ensure the quality of these drugs. Over the years since we first examined this issue, FDA has made significant changes to adapt to the globalization of the pharmaceutical supply chain and has greatly increased the number of inspections it conducts of foreign establishments. However, we found in December 2019 that the agency faced many of the same challenges overseeing foreign establishments that we identified over the last two decades. These included inspector vacancies and unique challenges when inspecting foreign drug establishments that raised questions about the equivalence of those inspections to domestic inspections. Since then, the outbreak of COVID-19 has added a layer of complexity. It also further highlights the global nature of our pharmaceutical supply chain. Chairman Grassley, Ranking Member Wyden, and Members of the Committee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. GAO Contact and Staff Acknowledgments If you or your staff have any questions about this testimony, please contact Mary Denigan-Macauley, Director, Health Care at (202) 512-7114 or DeniganMacauleyM@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are William Hadley (Assistant Director); Derry Henrick (Analyst-in-Charge); Katherine L. Amoroso; George Bogart; Zhi Boon; Rebecca Hendrickson; John Lalomio; Gail-Lynn Michel; Laurie Pachter; and Vikki Porter. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO.
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study The outbreak of COVID-19 has called greater attention to the United States' reliance on foreign drug manufacturers and further highlighted the importance of ensuring a safe pharmaceutical supply chain. Much of the manufacturing of drugs for treating COVID-19 occurs overseas, which is also true of the majority of other drugs marketed in the United States. While the volume of drugs manufactured overseas for the U.S. market is not fully known, FDA reports that about 70 percent of establishments manufacturing active ingredients and more than 50 percent of establishments manufacturing finished drugs for the U.S. market were located overseas, as of August 2019. FDA is responsible for overseeing the safety and effectiveness of all drugs marketed in the United States, regardless of where they are produced, and conducts inspections of both foreign and domestic drug manufacturing establishments. GAO has had long-standing concerns about FDA's ability to oversee the increasingly global pharmaceutical supply chain, an issue highlighted in GAO's High Risk Series since 2009. In particular: GAO recommended in 2008 (GAO-08-970) that FDA increase the number of inspections of foreign drug establishments. GAO found in 2010 (GAO-10-961) that FDA continued to conduct relatively fewer foreign inspections than domestic inspections. GAO found in 2016 (GAO-17-143) that FDA was conducting more of these foreign drug inspections, and GAO closed its 2008 recommendation to conduct more foreign inspections. However, GAO also reported that FDA may have never inspected many foreign establishments manufacturing drugs for the U.S. market. In addition, in the summer of 2018, FDA began announcing recalls of blood pressure medications manufactured overseas that were tainted with a potential carcinogen, raising further questions about FDA’s oversight of foreign-manufactured drugs. This statement is largely based on GAO’s December 2019 testimony (GAO-20-262T) and discusses 1.
the number of foreign inspections FDA has conducted, 2. inspection staffing levels, and 3. challenges unique to foreign inspections. For that testimony, GAO examined FDA data from fiscal years 2012 through 2018 and interviewed investigators from FDA’s 2019 cadre of investigators (who are based in the United States but exclusively conduct foreign drug inspections) and from FDA’s foreign offices in China and India. What GAO Found In December 2019, GAO found that a growing number of foreign drug manufacturing inspections conducted by the Food and Drug Administration (FDA) were in China and India (43 percent in 2018), where most establishments that manufacture drugs for the United States were located. In fiscal year 2015, FDA, for the first time, conducted more foreign inspections than domestic inspections. However, from fiscal year 2016 through 2018, both foreign and domestic inspections decreased—by about 10 percent and 13 percent, respectively. FDA officials attributed the decline, in part, to vacancies among investigators available to conduct inspections. In March 2020, FDA announced that, due to Coronavirus Disease 2019 (COVID-19), it was postponing almost all inspections of foreign manufacturing establishments. While FDA has indicated it has other tools to ensure the safety of the U.S. drug supply, the lack of foreign inspections removes a critical source of information about the quality of drugs manufactured for the U.S. market. GAO also found that FDA had vacancies among each of the groups of investigators who conduct foreign inspections. FDA had 190 investigators in the United States who conduct the majority of foreign inspections, but an additional 58 positions were vacant. At the time of GAO's December 2019 testimony, FDA was in the process of filling 26 of these vacancies, with 32 remaining. However, according to FDA officials, it could be 2 to 3 years before new staff are experienced enough to conduct foreign inspections.
FDA also faced persistent vacancies among investigators in its foreign offices. GAO further found in December 2019 that FDA investigators identified persistent challenges conducting foreign inspections, raising questions about the equivalence of foreign to domestic inspections. Specifically, GAO found: While FDA inspections performed in the United States were almost always unannounced, FDA's practice of preannouncing foreign inspections up to 12 weeks in advance may have given manufacturers the opportunity to fix problems ahead of the inspection. Investigators from FDA's China and India offices had conducted some unannounced inspections, but these staff do not perform most of the inspections in these countries (27 percent and 10 percent, respectively). FDA was not generally providing translators on foreign inspections. Rather, FDA continued to rely on translators provided by the foreign establishments being inspected, which investigators said raised questions about the accuracy of information FDA investigators collected. For example, one investigator said there was more risk of conflict of interest if the establishment used its own employees to translate. In addition, the establishment representative providing the translation may be someone who does not have the technical language needed, which can make it harder to communicate with establishment staff and facilitate the inspection. The overseas travel schedule can present challenges for FDA's domestically based investigators, who conduct the majority of foreign inspections. Domestically based investigators told us there is little flexibility for them to extend foreign inspections during an overseas trip. The inspections they conduct on an overseas trip are scheduled back-to-back in 3-week trips and may involve three different countries. Therefore, extending one inspection would limit the amount of time the investigator has to complete their other scheduled inspections. 
FDA officials said that inspections conducted by investigators based in China or India (and domestic inspections in the United States) are generally scheduled one at a time and can thus more easily be extended if the investigator needs additional time to pursue potential deficiencies. However, these in-country investigators are not involved in the majority of FDA inspections conducted in China or India.
Background Federal Roles and Responsibilities in Responding to Disasters Under the National Response Framework, the Department of Homeland Security is the federal department with primary responsibility for coordinating disaster response, and within the department, FEMA has lead responsibility. Due to the massive response needed after Hurricanes Irma and Maria in the U.S. Virgin Islands and Puerto Rico, FEMA utilized the National Response Framework to activate all 14 ESFs, including ESF#8. The National Response Framework designates state, local, tribal, and territorial agencies as primarily responsible for response activities in their jurisdictions, including those related to public health and medical services. However, when effective disaster response is beyond the capabilities of the state, territorial, or tribal government and affected local governments, as was the case for Hurricanes Irma and Maria, those governments can request federal assistance. The federal response for a specific ESF is designed to supplement the state, local, tribal, or territorial resources that respond to a disaster or other emergency. However, due to the physical destruction caused by the two hurricanes in the U.S. Virgin Islands and Puerto Rico, the territorial government agencies that were tasked with coordinating resources to respond to such disasters were largely incapacitated. This resulted in an unprecedented federal role in the response to these disasters. ASPR’s Role in Responding to Disasters As the lead agency for an ESF#8 response, ASPR is responsible for coordinating the ESF#8 response core capabilities outlined in the National Response Framework. These core capabilities include assessment of public health and medical needs, patient evacuation, patient care, the provision of medical equipment and supplies, and public health communication, among others.
ASPR coordinates these core capabilities through two main roles defined in the National Response Framework—the coordinator and the primary agency. As the coordinator, ASPR oversees and coordinates the preparedness activities for ESF#8 support agencies, nongovernmental organizations, and the private sector. For example, ASPR must maintain contact with support agencies through conference calls, training, and other activities, prior to events; monitor the ESF’s progress in being able to meet the outlined core capabilities; as well as coordinate planning and preparedness efforts with nongovernmental organizations and the private sector. As the primary agency, ASPR has significant authorities, roles, resources, and capabilities to fulfill during an ESF#8 response. Its responsibilities include notifying and requesting assistance from support agencies and coordinating resources, as well as working with all types of organizations, such as ESF#8 support agencies, territory officials, and other stakeholders to maximize the use of all available resources. As part of a response, ASPR may activate the National Disaster Medical System (NDMS)—an interagency partnership among HHS, DOD, VA, and the Department of Homeland Security to supplement health and medical systems and response capabilities during a public health emergency. Under NDMS, ASPR and its partner agencies provide medical response (by deploying medical personnel teams, for example), evacuate patients, and provide medical care in NDMS medical facilities when requested by state, local, tribal, and territorial governments or other federal agencies. For example, as part of NDMS, DOD and FEMA may provide transportation to evacuate seriously ill or injured inpatients. 
DOD and VA may operate and staff NDMS Federal Coordinating Centers, which are activated during an emergency to receive, triage, stage, track, and transport patients affected by a disaster or national emergency to a participating NDMS medical facility capable of providing the required care to manage the patient's condition. After an ESF#8 response, ASPR evaluates HHS's disaster response activities through an after-action review. According to the Department of Homeland Security's Homeland Security Exercise and Evaluation Program guidance, which ASPR follows, this review should include collecting feedback about the response activities to identify strengths and areas for improvement, and developing corrective actions to address identified areas for improvement. This information is then documented in an after-action report and corrective action improvement plan.

Population Demographics and Hospital Systems in the U.S. Virgin Islands and Puerto Rico

The populations in the U.S. Virgin Islands and Puerto Rico are older than the general U.S. population. Estimates indicate that the total population in the U.S. Virgin Islands in 2018 was approximately 107,000 and about 18 percent (or about 19,000 individuals) were age 65 or older. Estimates for Puerto Rico indicate that the total population in 2018 was approximately 3.3 million and about 20 percent (or about 666,000 individuals) were age 65 or older. In comparison, almost 16 percent of the general population in the 50 states and the District of Columbia, totaling approximately 329.3 million, were age 65 or older in 2018. To serve these populations, the U.S. Virgin Islands has two hospitals, one on St. Thomas and one on St. Croix, each with a capacity of 150 beds. Puerto Rico has 68 hospitals scattered throughout the island. Bed capacity per hospital ranges from less than 10 to 515, with a total of almost 10,000 hospital beds to serve the territory.

Hurricanes Irma and Maria and Their Effects on the U.S.
Virgin Islands and Puerto Rico

The 2017 Atlantic Hurricane season was one of the most active seasons in U.S. history, causing widespread damage and destruction to significant populations in the continental United States and the territories. In particular, two hurricanes—Irma and Maria—struck in quick succession and devastated the U.S. Virgin Islands and Puerto Rico.

Hurricane Irma – This category 5 storm passed by the U.S. Virgin Islands—St. Thomas and St. John—on September 6 and continued past Puerto Rico. In the U.S. Virgin Islands, the storm caused high storm surge, flooding, extensive damage to buildings and infrastructure, and widespread power outages. It became one of the strongest Atlantic hurricanes on record.

Hurricane Maria – This category 5 storm passed by the U.S. Virgin Islands—St. Croix—on September 20 and made landfall in Puerto Rico as a category 4 storm. Hurricane Maria compounded the damage caused by Hurricane Irma in the U.S. Virgin Islands, and devastated Puerto Rico. Heavy flooding and high winds led to catastrophic damage to Puerto Rico's power grid, as well as severe damage to the water, communications, transportation, and health care infrastructure. The majority of Puerto Rico's power grid was down for nearly two months following Hurricane Maria, with outages continuing through 2018.

Figure 1 depicts the paths of Hurricanes Irma and Maria. Figure 2 contains photographs of damage sustained in the U.S. Virgin Islands. Figure 3 contains photographs of damage sustained in Puerto Rico.

Additional 2017 Hurricanes Requiring an ASPR Response

At the same time ASPR was responding to the catastrophic hurricanes in the U.S. Virgin Islands and Puerto Rico, the agency was also responding, or had recently responded, to hurricanes in other areas. Specifically, ASPR led the ESF#8 response to Hurricane Harvey, a category 4 hurricane that made landfall in Texas on August 25, 2017. Further, in addition to responding to the effects of Hurricane Irma on the U.S.
Virgin Islands, ASPR was leading the response to that hurricane in Florida. Also, while ASPR was still responding to Hurricanes Irma and Maria, Hurricane Nate, a category 1 hurricane, hit Louisiana and Mississippi on October 7 and 8, 2017, respectively. While not as severe as the prior hurricanes, Hurricane Nate resulted in wind damage, flooding, and storm surge, and required a public health and medical services response. (See figure 4 for a timeline of the 2017 hurricanes requiring ASPR to lead an ESF#8 response.)

ASPR and Support Agencies Evacuated Patients and Deployed Medical Staff and Facilities to the U.S. Virgin Islands and Puerto Rico

ASPR and support agencies evacuated critical care and dialysis patients and deployed medical staff and temporary medical facilities as part of the response to Hurricanes Irma and Maria. These activities centered on saving lives and preventing human suffering.

Evacuations of Critical Care and Dialysis Patients

During the response to Hurricanes Irma and Maria, ASPR led the NDMS evacuation of critical care and dialysis patients. According to ASPR officials, Hurricane Irma damaged critical health care infrastructure and created a deteriorating situation in St. Thomas that necessitated life-saving evacuations to Puerto Rico, particularly as St. Croix's health care facilities could not support the needs of both islands. Specifically, after Hurricane Irma damaged the only hospital on St. Thomas, ASPR prioritized evacuating critical care patients to Puerto Rico. Once ASPR officials further determined that St. Thomas did not have the capacity to treat dialysis patients, ASPR also coordinated the movement of dialysis patients to Puerto Rico. This was the first time ASPR had coordinated the evacuation of such patients during an ESF#8 response. ASPR used HHS's Centers for Medicare and Medicaid Services' data to locate dialysis patients on St. Thomas who were unable to be reached by local authorities for evacuation.
As the threat of Hurricane Maria making landfall in Puerto Rico became evident, ASPR began moving U.S. Virgin Islands patients previously evacuated to Puerto Rico to the continental United States, according to ASPR and Department of the Interior documentation. See figure 5 for a timeline of patient evacuations conducted through NDMS from the U.S. Virgin Islands and Puerto Rico after Hurricanes Irma and Maria. ASPR worked with other agencies to evacuate NDMS patients. Specifically, ASPR relied on DOD to provide transportation because HHS did not have its own transportation capabilities. For example, DOD provided personnel and transportation to conduct aeromedical evacuations of patients from the U.S. Virgin Islands to Puerto Rico and the continental United States. In addition, DOD operated a Federal Coordinating Center in the continental United States, and VA operated Federal Coordinating Centers in Puerto Rico and the continental United States to receive evacuated patients and place them into NDMS medical facilities. For example, the day after Hurricane Irma passed the U.S. Virgin Islands, ASPR requested that VA operate the San Juan Federal Coordinating Center to begin receiving evacuated U.S. Virgin Islands patients. See figure 6 for a photograph of NDMS evacuation of U.S. Virgin Islands dialysis patients to the continental United States.

Deployment of Medical Staff and Temporary Facilities

During the response to Hurricanes Irma and Maria, ASPR and some of its ESF#8 support agencies—DOD and VA—deployed medical staff and temporary medical facilities to respond to the public health and medical needs in the U.S. Virgin Islands and Puerto Rico. Using these medical assets, ASPR and its support agencies served almost 16,000 patients in Puerto Rico and almost 2,000 patients in the U.S. Virgin Islands over the course of about four weeks after Hurricane Maria, according to ASPR reports.
Examples of ASPR medical staff and facilities include, but are not limited to, the following:

Disaster Medical Assistance Teams. ASPR placed Disaster Medical Assistance Teams in front of the major hospitals in the U.S. Virgin Islands and Puerto Rico to triage patients and to relieve the hospitals' emergency departments by treating patients with acute care needs during the response to Hurricanes Irma and Maria. Disaster Medical Assistance Teams comprise about 35 medically trained personnel, along with equipment. In addition, Disaster Medical Assistance Teams were sometimes divided into six-person teams—known as Health Medical Taskforce Teams—that are more agile, according to ASPR officials. These smaller teams supported response operations in the U.S. Virgin Islands and Puerto Rico by traveling into hard-to-reach places to provide acute medical care, stabilize patients, and call for the transport of patients, when needed. According to ASPR officials, ASPR deployed a Disaster Medical Assistance Team to Puerto Rico prior to Hurricane Maria making landfall and then divided it into smaller teams to provide medical care around San Juan, Puerto Rico. According to these officials, HHS was one of the few federal agencies to have operational personnel available immediately post landfall. See figure 7 for photographs of Disaster Medical Assistance Teams setting up and providing services in Puerto Rico.

Federal Medical Stations. ASPR placed Federal Medical Stations in tents in front of hospitals in Puerto Rico after Hurricane Maria made landfall to assist with relieving the hospitals' emergency departments. Federal Medical Stations are to have a 3-day supply of medical and pharmaceutical resources to sustain up to 250 stable, primary, or chronic care patients.
Because the entire island of Puerto Rico was affected by Hurricane Maria, ASPR implemented a "hub and spoke" strategy for the first time—a system to deliver medical care over affected areas' population centers—according to ASPR officials. Under this strategy, ASPR designated San Juan's Centro Medico hospital as the "hub" of activity with six "spokes" delivering care to the island's population centers, and placed Federal Medical Stations in tents in front of each hospital, including the "hub."

USNS Comfort Deployed to Puerto Rico to Respond to Hurricane Maria

The USNS Comfort is a seagoing medical treatment facility that had more than 850 medical and support staff embarked as part of the public health and medical services response to Hurricane Maria in Puerto Rico, according to Department of Defense (DOD) officials. DOD officials stated that approximately 2,000 patients in Puerto Rico were provided care on the USNS Comfort during the course of its 45-day relief mission that began in early October 2017. The USNS Comfort's primary mission is to provide an afloat, mobile, medical-surgical facility to the U.S. military that is flexible, capable, and uniquely adaptable to support expeditionary warfare. The ship's secondary mission is to provide full hospital services to support U.S. disaster relief and humanitarian operations worldwide.

DOD's Area Support Medical Companies provided trauma, medical, and surgical care to populations in Puerto Rico after Hurricane Maria. Among other medical facilities, DOD also provided a Combat Support Hospital to Puerto Rico 3 weeks following Hurricane Maria—which consisted of 44 beds with emergency medical technicians; an operating room, laboratory, pharmacy, and X-ray machine; and primary care and intensive care capabilities. DOD also sent the USNS Comfort—a hospital ship maintained by the U.S. Navy that served as a mobile, floating hospital—to help relieve the hospitals in Puerto Rico.

VA medical staff.
VA deployed medical personnel through its Disaster Emergency Medical Personnel System—VA's main deployment program for clinical and non-clinical staff to an emergency or disaster—to assist ASPR with staffing the Federal Medical Stations. According to VA officials, these personnel worked side by side with other federal personnel, such as Disaster Medical Assistance Teams, to provide medical assistance.

Hurricanes Irma and Maria Highlighted Key Deficiencies in ASPR's Emergency Response Leadership

Our review identified several key deficiencies in ASPR's leadership of the federal public health and medical services response to Hurricanes Irma and Maria in the U.S. Virgin Islands and Puerto Rico that could adversely affect future large-scale responses unless they are addressed.

Limited ASPR presence in the U.S. Virgin Islands. As the primary agency, ASPR is responsible for coordinating the ESF#8 response, including coordinating with support agencies and officials at operations centers. Further, FEMA's ESF#8 statement of work for ASPR states that HHS should provide appropriate personnel at emergency operations centers near disaster sites to lead an ESF#8 response. HHS officials maintained that the Department is not required to address all capabilities in the ESF#8 statement of work, as the actual response provided by HHS depends on other factors, such as resource availability.

Emergency Operations Center

An emergency operations center is a physical location where responders, including federal and state/territory responders, as well as nongovernmental responders, can meet to coordinate information and resources to support incident management (on-scene operations) during a response. According to Department of Homeland Security documentation, decision makers gather at emergency operations centers to ensure they receive the most current information, which allows for improved communication and decision-making during a response.
During the initial weeks after the hurricanes, ASPR liaison officers were not always stationed at the emergency operations centers in St. Thomas and St. Croix. Instead, the liaisons rotated between the emergency operations center, hospital, and airport on each island to manage patient evacuations, or stayed at the hospital, according to ASPR officials. This led to confusion with regard to the ESF#8 response status on the ground, according to FEMA, DOD, and territory health officials. For example, FEMA officials stated that when they needed information on patients' health needs and evacuation status, they had to spend time trying to locate an ASPR liaison officer to obtain it. The FEMA officials then had to relay this information to DOD, territory health officials, and hospital representatives, who were making numerous requests for this information to FEMA in ASPR's absence at the centers. FEMA officials stated that relaying medical information was outside their areas of expertise, as were other activities they conducted in ASPR's absence, such as addressing public health issues at shelters. One FEMA official stated that he had to read handwritten notes from the hospital that contained patient information, such as vitals and prescription needs, and provide this information to other responders. Without a medical background, he did not know the meaning of many of the medical terms used. Furthermore, these FEMA officials stated that given that communication systems were down on the islands, having a reliable, physical presence at the emergency operations centers in St. Thomas and St. Croix became even more critical. A few weeks into the response, ASPR liaison officers were stationed at emergency operations centers, according to ASPR officials, but the officers generally rotated about every 2 weeks with limited time to hand off information and were often not from Region II.
This limited ASPR's leadership of the response and put undue resource strain on other responders, according to FEMA and territory health officials. For example, according to FEMA and U.S. Virgin Islands health officials, the liaison officer would not necessarily understand the big picture, the tasks to be done, or the players involved. Thus, FEMA and territory health officials would have to take time to bring the ASPR liaison officer up to speed on the pressing public health and medical services issues, and shortly thereafter the officer would leave to be replaced by someone else, who would also need to be brought up to speed. ASPR officials provided two different reasons for the staffing challenges encountered at the emergency operations centers in the U.S. Virgin Islands. First, some ASPR officials cited personnel resource constraints. Specifically, these officials stated that ASPR personnel had already been deployed multiple times, given the prior hurricane (Hurricane Harvey) and concurrent events that ASPR was responding to in multiple locations. As a result, officials said there was not enough time to educate rotating officials on issues faced in the U.S. Virgin Islands and deployments were shorter than ideal. Second, other ASPR officials stated that a lack of transportation from Puerto Rico to the U.S. Virgin Islands may have resulted in minimal overlap of liaison officers. According to these officials, they had to request such transportation from FEMA, and FEMA did not always prioritize their needs, since it was also managing transportation needs from other ESFs. However, FEMA officials contested this, stating there was ample opportunity for ASPR liaison officers to get to the U.S. Virgin Islands. In retrospect, ASPR officials acknowledged that staffing emergency operations centers, as well as other strategic locations, is ideal.
ASPR documentation after the response states that the officers' presence at emergency operations centers is important because they need to be working at the operational and tactical levels on the ground. In addition to staffing emergency operations centers, ASPR officials agreed with statements from FEMA and DOD officials who told us that the ideal scenario would be to have at least one other liaison officer (if not more) to support the lead liaison officer at all strategic locations. The officials noted that the number of liaison officers may vary depending on the response needs. In the case of patient evacuations, for example, this would include having a liaison officer at the airport and one at the hospital, in addition to the lead at the emergency operations center. In contrast, DOD officials stated that after Hurricane Irma, one ASPR liaison was on St. Croix trying to manage all the ESF#8 activities, including patient evacuations and hospital assessments, which was too much for one person. In May 2019, ASPR officials told us they have a long-term goal of creating an incident response team that will comprise 17 full-time response personnel. If implemented, this strategy may allow ASPR to provide more liaisons on the ground during a response and address the staffing deficiency we identified. However, ASPR officials did not provide us with a draft strategy or a timeline for the creation of such a team. Until ASPR develops a response personnel strategy to ensure it has sufficient liaison officers available to consistently lead a response from emergency operations centers and other strategic locations, the agency risks repeating the challenge encountered in the U.S. Virgin Islands—notably, a situation with inadequate liaison officer presence to effectively lead a response on the ground.

Delay in tracking evacuated patients. Tracking NDMS evacuated patients and ensuring their care is a critical component of the public health and medical services response.
The ESF#8 Annex of the National Response Framework states that patients should be tracked from their point of entry into NDMS. However, our review found that ASPR did not track patients evacuated through NDMS from the U.S. Virgin Islands to Puerto Rico immediately after Hurricane Irma. This occurred because of delays in getting HHS tracking personnel to the territories, according to VA documentation, as well as ASPR, DOD, VA, FEMA, and U.S. Virgin Islands Department of Health officials. Specifically, HHS teams that track patients were not deployed to the region until about 5 days after patients were already being evacuated through NDMS. These teams are (1) Joint Patient Assessment and Tracking System (JPATS) teams, which enter patient information into JPATS—ASPR’s tracking system—and (2) service access teams, which track and monitor the status of evacuated patients, including facilitating movement to home or other final destination after being discharged from care. As a result of the delayed deployment of the tracking teams, ASPR officials did not initially know the locations of some NDMS evacuated patients in Puerto Rico. For example, once in Puerto Rico, the service access teams had to drive around the territory looking for evacuees, according to ASPR officials. ASPR officials explained that there was a delay in tracking patients after Hurricane Irma because it takes time for JPATS and service access teams to deploy to a region. ASPR officials told us that they did not pre-deploy the tracking teams before the hurricane, because the U.S. Virgin Islands officials did not request ASPR’s help with patient evacuations until after Hurricane Irma hit. ASPR officials also stated that at the time of the hurricanes, the agency had no policy for tracking patients from the start of NDMS evacuations; however, since the hurricanes, the agency has developed a federal patient movement framework that may help prevent future delays in patient tracking. 
This framework describes the pre-deployment of JPATS and service access teams, which would allow for tracking to start at the beginning of NDMS evacuations. ASPR officials told us this is the optimal solution. However, during an event such as a hurricane, sufficient notice for pre-deployment is not always possible. One option identified in ASPR's federal patient movement framework is for FEMA to track patients initially and share these data with ASPR, and for DOD to provide patient movement manifests to ASPR so that, once the JPATS teams are deployed, the data can be manually entered into JPATS, which will contain the overall dataset for patient tracking. By working with DOD and FEMA, ASPR may be able to consistently track patients from the start of evacuations even when there is a deployment delay in HHS's own tracking capabilities. While ASPR's development of the framework is an important step forward to address delays in patient tracking, ASPR has not exercised the framework with its NDMS partners to ensure it is sufficient and reliable. For example, given the potential need to manually enter information into JPATS, there could still be a delay in HHS knowing where patients are located and being able to inform family members. An exercise of the framework could help determine if this is indeed a concern that needs to be addressed. We have previously reported that exercises are a key tool for testing and evaluating preparedness. ASPR officials told us that exercising the framework prior to the next hurricane season had been discussed, but as of May 2019, nothing had been scheduled. Without a framework that has been exercised with the other agencies involved in federal patient movement and tracking, ASPR risks delays in patient tracking when conducting future NDMS patient evacuations.

Final status of one-fourth of evacuated patients not readily available. The ESF#8 Annex of the National Response Framework states that NDMS evacuated patients should be tracked to their final disposition.
Further, federal internal control standards stress the importance of information controls to ensure quality information is used to achieve objectives, which includes information that is complete and accurate. However, we found that of the approximately 800 patients evacuated through NDMS during the response to Hurricanes Irma and Maria, ASPR could not readily provide us with the final status of approximately 200. ASPR officials stated they did not have information indicating the final status of the 200 evacuated patients, because case workers are not required to report this information to ASPR. ASPR officials explained that the case workers on the service access teams deployed during the response are responsible for keeping track of patients' final status. However, we found that without conducting a review of files in which the case workers recorded patients' final status, ASPR officials could not determine if the patients were appropriately discharged and returned to the U.S. Virgin Islands, left the system against medical advice, or were otherwise unaccounted for. Additionally, as of June 2019, ASPR did not provide documentation indicating the steps the agency takes to ensure the data held by case workers are accurate. Until ASPR has controls in place to ensure that data on NDMS evacuated patients are complete and accurate, the agency cannot ensure it is sufficiently tracking all NDMS evacuated patients and risks losing track of patients when conducting future patient evacuation efforts.

Limited focus on chronic and primary care needs in isolated locations. As the coordinator, ASPR is responsible for ensuring that appropriate planning and preparedness activities are undertaken. This includes planning for the care of elderly and chronically ill patients in isolated areas. Our review found that at the time of the hurricanes, ASPR Region II's response plans for the U.S.
Virgin Islands and Puerto Rico—known as Incident Response Plans—did not account for the need for chronic and primary care in isolated communities. This type of care was greatly needed, given that many people, especially the elderly, could not easily access hospitals, according to officials from ASPR, DOD, the Puerto Rico Department of Health, and three stakeholders we interviewed. Consistent with the views of these officials, the HHS Deputy Inspector General reported that during Hurricane Maria, hundreds of patients across Puerto Rico sought access to urgent care, primary care, and pharmacy services at community-based health care centers, known as Federally Qualified Health Centers, because they could not travel to hospitals for treatment. Further, we reported in May 2019 and heard from two stakeholders that because of the widespread power outages and infrastructure damage in both territories, the chronically ill often did not have access to electricity to power their medical devices—such as ventilators—and gasoline to run generators was scarce. ASPR’s initial response activities—which generally focused on supporting the hospitals and patients with acute care needs—were based on response plans with assumptions that did not hold true given the unprecedented level of destruction in the areas. Specifically, according to ASPR officials, the agency focused its response planning on managing the surge of patients at hospitals, assuming that individuals would make their way to hospitals, and projecting that smaller communities could care for one another until further needs assessments could be conducted. For example, ASPR Region II and Puerto Rico health officials assumed in their planning that patients in the harder to reach areas, such as the mountainous areas, would make their way to the coast where hospital care was available, according to ASPR officials. 
ASPR officials also stated that preparedness planning for an immediate response is generally focused on managing the surge of patients at hospitals, with the assumption that after about a week into the response, assessments would be conducted to determine other needs, such as chronic care needs. However, ASPR officials told us that in retrospect, the planning and the assumptions used for planning for the U.S. Virgin Islands and Puerto Rico were not adequate given the unprecedented level of destruction in the areas, which affected communications and transportation. FEMA officials also said that given how difficult it was to assess the situation in Puerto Rico after Hurricane Maria, having prior knowledge of the situation on the ground that could affect the response (such as the general public health and medical needs in the territories during non-disaster times) was a lesson learned that applies to them as well as to ASPR. ASPR has taken steps to better account for the need for chronic and primary care in isolated communities in future public health and medical services responses. However, these efforts have not been finalized or incorporated into ASPR Region II Incident Response Plans for the territories, which, according to a lead HHS Region II official, are internal agency plans that serve as a playbook for HHS officials during an ESF#8 response in these territories. Specifically, ASPR is working with Puerto Rico Department of Health officials to map the locations of health care facilities in Puerto Rico—such as clinics, Federally Qualified Health Centers, urgent care centers, and hospitals—including their bed, generator, communication, and surge capacities. This is the first time all such information has been brought together, and ASPR continues to work on this effort as it helps the territory recover, according to agency documentation.
ASPR officials also told us that moving forward they would like to involve Federally Qualified Health Centers in planning and response activities, including involving them in the provision of primary care during responses. We agree that these are important steps that ASPR can take to address this deficiency. However, until ASPR Region II Incident Response Plans for the territories include the provision of chronic and primary care in isolated communities (for example, by incorporating Federally Qualified Health Centers or other local health clinics into these plans), there is a risk that disaster survivors will not receive needed care.

Misalignment of support agencies' capabilities to response needs. As the coordinator, ASPR is responsible for ensuring that appropriate planning and preparedness activities are undertaken, including monitoring the progress in meeting the ESF#8 core capabilities. Further, FEMA guidance issued in June 2015 states that each ESF coordinator should maintain a capabilities inventory for the ESF. However, our review found that ASPR did not have a sufficient understanding of ESF#8 support agencies' capabilities prior to the hurricanes. Consequently, ASPR's resource needs for the response in the U.S. Virgin Islands and Puerto Rico were not always aligned with the resources its support agencies—DOD, VA, and FEMA—could provide. According to ASPR documentation and DOD officials, this resulted in some deployed resources not being properly and efficiently utilized. As an example of the misalignment of resources, DOD officials told us that, through FEMA, ASPR requested that DOD provide stand-alone medical assistance teams (i.e., teams of medical personnel and equipment, similar to ASPR's Disaster Medical Assistance Teams) to deliver medical care to the hurricane survivors in the U.S. Virgin Islands and Puerto Rico.
However, since DOD does not have stand-alone teams, it deployed Area Support Medical Companies, which included facilities, equipment, and supply packages. These teams are equipped to serve the military population—those approximately 18-60 years of age, wounded, and requiring trauma and medical-surgical care. However, trauma and medical-surgical care was not the primary need in the islands, which, in general, have an older population with chronic and primary care needs. ASPR documentation also shows that ASPR had trouble defining how FEMA and DOD assets fit into the overarching ESF#8 response. For example, ASPR documentation states that it took the agency nearly a week to fully realize that the two Area Support Medical Companies provided by DOD were not equivalent to the five stand-alone medical assistance teams that HHS had requested. According to DOD officials, the misalignment of resources during the response was troublesome as the Department's involvement in the ESF#8 response activities affected patient care for military health beneficiaries and potentially increased overseas contingency response risks for the Department. In another example, during the response, there were conflicting expectations about VA personnel's role in supporting the Federal Medical Stations, with VA responders thinking they would run shelter operations and ASPR believing the VA staff would support medical operations, according to ASPR documentation. According to ASPR officials, the agency had never anticipated needing—and therefore did not plan for—certain ESF#8 agency support, such as teams similar to ASPR's Disaster Medical Assistance Teams. ASPR's role in a response has traditionally been to support states or territories; however, because of the catastrophic nature of the hurricanes, ASPR effectively led the territories in the response as opposed to playing a supporting role. ASPR's response system was not designed to handle such a large role, according to officials.
Since the hurricanes, ASPR has taken steps to understand the resources available from its support agencies, but ASPR officials agreed that it is an activity that the agency needs to continue to undertake. Specifically, ASPR officials stated that the agency is currently working with its NDMS partners (FEMA, DOD, and VA) to develop memorandums of agreement that outline the roles and responsibilities of each organization; however, the discussions are in the preliminary stages as ASPR continues to collaborate with each organization to understand their resource gaps and capabilities. Continuing to understand each ESF#8 support agency’s potential capabilities and its limitations—knowing that the actual capacity of these capabilities may fluctuate—is important, as evidenced by the misalignment that occurred during the response. Until ASPR can better identify the capabilities and limitations of support agencies to meet ESF#8 core capabilities, ASPR cannot, as the coordinator, determine whether the ESF is prepared for future disasters. Reliance on DOD support. As the coordinator, ASPR is responsible for ensuring that appropriate planning and preparedness activities are undertaken. This includes planning for a scenario in which DOD assistance is unavailable. We have previously reported that DOD provided much of the ESF#8 support during the initial response to Hurricanes Irma and Maria, which may not always be available in future responses. DOD’s support included providing the core capabilities of patient care (through the provision of Area Support Medical Companies, among other medical facilities) and patient evacuations (through the provision of personnel and transportation to conduct aeromedical evacuations), as mentioned above. We found that ASPR does not have a response strategy that will account for the core capabilities needed to be filled by itself or other support agencies in a large or long-term ESF#8 response if DOD were unable to assist. 
For example, DOD’s 2017 hurricane after-action report included reliance on DOD as a concern and recommended that HHS and FEMA establish contracts with the commercial sector to ensure the federal government has other options available for larger ESF#8 responses should DOD not have the needed capability or available capacity. Similarly, in September 2018, we reported that ESF lead agencies’ (including ASPR for ESF#8) dependence on DOD capabilities was a challenge for DOD during the response to Hurricanes Irma and Maria. We reported that the increased reliance may create vulnerability, if in the future, DOD capabilities are needed to conduct its primary mission—to defend the nation from threats—at the same time its support is needed for a domestic disaster response. ASPR told us that it does not have a contingency plan for a response in DOD’s absence, because for large-scale events, such as Hurricanes Irma and Maria, ASPR has to rely on DOD, given ASPR’s own resource constraints. ASPR officials stated that, in general, ASPR’s resource response capacity—personnel and supplies—can support a response to two simultaneous events that occur in different areas in the Continental United States for 30 days. Beyond that, ASPR has to rely on other agencies, including DOD, which occurred with Hurricanes Irma and Maria. However, ASPR officials did state that the agency has recently taken some steps to reduce its reliance on DOD. Specifically, in September 2018, ASPR entered into a contract with a private company to provide medical personnel teams similar to Disaster Medical Assistance Teams that can be utilized to supplement ASPR response personnel, especially if DOD resources are not available. Similarly, to assist with future patient evacuations, in October 2018, the agency entered into contracts with private companies for commercial air ambulance transport. 
In addition, ASPR officials told us that through ASPR's participation in the Whole of Government Logistics Council, the agency has begun to further discuss air transport options during major disasters with other agencies including FEMA, DOD, and VA. However, ASPR officials also stated there is a need to hold discussions with all agencies involved in the ESFs to prioritize and coordinate air transportation during a response in the event that DOD is not available. While these are important steps to potentially minimize reliance on DOD, ASPR's own capacity constraints make it all the more important for ASPR to develop a response strategy that includes other support agencies in the event that DOD support is unavailable. For example, such a strategy could involve conducting an exercise to simulate a large-scale ESF#8 response without DOD capabilities. Until ASPR develops a strategy demonstrating how ESF#8 core capabilities can be provided through HHS and its support agencies without DOD's assistance, it risks being unprepared to respond to a large-scale disaster or multiple disasters if they occur when DOD's capabilities are limited due to other events, such as military missions. While ASPR Has Completed a Draft After-Action Report to Evaluate Its Response, It Is Missing Key Perspectives ASPR completed a draft after-action report in February 2018 after several months of collecting feedback from HHS staff on the strengths and areas for improvement in the agency's 2017 ESF#8 response activities; however, the draft is missing the perspectives of key parties involved in the response. Not collecting the perspectives of key parties involved in the response is inconsistent with federal standards for information and communication, which state that management needs access to relevant information from external parties to help achieve objectives and address related risks. 
Further, the Standard for Program Management states that program managers should actively engage key stakeholders throughout the life cycle of a program, which would include any evaluation activities. Specifically, when collecting feedback, ASPR did not reach out directly to support agencies, territorial governments in the U.S. Virgin Islands and Puerto Rico, or other stakeholders intimately involved in the response. Instead, ASPR gathered observations through facilitated discussions, or “hotwashes,” with HHS personnel stationed at key response sites in headquarters and the field, such as personnel stationed at the HHS Secretary’s Operations Center and those stationed at medical sites in Puerto Rico. In addition, ASPR distributed an electronic feedback link to all personnel involved in the HHS ESF#8 response, both in the field and headquarters. ASPR officials stated they did not obtain feedback directly from outside parties, such as support agencies or territorial governments, during the after-action review because the review was focused on internal aspects of the HHS response. Instead, the officials said that FEMA—as the overall lead for the federal response—typically writes the overall after-action report for the whole federal government, and those perspectives would be captured there. However, FEMA’s after-action report was focused only on its response activities for the 2017 hurricanes and did not include any strengths or areas for improvement related to ESF#8. Because ASPR did not obtain feedback from its ESF#8 support agencies and other partners, its draft after-action report dated February 2018 has key gaps in its assessment. For example, three of the deficiencies we identified based on our review of documentation and interviews with agency and territory officials—the delay in tracking evacuated patients, the final status of some evacuated patients not readily available, and the reliance on DOD support—were not included in ASPR’s draft after-action report. 
This indicates that key perspectives, and related lessons learned, were missing from ASPR’s after-action review. Similarly, FEMA officials said that during the course of soliciting feedback on its own response actions, FEMA’s provider of NDMS medical evacuation transportation for Hurricanes Irma and Maria said that if ASPR had reached out, it would have identified challenges with the NDMS patient evacuations conducted. In particular, the provider told FEMA that patients were evacuated to an airport in the continental United States with limited hours of availability, and if patients had to be evacuated outside of those hours, they were sent to other airports with inadequate medical care, so the patients needed to be transported again as a result. Without an after-action report that includes the perspectives of all key parties—including ESF#8 support agencies—ASPR management is likely to lack the necessary information to comprehensively identify all strengths and areas for improvement of its ESF#8 response. Conclusions The catastrophic destruction encountered as a result of Hurricanes Irma and Maria proved overwhelming to the U.S. Virgin Islands and Puerto Rican governments and resulted in a large federal disaster response, complicated by losses of power, communication, transportation, and health care infrastructure in the territories. ASPR and its support agencies, such as DOD, undertook numerous actions to address the public health and medical needs in the territories—including evacuating critical care and dialysis patients from the U.S. Virgin Islands and Puerto Rico. Nevertheless, key deficiencies with ASPR’s leadership of the response resulted in confusion and resource strain among responders from support agencies and territory health departments at emergency operations centers in the U.S. Virgin Islands. 
The deficiencies also resulted in service access teams having to search for evacuated patients, ASPR’s inability to readily and reliably identify the final status of all evacuated patients, and disaster survivors in isolated areas potentially not receiving needed health care. ASPR’s leadership also led to an inefficient use of federal resources. Many of the deficiencies were a function of ASPR policy and its preparedness planning, and as such, they could be repeated unless ASPR addresses them. Additionally, the agency remains unprepared to respond to future large-scale disasters if DOD is unavailable. Further, the likelihood that deficiencies will recur in future responses increases, because ASPR did not include feedback from the support agencies involved in the response in its after-action report. Recommendations for Executive Action We are making the following seven recommendations to the Assistant Secretary for Preparedness and Response: ASPR should develop a response personnel strategy to ensure, at a minimum, a lead ASPR liaison officer is consistently at the local emergency operations center(s) during an ESF#8 response and another liaison, if not more, is at strategic location(s) in the area. (Recommendation 1) As ASPR finalizes its federal patient movement framework, the agency should exercise the framework with its NDMS partners to ensure that patients evacuated through NDMS will be consistently tracked from the start of their evacuation. (Recommendation 2) ASPR should put controls in place to ensure data on all NDMS evacuated patients are complete and accurate. (Recommendation 3) ASPR Region II should revise its Incident Response Plans for the territories to include strategies for providing chronic and primary care in isolated communities. These strategies could include the incorporation of Federally Qualified Health Centers and other local health clinics as part of a response. 
(Recommendation 4) ASPR should work with support agencies to develop and finalize memorandums of agreement that include information on the capabilities and limitations of these agencies to meet ESF#8 core capabilities. (Recommendation 5) ASPR should develop a strategy demonstrating how its ESF#8 core capabilities can be provided through HHS and ESF#8 support agencies if DOD's capacity to respond is limited. (Recommendation 6) ASPR should take steps to ensure the perspectives of key external parties are incorporated in the development of HHS's after-action reviews, following future ESF#8 activations. (Recommendation 7) Agency Comments and Our Evaluation We provided a draft of this report for advance review and comment to HHS, DOD, the Department of Homeland Security, VA, and the governments of the U.S. Virgin Islands and Puerto Rico. HHS and VA provided written comments, which we have reprinted in appendixes I and II, respectively. HHS concurred with five of our seven recommendations and stated that it had, or was in the process of, taking action. While we made no recommendations to VA, in its comments VA stated that it looks forward to working with HHS on matters we have presented in this report. HHS and DOD provided technical comments, which we incorporated as appropriate. U.S. Virgin Islands and Puerto Rican government officials stated they had no comments on the draft report. HHS did not concur with a recommendation in the draft report directing ASPR to develop and finalize ESF#8 response plans for the territories that include strategies for providing chronic and primary care in isolated communities. In its comments, HHS stated that while ASPR has federal plans in place that guide federal response, each state and locality is responsible for developing its own individual plans. We modified the language in our report and our recommendation to clarify we are referring to ASPR Region II's Incident Response Plans for the U.S. Virgin Islands and Puerto Rico. 
According to a lead ASPR Region II official, these plans are internal agency plans that serve as a playbook for HHS officials during an ESF#8 response in these territories. However, as we reported, these plans do not account for the provision of chronic and primary care in isolated communities. Accordingly, we believe our recommendation is warranted. HHS also did not concur with a recommendation in the draft report that ASPR work with support agencies to develop an inventory to identify the capabilities and limitations of support agencies to meet ESF#8 core capabilities. According to HHS, such an inventory will be out of date immediately after development due to world events and changes in investments, technologies, and priorities. Instead, HHS proposed the continued use of interagency liaison officers at the HHS emergency operations center, as they can provide real-time updates on available resources during a response. We agree that HHS should continue this practice in future responses. However, as is evidenced by the misalignment that we identify in our report, this action was not adequate during the response to Hurricanes Irma and Maria in the U.S. Virgin Islands and Puerto Rico. Further, as we reported, ASPR officials acknowledged that more needs to be done to better understand the resources available from its support agencies. To clarify the intent of our recommendation—that is, that ASPR take steps to ensure it has a sufficient understanding of each ESF#8 support agency’s potential capabilities and its limitations—we modified language in our report and the recommendation. Specifically, we modified our recommendation to direct ASPR to include information on the capabilities of these agencies as it works to develop and finalize memorandums of agreement with support agencies. The memorandums of agreement that ASPR is beginning to draft with support agencies provide an opportunity to begin to address this issue. 
As we have reported, taking such action is needed to help ensure that future ESF#8 responses are more efficiently and effectively coordinated. We are sending copies of this report to the appropriate congressional committees, the Secretaries of Health and Human Services, Defense, Homeland Security, Veterans Affairs, and the Interior, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or DeniganMacauleyM@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Appendix I: Comments from the Department of Health and Human Services Appendix II: Comments from the Department of Veterans Affairs Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Mary Denigan-Macauley, (202) 512-7114 or DeniganMacauleyM@gao.gov. Staff Acknowledgments In addition to the contact named above, Kelly DeMots (Assistant Director), Deirdre Gleeson Brown (Analyst-in-Charge), Kenisha Cantrell, Justin Cubilo, and Rebecca Hendrickson made key contributions to this report. Also contributing were Sam Amrhein, Kaitlin Farquharson, and Vikki Porter. Related GAO Products Disaster Response: FEMA and the American Red Cross Need to Ensure Key Mass Care Organizations are Included in Coordination and Planning. GAO-19-526. Washington, D.C.: September 19, 2019. Disaster Response: Federal Assistance and Selected States and Territory Efforts to Identify Deaths from 2017 Hurricanes. GAO-19-486. Washington, D.C.: September 13, 2019. Disaster Assistance: FEMA Action Needed to Better Support Individuals Who Are Older or Have Disabilities. GAO-19-318. Washington, D.C.: May 14, 2019. 
2017 Disaster Contracting: Actions Needed to Improve the Use of Post- Disaster Contracts to Support Response and Recovery. GAO-19-281. Washington, D.C.: April 24, 2019. 2017 Hurricane Season: Federal Support for Electricity Grid Restoration in the U.S. Virgin Islands and Puerto Rico. GAO-19-296. Washington, D.C.: April 18, 2019. Disaster Recovery: Better Monitoring of Block Grant Funds is Needed. GAO-19-232. Washington, D.C.: March 25, 2019. Puerto Rico Hurricanes: Status of FEMA Funding, Oversight, and Recovery Challenges. GAO-19-256. Washington, D.C.: March 14, 2019. U.S. Virgin Islands Recovery: Status of FEMA Public Assistance Funding and Implementation. GAO-19-253. Washington, D.C.: February 25, 2019. 2017 Disaster Contracting: Action Needed to Better Ensure More Effective Use and Management of Advance Contracts. GAO-19-93. Washington, D.C.: December 6, 2018. Homeland Security Grant Program: Additional Actions Could Further Enhance FEMA’s Risk-Based Grant Assessment Model. GAO-18-354. Washington, D.C.: September 6, 2018. 2017 Hurricanes and Wildfires: Initial Observations on the Federal Response and Key Recovery Challenges. GAO-18-472. Washington, D.C.: September 4, 2018. Federal Disaster Assistance: Individual Assistance Requests Often Granted but FEMA Could Better Document Factors Considered. GAO-18-366. Washington, D.C.: May 31, 2018. 2017 Disaster Contracting: Observations on Federal Contracting for Response and Recovery Efforts. GAO-18-335. Washington, D.C.: February 28, 2018. Disaster Assistance: Opportunities to Enhance Implementation of the Redesigned Public Assistance Grant Program. GAO-18-30. Washington, D.C.: November 8, 2017.
Why GAO Did This Study Hurricanes Irma and Maria hit the U.S. Virgin Islands and Puerto Rico within two weeks of each other in September 2017, causing catastrophic damage. HHS is responsible for leading the federal public health and medical services response during a disaster, such as these hurricanes. As part of its lead federal role during these hurricanes, HHS called upon support agencies, including the Departments of Defense, Homeland Security, and Veterans Affairs, to assist with the public health and medical services response. GAO was asked to review the federal public health and medical services response to Hurricanes Irma and Maria in the U.S. Virgin Islands and Puerto Rico. This report examines HHS's actions and leadership of this response, among other things. GAO reviewed documentation on the preparedness for, and response to, the hurricanes. It also interviewed federal and territory officials and interviewed or received written responses from eight nonfederal stakeholders involved in the response, such as nongovernmental organizations. GAO identified these stakeholders through research and referrals. What GAO Found The catastrophic destruction encountered as a result of Hurricanes Irma and Maria proved overwhelming to the U.S. Virgin Islands and Puerto Rican governments and resulted in a large federal disaster response, complicated by losses of power, communication, and health care infrastructure. The Department of Health and Human Services (HHS) led the federal public health and medical services response and undertook numerous actions to address the needs in the territories—including evacuating critical care and dialysis patients from the U.S. Virgin Islands and Puerto Rico and providing medical personnel and facilities. However, GAO identified several shortcomings in HHS's leadership. 
While the scale, location, and timing of these storms complicated response efforts, the deficiencies GAO identified were in many cases a function of preparedness policies, or lack thereof. As a result, they could adversely affect future large-scale responses unless addressed. For example, as the lead agency, HHS is responsible for ensuring that appropriate planning activities are undertaken, including monitoring the federal ability to provide core public health and medical services response capabilities. However, GAO found that HHS did not have a full understanding of the capabilities and limitations of its support agencies, including the Departments of Defense, Homeland Security, and Veterans Affairs. Consequently, HHS's needs were not always aligned with the resources that its support agencies could provide, resulting in some deployed resources not being properly and efficiently utilized. For example, HHS requested Department of Defense medical teams, but these teams specialized in trauma and surgical care, not the chronic and primary care needed. HHS lacked plans for the territories that accounted for the chronic and primary care needs in isolated communities. This care was greatly needed, given that many, especially the elderly, could not easily access hospitals. What GAO Recommends GAO is making seven recommendations, including that HHS develop agreements with support agencies that include response capability and limitation information, and develop response plans for providing care in isolated communities. HHS disagreed with two of the seven, citing, among other things, territory responsibility for plans. GAO clarified the intent of the two recommendations and believes that all seven are warranted.
Background Life Cycle of Oil and Gas Wells Oil and gas exploration and production involves disturbing lands in several ways. For example, when operators drill oil and gas wells, they typically remove topsoil and construct a well pad, where the drilling rig will be located. Other equipment on-site can include generators and fuel tanks. In addition, reserve pits are often constructed to store or dispose of water, mud, and other materials that are generated during drilling and production, and roads and access ways are often built to move equipment to and from the wells. Once wells cease production, which may occur many decades after they are drilled, they can become inactive. Inactive wells have the potential to create physical and environmental hazards if operators do not properly reclaim them, a process that may involve plugging the well, removing structures, and reshaping and revegetating the land around the wells. For example, inactive wells that are not properly plugged can leak methane into the air or contaminate surface water and groundwater. Well sites that are not properly reclaimed can contribute to habitat fragmentation and soil erosion, and equipment left on-site can interfere with agricultural land use and diminish wildlife habitat. Costs for well reclamation vary widely and are affected by factors such as the depth of the well. Although BLM does not estimate reclamation costs for all wells, it has estimated reclamation costs for thousands of wells whose operators have filed for bankruptcy. Based on our analysis of these estimates, we identified two cost scenarios: low-cost wells typically cost about $20,000 to reclaim, and high-cost wells typically cost about $145,000 to reclaim. BLM’s Bonding Regulations and Policies As shown in figure 1, BLM regulations or policies outline how BLM is to initially collect bonds from operators, review bonds, and ultimately return the bond to the operator or use it to cover costs of reclamation. 
Bonds collected from operator. BLM regulations require operators to submit a bond to ensure compliance with all of the terms and conditions of the lease, including, but not limited to, paying royalties and reclaiming wells. BLM regulations generally require operators to have one of the following types of bond coverage: individual lease bonds, which cover all of an operator’s wells under one lease; statewide bonds, which cover all of an operator’s leases and operations in one state; or nationwide bonds, which cover all of an operator’s leases and operations nationwide. (See figure 2.) BLM can accept two types of bonds: surety bonds and personal bonds. A surety bond is a third-party guarantee that an operator purchases from a private insurance company approved by the Department of the Treasury. The operator pays a premium to the surety company that can vary depending on various factors, including the amount of the bond and the assets and financial resources of the operator. If operators fail to reclaim their wells, the surety company is responsible for paying BLM up to the amount of the bond to help offset reclamation costs. A personal bond must be accompanied by one of the following financial instruments: certificates of deposit issued by a financial institution whose deposits are federally insured, granting the Secretary of the Interior full authority to redeem it in case of default in the performance of the terms and conditions of the lease; cashier’s checks; negotiable Treasury securities, including U.S. 
Treasury notes or bonds, with conveyance to the Secretary of the Interior of full authority to sell the security in case of default in the performance of the lease’s terms and conditions; or irrevocable letters of credit that are issued for a specific term by a financial institution whose deposits are federally insured and meet certain conditions and that identify the Secretary of the Interior as sole payee with full authority to demand immediate payment in case of default in the performance of the lease’s terms and conditions. BLM bond reviews. BLM regulations provide flexibility to increase bonds above minimums and require increases above minimum amounts if operators meet certain criteria. Specifically, BLM regulations require BLM to increase the bond amount when an operator who applies for a new drilling permit had previously failed to reclaim a well in a timely manner. For such an operator, BLM must require a bond in an amount equal to its cost estimate for reclaiming the new well if BLM’s cost estimate is higher than the regulatory minimum amount. BLM regulations also authorize increases in the bond amount—not to exceed the estimated cost of reclamation and any royalties or penalties owed—whenever the authorized officer determines that the operator poses a risk due to factors such as that the expected reclamation costs exceed the present bond. In response to our previous recommendation in 2011 that BLM develop a comprehensive strategy to revise its bond adequacy review policy to more clearly define terms and conditions that warrant a bond increase, BLM issued a bond adequacy review policy in July 2013, Instruction Memorandum 2013-151. The policy contained directives for conducting reviews when bonds meet certain criteria. 
Specifically, the 2013 bond adequacy review policy called for field offices to, among other things, review each bond at least every 5 years to determine whether the bond value appropriately reflected the level of potential risk posed by the operator. If it did not, authorized officers were to propose an increase (or decrease) in the bond value. In November 2018, BLM issued a revised bond adequacy review policy, Instruction Memorandum 2019-014, which supersedes the 2013 policy. The 2018 policy continues to call for field offices to review each bond at least every 5 years, but it revised the point system worksheet that field offices are to use when determining whether a bond increase (or decrease) is warranted. Also, in response to our 2018 recommendation that BLM ensure that the reviews of nationwide and statewide bonds reflect the overall risk presented by operators, the 2018 policy calls for additional coordination between BLM headquarters, state offices, and field offices when reviewing nationwide and statewide bonds. BLM returns or uses bond. If operators reclaim their wells, BLM returns the bond to the operator. Many decades may pass between when BLM collects a bond and when it is returned. If operators do not reclaim their wells, BLM may redeem the certificate of deposit, cash the check, sell the security, or make a demand on the letter of credit to pay the reclamation costs. Liability for reclaiming a well on onshore federal lands can fall to either the lease holder or the operator, and BLM may also hold past owners or operators liable. The liability for past owners or operators extends only to reclamation obligations that accrued before BLM approved the transfer of their lease to a subsequent lessee. They are not liable for reclamation and lease obligations incurred after that transfer is approved. 
Average Bond Values Per Well Were Slightly Lower in 2018 as Compared to 2008 Based on our review of BLM data, the value of bonds held by BLM for oil and gas operations on a per-well basis was slightly lower in 2018 as compared to 2008. Although the total value of bonds held by BLM for oil and gas operations was higher in 2018 than in 2008 (about $204 million compared to about $188 million, in 2018 dollars), the average bond value per well was slightly lower because the number of wells on federal land was also higher in 2018 than in 2008 (96,199 wells compared to 85,330). Specifically, in 2008, BLM held bonds worth an average of $2,207 per well in 2018 dollars. In 2018, BLM held bonds worth an average of $2,122 per well, a decrease of 3.9 percent as compared to 2008 (see table 1). BLM bonds do not typically cover an individual well; however, we calculated the average bond value on a per-well basis (bond amount divided by the number of wells covered by the bond) to compare the value over time adjusted for the increased number of wells. When reporting on all wells, we calculated the average bond value per well as the aggregate value of all BLM bonds divided by the total number of producible well bores. Appendix I provides additional information on our scope and methodology. We also calculated the average bond value per well by bond category for bonds that were linked to wells in the data. We found that, on average, as of 2018 an individual lease bond covered about 10 wells, a statewide bond covered about 49 wells, and a nationwide bond covered 374 wells. However, some bonds cover more than the typical number of wells and some fewer. As of 2018, individual lease bonds had the highest average bond value per well at $2,691, and nationwide bonds had the lowest average bond value per well at $890. Statewide bonds had an average bond value per well of $1,592. The share of the total value of bonds held by BLM that are individual lease, statewide, or nationwide bonds differed in 2018 from 2008 (see Figure 3). 
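The per-well calculation described above is simple division: aggregate bond value divided by the number of producible well bores. The following sketch illustrates it using the report's approximate, inflation-adjusted figures; it is not BLM's actual methodology.

```python
# Illustrative sketch (not BLM's methodology) of the per-well calculation.
# Totals are the report's approximate figures in 2018 dollars.
totals = {2008: 188_000_000, 2018: 204_000_000}
wells = {2008: 85_330, 2018: 96_199}

avg = {yr: totals[yr] / wells[yr] for yr in totals}
# The rounded totals only approximately reproduce the report's averages
# of $2,207 (2008) and $2,122 (2018).

# The 3.9 percent decrease follows from the report's exact per-well averages:
decrease = (2122 - 2207) / 2207 * 100
print(round(decrease, 1))  # -3.9
```

Because the number of wells grew faster than total bond value, the per-well average fell even though the aggregate amount rose.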
The share of individual lease bonds was slightly higher in 2018 as compared to 2008 (about 8 percent in 2008 and about 9 percent in 2018). In 2008, statewide bonds represented about 80 percent (approximately $130 million) of the total value of bonds. In 2018, statewide bonds represented about 59 percent of total bond value (approximately $120 million), but this category still represented the largest share of total bond value. In contrast, nationwide bonds were a lower share of total bond value in 2008 (about 6 percent, approximately $10.2 million) than in 2018 (30 percent, approximately $61.8 million). BLM officials told us that changes in the composition of the oil and gas industry may have contributed to these changes in the composition of bonds. In particular, officials said some larger companies may have expanded their operations in recent years, sometimes acquiring smaller companies. Large companies with expansive operations are more likely than small companies to have nationwide bonds because such bonds can cover operations in multiple states, which statewide and individual lease bonds do not. Therefore, an industry shift to larger companies would tend to increase the share of nationwide bonds. Bonds Held by BLM Are Insufficient to Prevent Orphaned Wells Bonds Do Not Provide Sufficient Financial Assurance to Prevent Orphaned Wells Bonds do not provide sufficient financial assurance to prevent orphaned wells for several reasons. First, BLM has identified new orphaned wells— wells whose bonds were not sufficient to pay for needed reclamation when operators or other parties failed to reclaim them. As we reported in May 2018, BLM does not track the number of orphaned wells over time and so cannot identify how many wells became orphaned over specific time frames. However, our analyses of BLM’s orphaned well lists from different years have shown that BLM has continued to identify new orphaned wells since 2009. 
We reported in January 2010 that BLM identified 144 orphaned wells in 2009. Then, in May 2018, we reported that BLM identified 219 orphaned wells in July 2017—an increase of 75 orphaned wells. In April 2019, BLM provided a list of 296 orphaned wells that included 89 new wells that were not identified on the July 2017 list. Bonds are not sufficient to prevent orphaned wells in part because they do not reflect full reclamation costs for the wells they cover. Bonds that are high enough to cover all reclamation costs provide complete financial assurance to prevent orphaned wells because, in the event that an operator does not reclaim its wells, BLM can use the bond to pay for reclamation. On the other hand, bonds that are less than reclamation costs may not create an incentive for operators to promptly reclaim wells after operations cease because it costs more to reclaim the wells than the operator could collect from its bond. We analyzed bonds that are linked to wells in BLM’s data and found that most of these bonds would not cover reclamation costs for their wells. Specifically, we compared the average bond coverage available for these wells to the two cost scenarios we described above. About 84 percent of these bonds—covering 99.5 percent of these wells—would not fully cover reclamation costs under the low-cost scenario (these bonds have an average value per well of less than $20,000). Less than 1 percent of bonds—covering less than 0.01 percent of these wells—would be sufficient to reclaim all the wells they cover under the high-cost scenario (these bonds have an average value per well of $145,000 or more). The remaining bonds—about 16 percent—have average bond values per well of at least $20,000 but less than $145,000. The majority of bond values do not reflect reclamation costs in large part because most bonds—82 percent—remain at their regulatory minimum values.
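The comparison above reduces to a simple per-well coverage test; a sketch, using the report's $20,000 low-cost and $145,000 high-cost scenarios as thresholds:

```python
# Classify a bond by whether its average value per well would cover the report's
# low-cost ($20,000) or high-cost ($145,000) per-well reclamation scenarios.
LOW_COST = 20_000
HIGH_COST = 145_000

def coverage_band(bond_value, wells_covered):
    per_well = bond_value / wells_covered
    if per_well < LOW_COST:
        return "below low-cost scenario"
    if per_well < HIGH_COST:
        return "between scenarios"
    return "covers high-cost scenario"

# A statewide bond at its $25,000 regulatory minimum covering the average of
# about 49 wells works out to only about $510 per well:
band = coverage_band(25_000, 49)  # "below low-cost scenario"
```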
These regulatory minimums are not reflective of reclamation costs for a number of reasons:

- Regulatory bond minimums have not been adjusted since the 1950s and 1960s to account for inflation. As shown in figure 4, when adjusted to 2018 dollars, the $10,000 individual lease bond minimum would be about $66,000, the $25,000 statewide bond minimum would be about $198,000, and the $150,000 nationwide bond minimum would be about $1,187,000.

- Bond minimums are based on the bond category and do not adjust with the number of wells they cover, which can vary greatly. According to BLM’s data, in 2018 the number of wells covered by a single bond ranged from one well to 6,654 wells. On average, a single bond covered about 68 wells. As wells are added to a bond, the total associated reclamation cost increases even if the bond value does not. A bond that increases with each additional well it covers and then decreases as wells are reclaimed could increase the financial incentive for operators to reclaim their wells in a timely manner. This is because operators would have to contribute additional bond value or would recover some bond value when they add or reclaim a well, respectively. Currently, bond minimums do not automatically adjust in this manner and therefore provide limited financial incentives for an operator to reclaim wells in a timely manner.

- Bond minimums do not reflect characteristics of individual wells such as depth or location, but such characteristics can affect reclamation costs, according to BLM officials. Wells are being drilled deeper than in the past; in 1950, well depth averaged about 3,700 feet, and in 2008, it averaged about 6,000 feet. Newer wells may be drilled 10,000 feet vertically. Officials from one BLM field office told us they assume a cost of $10 per foot of well depth to plug a well, so as wells are drilled deeper, plugging costs typically increase proportionally. Additionally, the location of some wells makes them more expensive to reclaim.
For example, BLM officials told us about several wells that may cost three times more to reclaim than other nearby wells because they are located in the middle of a river, making them hard to reach. In addition to BLM having identified orphaned wells over the last decade, we identified inactive wells at increased risk of becoming orphaned and found their bonds are often not sufficient to reclaim the wells. Our analysis of BLM bond value data and Office of Natural Resources Revenue production data showed that a significant number of inactive wells remain unplugged and could be at increased risk of becoming orphaned. Specifically, we identified 2,294 wells that may be at increased risk of becoming orphaned because they have not produced since June 2008 and have not been reclaimed. Further, the bonds for a majority of these at-risk wells are too low to cover typical reclamation costs for these wells alone. Our analysis of oil and gas production data showed these wells have not produced oil or gas or been used in other ways, such as serving as injection wells, since at least June 2008, when oil and gas prices were at or near record highs. Given that the Energy Information Administration projects oil and natural gas prices will remain at levels significantly below the 2008 highs through 2050, it is unlikely that prices will motivate operators to reopen these wells. Some of these wells have been inactive for far longer. Since these at-risk wells are unlikely to produce again, an operator bankruptcy could lead to orphaned wells unless bonds are adequate to reclaim them. If the number of at-risk wells is multiplied by our low-cost reclamation scenario of $20,000, it implies a cost of about $46 million to reclaim these wells. If the number of these wells is multiplied by our high-cost reclamation scenario of $145,000, it implies a cost of about $333 million.
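The implied totals above are straightforward multiplication; a quick check:

```python
# Implied cost to reclaim the 2,294 at-risk wells under the report's two
# reclamation cost scenarios.
AT_RISK_WELLS = 2_294
LOW_COST = 20_000    # low-cost reclamation scenario, per well
HIGH_COST = 145_000  # high-cost reclamation scenario, per well

low_total = AT_RISK_WELLS * LOW_COST    # about $46 million
high_total = AT_RISK_WELLS * HIGH_COST  # about $333 million
```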
When we further analyzed the available bonds for these at-risk wells, we found that most of these wells (about 77 percent) had bonds that would be too low to fully reclaim the at-risk wells under our low-cost scenario. More than 97 percent of these at-risk wells have bonds that would not fully reclaim the wells under our high-cost scenario. BLM has a policy for reviewing the adequacy of bonds but has not been able to consistently secure bond increases when needed, and this policy has not resulted in bonds that would be adequate to reclaim most wells. BLM’s bond adequacy review policy calls for field office staff to review oil and gas bonds at least every 5 years to determine whether the bond amount appropriately reflects the level of potential risk posed by the operator. However, according to BLM documentation, its offices did not secure about 84 percent of the proposed bond increases in fiscal years 2016 and 2017. BLM officials at one field office and one state office noted it is difficult to secure increases from bond reviews when firms are already in difficult financial situations. In November 2018, BLM updated its bond adequacy review policy and called for the agency to focus on securing bond increases from operators that show the highest risk factors. BLM’s updated policy more explicitly lays out steps to secure bond increases, including that BLM should not approve new applications to drill from an operator while waiting for a bond increase. The new policy also gives BLM officials discretion to not pursue a bond increase after considering other priorities demanding staff time and workload. It is unclear whether the update will improve BLM’s ability to secure bond increases, as it may not address the underlying challenge of attempting to increase bonds from operators who are already in a difficult financial position.
While BLM’s federal oil and gas bond minimums do not sufficiently reflect the costs of well reclamation, requirements for bond amounts for other federal mining and energy development activities account for potential reclamation costs to some extent. For example, for bonds for surface coal mining and hardrock mining on federal lands, the Department of the Interior requires bond amounts based on the full estimated cost of reclamation. For grants of federal rights-of-way for wind and solar energy development in designated leasing areas, BLM requires bonds based on a minimum amount per wind turbine or per acre of solar. For such grants in all other areas, the bonds are based on the estimated cost of reclamation but cannot be less than the per-turbine or per-acre amounts previously mentioned. Additionally, some states have minimum bond requirements for oil and gas wells on lands in the state that, unlike federal bond minimums, adjust with the number of wells they cover or the characteristics of the wells, or both. For example, Texas and Louisiana offer operators with wells on lands in those states the choice of a bond based on total well depth or based on the number of wells. Specifically, the Texas Railroad Commission lets operators choose bonds based on either the total depth of all wells on lands in the state multiplied by $2 per foot, or minimums based on the number of wells covered. If operators choose the latter, the bond for 0 to 10 wells is $25,000; the bond for 11 to 99 wells is $50,000; and the bond for 100 or more wells is $250,000. In Louisiana, the Office of Conservation offers operators with wells on lands in the state the choice of a bond based on total well depth or based on the number of wells. Louisiana further specifies a multiplier that varies depending on the total depth of the well. 
For example, the bond calculation is $2 per foot for wells less than 3,000 feet deep, $5 per foot for wells from 3,001 to 10,000 feet deep, and $4 per foot for wells 10,001 feet deep or deeper. Operators in Louisiana can alternatively choose to follow a system based on number of wells, with a minimum bond for 10 or fewer wells set at $50,000, a minimum bond for 11 to 99 wells set at $250,000, and a minimum bond for 100 or more wells set at $500,000. Pennsylvania’s Department of Environmental Protection requires bonds for unconventional wells that vary based on the number of wells and well bore length. The Mineral Leasing Act of 1920, as amended, requires federal regulations to ensure that an adequate bond is established before operators begin surface-disturbing activities on any lease, to ensure complete and timely reclamation of the lease tract as well as land and surface waters adversely affected by lease operations. The Mineral Leasing Act of 1920 does not require that BLM set bonds at full reclamation costs. However, the gap between expected reclamation costs and minimum bond amounts has grown over time because the minimums have not been adjusted since they were established in the 1950s and 1960s, whereas reclamation costs have increased due to inflation and the changing characteristics of wells being drilled. In the absence of bond levels that more closely reflect expected reclamation costs, such as by increasing regulatory minimums and incorporating consideration of the number of wells on each bond and their characteristics, BLM will continue to face risks that its bonds will not provide sufficient financial assurance to prevent orphaned wells. In particular, adjusting bond minimums so that bonds more closely reflect expected reclamation costs up front could help decrease the need for bond increases later when companies are potentially in financial distress. 
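The Texas and Louisiana schedules described above can be written as simple functions; a sketch using only the brackets and dollar figures stated in this section (the treatment of a well exactly 3,000 feet deep is not specified in the text, so the boundary below is an assumption):

```python
def texas_bond(total_depth_ft=None, well_count=None):
    """Texas: operator chooses a depth-based bond ($2 per foot of total well
    depth for all wells on lands in the state) or a count-based minimum."""
    if total_depth_ft is not None:
        return total_depth_ft * 2
    if well_count <= 10:
        return 25_000
    if well_count <= 99:
        return 50_000
    return 250_000

def louisiana_depth_bond(depth_ft):
    """Louisiana depth-based option: the per-foot multiplier varies with the
    total depth of the well. (The source leaves exactly 3,000 ft unspecified;
    it is grouped with the $5-per-foot bracket here.)"""
    if depth_ft < 3_000:
        return depth_ft * 2
    if depth_ft <= 10_000:
        return depth_ft * 5
    return depth_ft * 4

def louisiana_count_bond(well_count):
    """Louisiana count-based option."""
    if well_count <= 10:
        return 50_000
    if well_count <= 99:
        return 250_000
    return 500_000
```

Note how, unlike the federal minimums, both schedules scale with either the number of wells or their depth.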
BLM Does Not Currently Assess User Fees to Fund Orphaned Well Reclamation

In addition to fulfilling its responsibility to prevent new orphaned wells, it falls to BLM to reclaim wells that are currently orphaned, and BLM has encountered challenges in doing so. We reported in May 2018 that 13 BLM field offices identified about $46.2 million in estimated potential reclamation costs associated with orphaned wells and with inactive wells that officials deemed to be at risk of becoming orphaned. There is also a risk more wells will become orphaned in coming years, as we described above. Based on the most recent orphaned well lists we received from BLM, 51 wells that BLM identified in 2009 as orphaned had not been reclaimed as of April 2019. The Energy Policy Act of 2005 (EPAct 2005) directs Interior to establish a program that, among other things, provides for the identification and recovery of reclamation costs from persons or other entities currently providing a bond or other financial assurance for an oil or gas well that is orphaned, abandoned, or idled. One way in which BLM may be able to accomplish this is through the imposition of user fees. In 2008, we found that well-designed user fees can reduce the burden on taxpayers to finance those portions of activities that provide benefits to identifiable users. Further, according to Office of Management and Budget guidance, it may be appropriate for an agency to request authority to retain the fee revenue if the user fees offset the expenses of a service that is intended to be self-sustaining. The volume of drilling applications and inactive wells provide an opportunity to fund reclamation costs. According to BLM data, the agency processes more than 3,500 applications to drill each year, on average, and has over 14,000 inactive wells.
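The figures above suggest the scale of fee that would be needed; a sketch solving for the fee that would cover the roughly $46.2 million backlog over an assumed ten-year horizon, given about 3,500 drilling applications per year or about 14,000 inactive wells:

```python
# Fee needed to cover BLM's estimated reclamation backlog over a given horizon.
BACKLOG = 46_200_000           # estimated potential reclamation costs (dollars)
APPLICATIONS_PER_YEAR = 3_500  # average drilling applications processed per year
INACTIVE_WELLS = 14_000        # inactive wells ("over 14,000", so a lower bound)
YEARS = 10                     # assumed recovery horizon

fee_per_application = BACKLOG / (APPLICATIONS_PER_YEAR * YEARS)      # ~ $1,300
annual_fee_per_inactive_well = BACKLOG / (INACTIVE_WELLS * YEARS)    # ~ $330
```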
Based on our calculations, a separate fee of about $1,300 charged at the time a drilling application is submitted (in addition to the current drilling application filing fee, which is $10,050), or an annual fee of less than $350 for inactive wells, could generate enough revenue to cover, in a little over a decade, the entire $46 million in potential reclamation costs that field offices identified to us. In commenting on a draft of this report, BLM stated that it does not have the authority to seek or collect fees from lease operators to reclaim orphaned wells. Developing a mechanism to obtain funds from operators to cover the costs of reclamation, consistent with EPAct 2005, could help ensure that BLM can completely and timely reclaim wells without using taxpayer dollars. Other federal programs, including other BLM programs, collect fees from users to fund reclamation activities. For example, the federal government collects fees from mining companies to reclaim abandoned mines. Specifically, the federal abandoned mine reclamation program is funded in part by fees on coal production. We reported in March 2018 that the program had spent about $3.9 billion to reclaim abandoned mine lands since the program’s creation in 1977. Additionally, some states with oil and gas development have dedicated funds for reclaiming orphaned wells. In Wyoming, the state’s Oil and Gas Conservation Commission’s Orphan Well Program reclaims orphaned wells on state or private lands for which bonds and operator liability are unavailable or insufficient to fund reclamation. The program is funded through a conservation tax assessed on the sale of oil and natural gas produced in Wyoming. Through this program, the Wyoming Oil and Gas Conservation Commission has reclaimed approximately 2,215 wells since 2014, according to a Commission official. Similarly, in Arkansas, operators make annual payments to the state’s abandoned well plugging fund based on the number of wells and permits they have, on a sliding scale.
For example, at the low end, operators with one to five wells or permits pay $100 per well, and at the high end, operators with over 300 wells or permits pay $4,000 per operator. The Arkansas fund was used to reclaim 136 wells in fiscal years 2016 through 2018, according to an official with the state’s Oil and Gas Commission. Virginia’s Orphaned Well Fund is funded through a $200 surcharge on each permit application. The fund is administered by the Virginia Division of Gas and Oil, which prioritizes wells to reclaim according to their condition and potential threat to public safety and the environment.

Conclusions

BLM oversees private entities operating thousands of oil and gas wells on leased federal lands and has taken steps over the years to strengthen its management of the potential liability that oil and gas operations represent should operators not fully reclaim wells and return lands to their original condition when production ceases. For example, the agency’s 2013 bond adequacy review policy outlined how bonds were to be reviewed every 5 years and bond amounts adjusted depending on risks presented by operators. However, we found average bond values were slightly lower in 2018 as compared to 2008, and BLM has not obtained bond increases in the majority of instances in which its reviews identified that increases were needed. Instead, most bonds are at their regulatory minimum values, which are not sufficient to cover reclamation costs incurred by BLM. Without adjusting bond levels to more closely reflect expected reclamation costs—such as by considering the effects of inflation, the number of wells covered by a single bond, and the characteristics of those wells—BLM faces ongoing risks that not all wells will be completely and timely reclaimed, resulting in additional orphaned wells. Further, BLM faces a backlog of orphaned wells to reclaim—with 51 dating back at least 10 years.
Unlike some other federal and state programs that obtain funds from industry through fees or dedicated funds, BLM does not do so for reclaiming orphaned wells. According to BLM, it does not have the authority to seek or collect fees from lease operators to reclaim orphaned wells. Authorizing and requiring the implementation of a mechanism to obtain funds from oil and gas operators to cover the costs of reclamation could help ensure BLM can completely and timely reclaim wells.

Matter for Congressional Consideration

Congress should consider giving BLM the authority to obtain funds from operators to reclaim orphaned wells, and requiring BLM to implement a mechanism to obtain sufficient funds from operators for reclaiming orphaned wells. (Matter for Consideration 1)

Recommendation for Executive Action

The Director of BLM should take steps to adjust bond levels to more closely reflect expected reclamation costs, such as by increasing regulatory minimums to reflect inflation and incorporating consideration of the number of wells on each bond and their characteristics. (Recommendation 1)

Agency Comments and Our Evaluation

We provided a draft of this product to BLM for comment. In its written comments, reproduced in appendix II, BLM concurred with the recommendation. BLM stated that it is committed to ensuring that its field offices continue to review oil and gas bonds at least every 5 years, or earlier when warranted, and noted its November 2018 Instruction Memorandum 2019-014 updated its bond review policy. BLM further stated that, while the adjustment of bond values may not reflect the inflation index, the policy is intended to increase bond amounts while fostering an environment conducive to BLM’s leasing operations. As we point out in this report, BLM has historically had difficulties securing bond increases through bond reviews, and so additional steps may be needed to adjust bond levels to more closely reflect expected reclamation costs.
In the draft we provided to BLM for comment, we included a recommendation that the Director of BLM should take steps to obtain funds from operators for reclaiming orphaned wells. BLM did not concur with this recommendation, saying it does not have the authority to seek or collect fees from lease operators to reclaim orphaned wells. We continue to believe a mechanism for BLM to obtain funds from oil and gas operators to cover the costs of reclamation for orphaned wells could help ensure BLM can completely and timely reclaim these wells, some of which have been orphaned for at least 10 years. We have therefore instead made a matter for Congressional consideration. BLM also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of the Interior, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or ruscof@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

Appendix I: Objectives, Scope, and Methodology

This report (1) describes the value of bonds for oil and gas wells in 2018 compared to 2008, and (2) examines the extent to which the Bureau of Land Management’s (BLM) bonds ensure complete and timely reclamation and thus prevent orphaned wells. To describe the value of bonds for oil and gas wells in 2018 compared to 2008, we analyzed oil and gas well data from BLM’s Automated Fluid Minerals Support System (AFMSS) as of May 2018 and data from BLM’s Legacy Rehost 2000 (LR2000) system on bonds as of May 2018. Bond data we reviewed included the bond category (e.g., individual lease or nationwide) and bond value.
We compared these data to data obtained from the same systems for 2008 and reported by GAO in 2010. We matched the May 2018 data from the two systems based on the bond number—a variable in both systems—to identify how many wells were covered by each bond and to determine the average bond value per well for each bond category. To assess the reliability of AFMSS and LR2000 data elements, we reviewed agency documents, met with relevant agency officials, and performed electronic testing. We found these data to be sufficiently reliable for our purposes. We also interviewed BLM headquarters officials to understand why bond composition may have changed over time. To report on the number of bonded wells held by BLM, we used a published BLM value for producible well bores—wells capable of production—which should represent a lower bound on the number of bonded wells in September 2018 because some wells may be plugged or temporarily incapable of production but would still require a bond if the surrounding site had not been fully reclaimed. To determine the average value of bonds per well in 2018, we divided the total value of all bonds held by BLM by the total number of producible well bores. To examine the extent to which BLM’s bonds ensure complete and timely reclamation and prevent orphaned wells, we conducted the following analyses:

Reclamation cost scenarios: To determine whether bonds are sufficient to cover potential reclamation costs for the wells they cover, we identified typical high- and low-cost scenarios for well reclamation (including plugging the well and reclaiming the surrounding well site) and compared those scenarios to the average bond value available per well. To determine high- and low-cost reclamation scenarios, we analyzed BLM’s well reclamation cost estimates on proofs of claim submitted to the Department of Justice from calendar year 2016 through May 2018. These 59 proofs of claim listed estimated reclamation costs for 8,664 well sites.
We calculated the average reclamation cost per well for each individual proof of claim by dividing the total dollar value claimed for reclamation liability (actual liability plus potential liability) by the total number of wells listed in each proof of claim document. We found the average reclamation cost estimates for each proof of claim have a bimodal distribution, meaning that data are clustered around two distinct cost levels, rather than clustered around a single average cost. As a result, we determined that using two separate measures that indicate typical values for separate groups of low-cost and high-cost wells would provide more meaningful statistics about cost. We therefore selected reclamation costs of $20,000 for the low-cost reclamation scenario and $145,000 for the high-cost scenario based on the 25th and 75th percentiles of the distribution of average estimated reclamation cost per proof of claim, weighted by the number of wells on each proof of claim.

Bond value per well: To determine the average bond value available per well, we analyzed bonds listed in LR2000 that were tied to wells listed in AFMSS using the bond number—a variable in both systems. We found that 1,547 out of the 3,357 unique bond numbers in LR2000 had wells tied to them in AFMSS. These 1,547 bonds covered about 80 percent of the wells in AFMSS. The other 20 percent of wells in AFMSS either did not list a bond number, or the bond number listed was not in LR2000. For each bond in LR2000 covering wells in AFMSS, we calculated the bond available per well as the bond value divided by the number of wells it covers. We then compared the bond values per well against both high ($145,000 per well) and low ($20,000 per well) reclamation cost scenarios to identify which bonds would be adequate to reclaim all the wells they covered under different cost scenarios.
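The weighted-percentile selection described above can be sketched as follows. The proof-of-claim figures here are illustrative only (GAO's actual data are not reproduced); the function shows the mechanics of taking the 25th and 75th percentiles of per-claim average costs weighted by well counts:

```python
def weighted_percentile(values, weights, pct):
    """Return the smallest value at which the cumulative weight share reaches
    pct percent. Here, values are per-claim average reclamation costs and
    weights are the number of wells on each proof of claim."""
    pairs = sorted(zip(values, weights))
    total = sum(weights)
    cumulative = 0
    for value, weight in pairs:
        cumulative += weight
        if cumulative / total >= pct / 100:
            return value
    return pairs[-1][0]

# Illustrative bimodal data: per-claim average costs ($/well) and well counts.
costs = [15_000, 18_000, 22_000, 140_000, 150_000]
wells = [2_000, 2_500, 1_500, 1_200, 1_400]
low_scenario = weighted_percentile(costs, wells, 25)   # plays the role of the $20,000 figure
high_scenario = weighted_percentile(costs, wells, 75)  # plays the role of the $145,000 figure
```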
If AFMSS bond information was incomplete, it is possible that there are more wells covered by bonds than we were able to identify—and therefore the bond value per well would be lower than we found.

At-risk wells: To identify wells that may be at greater risk of becoming orphaned and determine whether their bonds are sufficient to cover potential reclamation costs, we used well production data from the Office of Natural Resources Revenue’s Oil and Gas Operations Report (OGOR) as of June 2017 and bond values from LR2000. First, we defined wells as “at risk of becoming orphaned” if they met several criteria. Specifically, we identified wells that (1) had recent OGOR reports (on or after March 2017); (2) had not been used productively from at least June 2008 through the most recent record (meaning the well did not report producing any volume of oil or gas during this timeframe, nor were any volume of water or materials injected into the well during this timeframe); (3) were not being used as a monitoring well in the most recent record, which we considered a productive use; and (4) had not been plugged and abandoned. We selected June 2008 as the cutoff date for productivity because in June and July of 2008, oil and gas prices hit peaks that have not since been reached again, and which the Energy Information Administration does not expect prices to reach again through at least 2050. We believe our analysis is a conservative estimate of wells at greater risk, in part because we did not include wells that produced when prices were at their peaks and stopped producing soon afterward and may be unlikely to produce in the future unless prices reach the same peaks again. In addition, our lower-bound estimate does not include some coalbed methane wells that have been inactive for less than 9 years but are unlikely to produce at current prices because of the relatively higher cost of coalbed methane production.
We also excluded wells that reported any volume of oil or gas production or water injection since June 2008, although some very low-producing wells may also be at risk of becoming orphaned.

Bond value for at-risk wells: To calculate the average bond value per at-risk well, we identified bonds listed in LR2000 that were tied to at-risk wells in AFMSS to determine the value of bonds available to reclaim these at-risk wells if needed. We identified that 2,041 of the 2,294 at-risk wells were linked to bonds. For each bond, we divided the bond value by the number of at-risk wells it covered to determine the bond amount per at-risk well. In cases in which an at-risk well was linked to more than one bond, we additionally calculated the average of the bond value per at-risk well for each bond linked to the well. To determine the sufficiency of bonds for at-risk wells, we identified the number of wells with an average bond value per at-risk well equal to or greater than $20,000 (low-cost reclamation scenario) or $145,000 (high-cost reclamation scenario).

Orphaned wells: We compared three lists of orphaned wells based on data provided by BLM in 2009, July 2017, and April 2019. The 2009 data are from our January 2010 report, which used Orphaned Well Scoring Checklists that list information such as the well’s name and location. The July 2017 data are from our May 2018 report, which used an orphaned well list generated through a query of AFMSS by BLM. The April 2019 list was generated through a query of an updated version of AFMSS known as AFMSS 2. We compared the lists to identify how many wells that were on the 2009 list remained on the 2019 list, and how many wells that were on the 2017 list were on the 2019 list. To assess the reliability of the AFMSS, LR2000, and OGOR data elements we used, we reviewed agency documents, met with relevant agency officials, and performed electronic testing. We found these data elements to be sufficiently reliable for our purposes.
Similarly, to assess the reliability of the 2019 orphaned well list, we reviewed agency documents and met with relevant agency officials. Though we identified shortcomings with data on orphaned wells, we nevertheless found these data to be sufficiently reliable for the purpose of describing the orphaned wells BLM has identified. To assess the reasonableness of proofs of claim data, we interviewed relevant agency officials and reviewed agency documents. To understand how BLM manages bonds, we reviewed BLM’s policies and interviewed officials from four BLM state offices and four BLM field offices. We selected these state and field offices because, according to AFMSS data, they were responsible for managing the largest numbers of wells on federal land. These BLM state offices were California, New Mexico, Utah, and Wyoming. These BLM field offices were Bakersfield, Buffalo, Carlsbad, and Farmington. We also interviewed officials from BLM’s headquarters office in Washington, D.C. Findings from the selected BLM offices cannot be generalized to officials we did not interview but provide a range of views. To understand how some states with oil and gas development on state lands set minimum bonds and fund orphaned well reclamation, we contacted officials from oil and gas oversight agencies in Arkansas, Louisiana, Pennsylvania, Texas, Virginia, and Wyoming. We conducted this performance audit from January 2018 to September 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
Appendix II: Comments from the Department of the Interior

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Quindi Franco (Assistant Director), Marietta Mayfield Revesz (Analyst-in-Charge), Marie Bancroft, William Gerard, Cindy Gilbert, Gwen Kirby, Joe Maher, Shaundra Patterson, Dan Royer, and Jerry Sandau made key contributions to this report.
Why GAO Did This Study

The oil and natural gas produced from wells on federal lands are important to the U.S. energy supply and bring in billions in federal revenue each year. However, when wells are not properly managed, the federal government may end up paying to clean up the wells when they stop producing. Specifically, wells on federal lands that an operator does not reclaim and for which there are no other liable parties fall to BLM to reclaim (restore lands to as close to their original natural states as possible). These wells become orphaned if the operator's bond held by BLM is not sufficient to cover reclamation costs. BLM regulations set minimum bond values at $10,000 for all of an operator's wells on an individual lease, $25,000 for all of an operator's wells in a state, and $150,000 for all of an operator's wells nationwide. GAO was asked to review the status of oil and gas bonding for federal lands. This report (1) describes the value of bonds for oil and gas wells in 2018 compared to 2008, and (2) examines the extent to which BLM's bonds ensure complete and timely reclamation and thus prevent orphaned wells. GAO analyzed agency data on bonds and wells and interviewed BLM officials.

What GAO Found

The average value of bonds held by the Bureau of Land Management (BLM) for oil and gas wells was slightly lower on a per-well basis in 2018 ($2,122) as compared to 2008 ($2,207), according to GAO's analysis of BLM data. The total value of bonds held by BLM for oil and gas operations increased between these years, as did the number of wells on federal land. Bonds held by BLM have not provided sufficient financial assurance to prevent orphaned oil and gas wells (wells that are not reclaimed by their operators and, among other things, whose bonds were not sufficient to cover remaining reclamation costs, leaving BLM to pay for reclamation).
Specifically, BLM identified 89 new orphaned wells between July 2017 and April 2019, and BLM offices identified to GAO about $46 million in estimated potential reclamation costs associated with orphaned wells and with inactive wells that officials deemed to be at risk of becoming orphaned in 2018. In part, bonds have not prevented orphaned wells because bond values may not be high enough to cover the potential reclamation costs for all wells under a bond, as may be needed if they become orphaned. GAO's analysis indicates that most bonds (84 percent) that are linked to wells in BLM data are likely too low to reclaim all the wells they cover. Bonds generally do not reflect reclamation costs because most bonds are set at their regulatory minimum values, and these minimums have not been adjusted since the 1950s and 1960s to account for inflation (see figure). Additionally, these minimums do not account for variables such as the number of wells they cover or other characteristics that affect reclamation costs, such as well depth. Without taking steps to adjust bond levels to more closely reflect expected reclamation costs, BLM faces ongoing risks that not all wells will be completely and timely reclaimed, as required by law. It falls to BLM to reclaim orphaned wells, but the bureau does not assess user fees to cover reclamation costs, in part because it believes it does not have authority to do so. Providing such authority and developing a mechanism to obtain funds from operators for such costs could help ensure that BLM can completely and timely reclaim wells.

What GAO Recommends

Congress should consider giving BLM the authority to obtain funds from operators to reclaim orphaned wells, and requiring BLM to implement a mechanism to do so. GAO also recommends that BLM take steps to adjust bond levels to more closely reflect expected reclamation costs. BLM concurred.
BLM did not concur with a proposed recommendation to develop a mechanism to obtain funds, citing lack of authority. GAO changed it to a matter for Congressional consideration.
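The erosion of the regulatory minimum bond values since the 1950s and 1960s can be sketched with a simple price-index adjustment. This is an illustrative calculation, not part of the GAO analysis; the CPI-U annual averages below are approximate assumed values.

```python
# Illustrative sketch, not part of the GAO analysis: scaling the regulatory
# minimum bond values (set in the 1950s and 1960s) to 2018 dollars. The
# CPI-U annual averages below are approximate assumed values.
CPI_1960 = 29.6    # assumed CPI-U annual average, 1960
CPI_2018 = 251.1   # assumed CPI-U annual average, 2018

def inflation_adjusted(value_in_1960_dollars: float) -> float:
    """Scale a 1960-era dollar amount to 2018 dollars by the CPI ratio."""
    return value_in_1960_dollars * (CPI_2018 / CPI_1960)

# BLM's minimum bond values, per the report:
minimums = {"individual lease": 10_000, "statewide": 25_000, "nationwide": 150_000}

for scope, value in minimums.items():
    print(f"{scope}: ${value:,} then; roughly ${inflation_adjusted(value):,.0f} in 2018 dollars")
```

Under these assumed index values, the $10,000 per-lease minimum would need to be on the order of $85,000 in 2018 dollars to have the same purchasing power, which illustrates why unadjusted minimums fall short of reclamation costs.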
gao_GAO-20-352
gao_GAO-20-352_0
Background

Contract Types Described by the Federal Acquisition Regulation

The government can choose from a wide selection of contract types to acquire the variety and volume of supplies and services agencies require to meet their needs. Contract types vary according to the degree and timing of the responsibility assumed by the contractor for the costs of performance, and the amount and nature of the profit incentive offered to the contractor for achieving or exceeding specified standards or goals. The primary contract types described by the Federal Acquisition Regulation (FAR) fall into two broad categories—cost-type and fixed-price-type—and table 1 summarizes key features of each. As illustrated in figure 1, within these categories the specific contract types range from cost-plus-fixed-fee, in which the contractor has minimal responsibility for the performance costs and the negotiated fee (profit) is fixed, to firm-fixed-price, in which the contractor has full responsibility for the performance costs and resulting profit (or loss). In between are the various incentive contracts, under which the contractor’s responsibility for the performance costs and the profit or fee incentives offered are tailored to the uncertainties involved in contract performance. For contracts with incentive fees or profits, the amount of fee or profit payable is related to the contractor’s performance, and generally involves an objective evaluation by the government of the contractor’s performance toward cost, schedule, or technical goals. Award fees, on the other hand, typically emphasize multiple aspects of contractor performance that are more subjectively assessed, such as the contractor’s responsiveness, technical ingenuity, or cost management. Furthermore, the basic types of contracts may be used in combination, with both fixed-price-type and cost-type contract line item numbers, unless otherwise prohibited.
For example, a firm-fixed-price contract may have a cost-type line item for travel. The FAR states that selecting the contract type is generally a matter for negotiation and requires the exercise of sound judgment by the contracting officer. Negotiating the contract type and negotiating prices are closely related and should be considered together. The objective is for the government to negotiate a contract type and price (or estimated cost and fee) that will result in reasonable contractor risk and provide the contractor with the greatest incentive for efficient and economical performance. As also noted in the FAR, the government usually assumes greater risk in its contracts for more complex requirements, particularly those unique to the government. This is especially true for complex research and development contracts, where performance uncertainties or the likelihood of changes make it difficult to estimate performance costs in advance. Cost-type contracts are suitable for instances when uncertainties about contract performance do not allow accurate enough cost estimates to use a fixed-price-type contract—in other words, when programs choose to accept more risk. The level of risk drives the contract type chosen, with the contract then reflecting the risk of the work. DOD programs may use different contract types across the life of the MDAP. For example, DOD guidance notes that the preferred contract type for development efforts is cost-type, and requires particular consideration of fixed-price-incentive contracts for acquisitions moving from development to production. Consistent with the FAR, DOD guidance also notes that firm-fixed-price production contracts may be in the government’s best interest once costs have become stable. DOD and Congress have encouraged use of fixed-price-type contracts where appropriate. 
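The risk sharing in an incentive arrangement can be made concrete with a small sketch of a fixed-price-incentive (firm target) contract of the kind described in FAR 16.403-1. The target cost, target profit, ceiling price, and 70/30 government/contractor share ratio below are hypothetical, not taken from any contract discussed in this report.

```python
# Hedged sketch of risk sharing under a fixed-price-incentive (firm target)
# arrangement (FAR 16.403-1). All numbers, including the 70/30
# government/contractor share ratio, are hypothetical.
def fpi_final_price(actual_cost: float,
                    target_cost: float = 100.0,
                    target_profit: float = 10.0,
                    ceiling_price: float = 125.0,
                    contractor_share: float = 0.3) -> float:
    """Profit adjusts by the contractor's share of any cost underrun or
    overrun, and the government never pays more than the ceiling price."""
    final_profit = target_profit + contractor_share * (target_cost - actual_cost)
    return min(actual_cost + final_profit, ceiling_price)

print(fpi_final_price(90.0))   # underrun: contractor keeps a share of savings
print(fpi_final_price(100.0))  # on target: price is target cost plus target profit
print(fpi_final_price(130.0))  # large overrun: price capped at the ceiling
```

The ceiling price is what distinguishes this from a cost-type contract: beyond it, the contractor bears every additional dollar of cost, which is why the FAR reserves fixed-price-incentive arrangements for work whose costs can be estimated with reasonable confidence.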
For example, DOD’s Better Buying Power initiative, which started in 2010, called for increased use of fixed-price-incentive contracts for programs transitioning from development to production. In addition, the National Defense Authorization Act (NDAA) for Fiscal Year 2017 required DOD to establish a preference for fixed-price-type contracts in the determination of contract type and specified approval requirements for use of cost-type contracts above certain dollar thresholds. Congress has also limited DOD’s ability to use cost-type contracts to acquire production units absent congressional notification. Our prior work contains many recommendations related to incentive-type contracts. For example, in March 2017 we recommended that the Navy remind contracting officials to follow guidance on documenting the rationale for using fixed-price-incentive contracts, and in April 2017, the Navy issued a memorandum addressing this issue. In July 2017 we recommended that DOD collect and analyze data to determine the extent to which incentive contracts achieved desired outcomes. While DOD agreed with the recommendation and developed a template for the military departments to use to collect relevant information, it is still gathering updates from the military departments about the status of this effort.

Contracting for Major Defense Acquisition Programs

DOD acquires MDAPs through the Defense Acquisition System, which implements an adaptive acquisition framework that allows DOD officials to develop acquisition strategies and employ acquisition processes that match the characteristics of the capability being acquired. The pathway for acquiring major capabilities generally includes four phases, three of which we focus on in this report: (1) technology maturation and risk reduction; (2) engineering and manufacturing development; and (3) production and deployment.
Programs typically complete a series of milestone reviews and other key decision points that authorize entry into a new acquisition phase, as illustrated in figure 2. These milestones also typically mark critical contract award decisions. For example, the Milestone B decision commits the resources, including authorizing award of the program’s development contract, needed to conduct development leading to production. Milestone C represents the decision to move forward with initial production, including award of the initial production contract. A number of officials and agencies are involved in DOD’s choice and monitoring of MDAP contracts.

Milestone decision authority: The designated individual with overall responsibility for the program who, at the time of key milestone reviews, approves the acquisition strategy with specified contract types. In approving the acquisition strategy, this individual must ensure that the strategy considers how to manage risk and how the contract type selected relates to the level of program risk in each acquisition phase. This individual is to use the acquisition strategy to assess the viability of the proposed approach, ensuring that it clearly explains how it is to be implemented with available resources, and is tailored to address program requirements and constraints. Milestone decision authority for most MDAPs now resides with the military departments following a reform enacted in the NDAA for Fiscal Year 2016. Prior to this reform going into effect, a position within the Office of the Secretary of Defense typically served as the milestone decision authority for MDAPs until they entered the production and deployment phase. Following a reorganization of the Office of the Secretary of Defense enacted in the NDAA for Fiscal Year 2017, the USD(A&S) now serves as milestone decision authority for a small number of MDAPs, such as the F-35 program.
For other MDAPs, the following officials serve as milestone decision authority within the military departments: the Assistant Secretary of the Air Force (Acquisition, Technology, and Logistics); the Assistant Secretary of the Army (Acquisition, Logistics, and Technology); and the Assistant Secretary of the Navy (Research, Development, and Acquisition).

Program manager: The designated individual with responsibility for and authority to accomplish program objectives for development, production, and sustainment to meet user operational needs. The program manager plans acquisition programs, prepares programs for key decisions, and executes approved acquisition and product support strategies.

Contracting officer: The individual with the authority to enter into, administer, or terminate contracts and make related determinations and findings. Contracting officers are responsible for ensuring performance of all necessary actions for effective contracting, ensuring compliance with the terms of the contract, and safeguarding the interests of the United States in its contractual relationships. In order to perform these responsibilities, contracting officers are allowed wide latitude to exercise business judgment.

Defense Contract Management Agency (DCMA): The entity that provides contract administration services for most DOD buying activities. Its contract management offices work with defense contractors to help ensure they deliver goods and services that meet performance requirements on time and at projected cost.

Supervisor of Shipbuilding, Conversion and Repair (SUPSHIP): The entity that is the Navy’s on-site technical, contractual, and business authority for the construction of Navy ships. SUPSHIPs are co-located with the nation’s major shipbuilders and oversee the construction of every Navy ship, from patrol craft to the Navy’s most complex surface combatants and nuclear submarines and aircraft carriers.
In addition to serving as milestone decision authority for certain MDAPs, USD(A&S) is responsible for improving outcomes by gathering and distributing best practices and lessons learned across the military departments. One such mechanism related to contract type choice, established in 2008, was mandatory preaward peer review—conducted by DPC, an office within USD(A&S)—for solicitations and contracts valued at over $1 billion and noncompetitive procurements over $500 million. For these competitive procurements, DPC conducted phased peer reviews prior to three events—issuance of the solicitation, issuance of the request for final proposal revisions, and contract award. The peer review teams—composed of senior DOD contracting leaders and officials from other military departments, and whenever possible comprising the same personnel across the three phases—discussed contract type and structure, and reviewed key program documentation such as acquisition strategies. Upon completion of a review, the team provided its findings and recommendations to the contracting officer, among other officials. However, in August 2019, DPC announced that it would no longer conduct peer reviews for most competitive procurements above $1 billion. Further details of this change are discussed later in this report. While the individual military departments have distinct requirements for the weapon systems they acquire, they also on occasion procure similar types of platforms, and use the same relatively small pool of contractors. For example, the Air Force and Navy both purchase fighter aircraft, and all three military departments buy missile systems. In 2019, we analyzed the 183 major development and procurement contract awards for MDAPs reported by DOD at that time, and found that almost half went to five corporations and entities connected with them, constituting 72 percent of the dollars associated with those contracts. 
Small Proportion of Obligations for Major DOD Acquisitions Since 2011 Was on Cost-Type Contracts and Level Varied across Military Departments

From fiscal year 2011 through fiscal year 2019, a small proportion—an average of less than one-fifth—of obligations for programs in DOD’s portfolio of MDAPs was on cost-type contracts, although this proportion varied across the military departments. The remainder were on fixed-price-type contracts, split between firm-fixed-price and fixed-price-incentive, as illustrated in figure 3. Figure 4 illustrates the proportion of obligations by contract type for each of the military departments across the 9-year period. The Air Force made the most use of cost-type contracts, at an average of around one-quarter of obligations. While the Army made the least use of cost-type contracts, it made the most use of firm-fixed-price contracts. The Navy made the most use of fixed-price-incentive contracts. We have previously reported that the Navy has generally used cost-type contracts for lead ships and fixed-price-incentive contracts for follow-on ships.

Choice of Cost-Type Contracts Informed by Program Risk and Subject to Additional Risk-Based Monitoring

We found that the choice of cost-type contracts for MDAPs by contracting officers is based on assessments of program risk and uncertainty, underpinned by a number of statutory, regulatory, and policy provisions. Risk assessment also drives the application of additional reporting and surveillance requirements—designed to help the program office monitor cost and schedule performance—once DOD has awarded a cost-type contract for an MDAP.

Choice of Cost-Type Contracts Is Based on Consideration of Program Risk and Uncertainty

A range of statutory, regulatory, and policy provisions emphasize the importance of considering program risk and uncertainty when planning acquisitions and determining contract types for MDAPs.
These provisions guide the decisions of contracting officers when choosing contract type and establish documentation requirements such as acquisition strategies. Table 2 describes key provisions related to program risk and uncertainty. Contracting and program officials, among others, collaborate and determine the appropriate contract type based on assessments of risk, considering factors such as availability of historical contract information, use of new technologies, cost stability, and the level of definition of requirements, such as software. In arriving at these determinations, officials we met with noted the importance of contracting officers having experience using a range of contract types. The seven MDAP cost-type contracts included in our review had documented rationales for their choice that all indicated areas of risk and uncertainty, addressing provisions noted in table 2. For example, four were development contracts, and FAR Part 35 states that the use of cost-type contracts for research and development is usually appropriate given the absence of precise specifications and difficulties in accurately estimating costs. The other three cost-type contract rationales noted that, consistent with the FAR, uncertainties in contract performance did not allow for costs to be estimated with sufficient accuracy to use a fixed-price-type contract. Table 3 summarizes these rationales.

Additional Risk-Based Reporting Requirements for Cost-Type Contracts Designed to Help Programs Monitor Cost and Schedule Performance

Contract types that shift more risk onto the government—including cost-type contracts—and exceed certain dollar thresholds have additional contractual reporting requirements. These requirements are designed to help the program office monitor cost and schedule performance.
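The cost and schedule measures these reports carry are typically the standard earned value management (EVM) quantities. A minimal sketch, using the textbook formulas rather than any DOD-specific system, with hypothetical inputs:

```python
# Minimal sketch of standard earned value management (EVM) measures of the
# kind a monthly contract performance report surfaces. Textbook formulas,
# hypothetical inputs (all in the same cost units, e.g., millions of dollars).
def evm_metrics(planned_value: float, earned_value: float, actual_cost: float) -> dict:
    return {
        "cost_variance": earned_value - actual_cost,       # negative: over cost
        "schedule_variance": earned_value - planned_value, # negative: behind schedule
        "cpi": earned_value / actual_cost,                 # cost performance index
        "spi": earned_value / planned_value,               # schedule performance index
    }

# A contract that has earned $45M of value against a $50M plan, at $60M of
# actual cost, is both over cost and behind schedule:
m = evm_metrics(planned_value=50.0, earned_value=45.0, actual_cost=60.0)
print(m["cpi"], m["spi"])  # 0.75 0.9
```

Indices below 1.0 flag the areas of concern and contract performance risk that DCMA and SUPSHIP surveillance reports highlight to program offices.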
In order to receive a cost-type or incentive contract valued at $20 million or more, a contractor must have an earned value management (EVM) system that complies with certain guidelines. These systems integrate the scope of work with cost, schedule, and performance elements to support project planning. They also provide program offices with monthly contract performance reports that include cost and schedule status and risks. Our prior work contains recommendations related to DOD’s use of EVM. For example, in 2009 we recommended that DOD modify policies governing EVM to ensure they addressed a number of weaknesses we had identified. In response, DOD developed and incorporated into its program management curricula a new EVM training course. Among the duties of two specialized government contract administration agencies—DCMA and SUPSHIP—are the review and approval of contractor EVM systems, and ongoing surveillance of data generated by the systems. The regular reports provided to program offices by these agencies include EVM data and analysis and highlight areas of concern and contract performance risk. In addition to use of EVM data, contracting officials from the seven cost-type MDAP contracts included in our review noted the importance of regular interactions between DOD—whether the program office, DCMA, or SUPSHIP—and the contractor in order to proactively identify drivers of cost or schedule overruns. These interactions can range from day-to-day tracking to comprehensive quarterly reviews. Several officials also noted the importance of having DCMA and SUPSHIP representatives on-site at contractor facilities, overseeing the contract and communicating with the contractor.

Program Outcomes Vary Regardless of Contract Type but Correspond to the Use of Knowledge to Reduce Risk

Our analysis of program cost and schedule outcomes for 21 MDAPs did not find a clear relationship between these outcomes and the contract type used.
DOD’s current portfolio of MDAPs contains a total of 85 programs. The 21 MDAPs in our review are the non-shipbuilding subset of the 85 that, as of January 2019, had completed system development, held a critical design review, and started production. Thus, these 21 programs are sufficiently far along the acquisition process that we can analyze their cost and schedule outcomes. We found that they demonstrated a range of cost and schedule performance, regardless of contract type chosen. Table 4 notes the contract types used for these MDAPs as well as unit cost and schedule change between each program’s first full estimate and our most recent in-depth assessment of the program as of May 2019. As reflected in the table, all but four of the MDAPs used some mix of cost-type and fixed-price-type contracts. Performance varied widely for programs using cost-type contracts at some stage, with unit cost change varying from 44 percent reduction to 183 percent growth, and schedule change varying from zero to 146 percent growth. In addition, while two of the three programs that used only fixed-price-type contracts had unit cost reductions, they also experienced schedule growth of over 40 percent. Programs generally made greater use of cost-type contracts than fixed-price-type contracts during development, and greater use of fixed-price-type contracts during procurement, as knowledge built over time. While we did not find a clear relationship between contract type and cost and schedule performance, we have found a relationship between improved outcomes and implementation of certain knowledge-based acquisition practices on these 21 programs. These are practices identified in our body of prior work that ensure a high level of knowledge is achieved at key junctures in development. We apply these practices as criteria in weapon system reviews, including our annual assessment of weapon systems. 
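The unit cost and schedule change figures in table 4 are percent changes from each program's first full estimate to its latest estimate. A small sketch with hypothetical program values:

```python
# Sketch of the change measures reported in table 4: percent change in a
# program's unit cost and schedule from its first full estimate to the
# latest estimate. The example program values are hypothetical.
def percent_change(first_full_estimate: float, latest_estimate: float) -> float:
    """Positive result indicates growth; negative indicates a reduction."""
    return (latest_estimate - first_full_estimate) / first_full_estimate * 100.0

# e.g., a hypothetical program whose unit cost rose from $80M to $140M
# and whose schedule grew from 60 to 90 months:
print(round(percent_change(80, 140)))  # 75 percent unit cost growth
print(round(percent_change(60, 90)))   # 50 percent schedule growth
```

A unit cost reduction, such as the 44 percent reduction at the low end of the range reported above, would appear as a negative result.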
As shown in table 5 and based on analysis of the 21 programs, in general MDAPs that implemented certain knowledge practices—thus reducing risk—before the start of system development and critical design review had better unit cost and schedule outcomes than those that did not. The first such practice—completing preliminary design review before system development start—means that a program has held a review that assesses the maturity of the preliminary design, supported by the results of activities including prototyping and critical technology demonstrations. The second practice—release of at least 90 percent of drawings by critical design review—refers to the design drawings released or deemed releasable to manufacturing by that point. Our prior work has shown that establishing a sound business case is essential to achieving better program outcomes. A solid, executable business case provides credible evidence that the warfighter’s needs are valid and can best be met with the chosen concept. The business case should also demonstrate that the chosen concept can be developed and produced within existing resources such as technologies, design knowledge, funding, and time. At the heart of a business case is a knowledge-based approach, in which knowledge supplants risk over time. Establishing a business case calls for a realistic assessment of risks and costs; doing otherwise undermines the intent of the business case and invites failure. Over the years, we have identified a number of factors that undermine business cases and drive cost and schedule overruns, several of which are illustrated in figure 5. Undesirable outcomes such as cost and schedule growth reflect decisions made to move forward with programs before the knowledge needed to reduce risk and make those decisions is sufficient. 
For example, we have previously found that the majority of cost growth occurs after production start, which may be a sign that programs are entering production without attaining key knowledge about technology maturity, design stability, and production readiness in preceding phases of development. The primary consequences of risk are often more time and money, and these consequences flow through the acquisition phases, with unplanned overlap—known as concurrency—in development, testing, and production. Our annual assessment of weapon systems has identified numerous examples of programs proceeding without sufficient knowledge to reduce risk, and their subsequent cost and schedule growth. These examples have included the following from among the 21 MDAPs reviewed in this report: The F-35 program started development without a match between resources and requirements and without a stable design. Critical technologies were immature, development and production occurred concurrently, and critical deficiencies were still not resolved well into production. As of May 2019, the program had experienced unit cost growth of 75 percent and schedule growth of 35 percent since its first full estimate in October 2001. The MQ-4C program did not achieve technology maturity or design stability prior to development start and critical design review, respectively, and developmental challenges delayed production start. As of May 2019, the program had experienced unit cost growth of 10 percent and schedule growth of 70 percent since its first full estimate in February 2009. The CH-53K program failed to demonstrate technology and design maturity at appropriate points earlier in system development. As of May 2019, the program had experienced unit cost growth of 21 percent and schedule growth of 60 percent since its first full estimate in December 2005. 
A year after the production decision for the Ground/Air Task Oriented Radar program, the Marine Corps revised the program’s reliability requirements in response to an expert panel finding that the existing requirements did not reflect operational needs, contributing to delayed full-rate production. As of May 2019, the program had experienced unit cost growth of 168 percent and schedule growth of 146 percent since its first full estimate in August 2005. We have identified and recommended solutions to these issues, including that MDAPs establish firm and feasible requirements, mature technologies, incremental acquisition approaches, and realistic cost estimates. While DOD has agreed with most of our recommendations in these areas, it has not always implemented them. As we noted in our most recent High Risk List report, as of November 2018, 88 recommendations related to DOD weapon systems acquisition remained open. Furthermore, while we had previously reported better cost performance on newer programs initiated after implementation of major acquisition reforms in 2010, more recently we found cost growth on those programs. We attributed the deteriorating performance of newer programs to the inconsistent implementation of knowledge-based acquisition practices, as the negative effects of entering development with insufficient knowledge cascade throughout the acquisition cycle.

Peer Review Change in 2019 Reduced a Means for Sharing Information about Contract Choice across DOD

In August 2019, DPC announced that it would no longer conduct mandatory peer reviews for competitive procurements above $1 billion, except for the small number of MDAPs for which USD(A&S) remains milestone decision authority, and other programs of special interest to USD(A&S). As part of the same announcement, DPC stated that it planned to continue to perform peer reviews for noncompetitive procurements of $500 million or more.
DPC officials expect that the procurements no longer covered by DPC’s peer review will instead be covered by the military departments’ own review processes, which already address competitive procurements up to $1 billion. While these review processes exist within the military departments, there is not an active mechanism for sharing across the departments any best practices and lessons learned—including about contract choice—found in the course of the reviews. DPC does not currently have plans to address the reduced potential for information sharing resulting from this change. Figure 6 depicts key developments related to the DPC peer reviews since their establishment in 2008, including the last update to an online compendium—a tool designed to share best practices, lessons learned, and recommendations from peer reviews across DOD—in 2013. According to DPC officials, updates to the compendium stopped as personnel became more familiar with the peer review process. They also noted that the change to peer reviews in 2019 resulted from resource constraints and staff reductions associated with recent acquisition reforms. The officials expect this change to reduce the number of DPC peer reviews by half to approximately 50 per year, consisting primarily of the reviews for noncompetitive procurements of $500 million or more. The peer review process was established with the following objectives:

1. to ensure that contracting officers across DOD consistently and appropriately implement policies and regulations;

2. to continue to improve the quality of contracting processes across DOD; and

3. to facilitate cross-sharing of best practices and lessons learned across DOD.

In support of this third objective, procedures for conducting peer reviews stated that the predecessor office to DPC would look for common trends and issues to be shared with the broader DOD contracting community, and maintain information about best practices and lessons learned on its website.
This public website currently houses the online compendium, although, as noted above, the last update was in 2013. Contracting officials we met with noted the value of being able to learn from the experiences of officials in other military departments through peer reviews. For example, contracting officials on an Air Force program that had a peer review involving Navy officials stated that lessons shared by those officials reduced the time it took to subsequently execute a contract. Officials from across the military departments cited benefits that resulted from these opportunities to learn from the real-world experience of peers across DOD, including the ability to share contracting information and expertise, review cost-sharing arrangements, and recalibrate contracting decisions. The online compendium is a spreadsheet with a row for each example of feedback, with the program and officials concerned kept anonymous. Columns include the category of feedback (e.g., source selection, terms and conditions), the type of feedback (e.g., recommendation, lesson learned, best practice), and the phase of review (e.g., issuance of the solicitation). Our analysis of the compendium found that it captures practices and recommendations related to contract type, as illustrated by the following examples:

Use of incentives: Consider development of cost and performance incentives, rather than use of an award fee.

Different contract type: Reconsider plan to award a fixed-price-incentive contract, given historical use of a cost-plus-incentive-fee arrangement under which contractor delivered at or around target cost.

Source selection: Throughout solicitation for an award combining firm-fixed-price and cost-type line items, tell offerors what they are expected to provide and how they will be evaluated, and document that evaluation occurred in this exact way.
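Given the row-per-feedback layout described above, such a compendium can be queried programmatically. A sketch in which the column names mirror those in the text but the rows themselves are invented examples, not actual compendium entries:

```python
# Sketch of querying a compendium with a row per feedback example. Column
# names mirror those described in the report; the rows are invented.
compendium = [
    {"category": "source selection", "type": "recommendation",
     "phase": "issuance of the solicitation",
     "feedback": "Tell offerors how they will be evaluated."},
    {"category": "terms and conditions", "type": "lesson learned",
     "phase": "contract award",
     "feedback": "Consider cost and performance incentives over an award fee."},
]

def by_category(rows: list, category: str) -> list:
    """Return all feedback rows tagged with the given category."""
    return [row for row in rows if row["category"] == category]

for row in by_category(compendium, "source selection"):
    print(row["type"], "-", row["feedback"])
```

Keeping the rows anonymous but filterable by category, type, and phase is what lets contracting officials in one military department find lessons recorded by another.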
Officials from the military departments confirmed that they are aware that they will now be expected to perform the reviews that DPC previously conducted. They have taken steps to adjust procedures accordingly, including updating their acquisition regulations as necessary. However, DPC does not currently have plans to encourage sharing of findings from military department-level reviews across the departments. For example, there are no plans to solicit updates to the online compendium or a similar centralized resource. USD(A&S) is responsible for improving acquisition results—including cost, schedule, and performance—by gathering and distributing data, best practices, and lessons learned across the military departments. Without a centralized resource for sharing findings, and as most reviews transition to the military departments, it will become more difficult for USD(A&S) to identify contracting trends across DOD and perform this assigned role. An updated compendium or other centralized resource could help contracting officials continue to learn from the experiences of peers across DOD—including when acquiring similar platforms and from similar contractors—by exposing them to good practices for structuring contracts and prompting consideration of alternative contract types.

Conclusions

With DPC conducting fewer peer reviews and no updates to the compendium since 2013, contracting officials might not have insight into how other programs across DOD structure contracts. As the reviews will now primarily occur within the military departments, these officials could lose exposure to alternative contracting approaches suitable for their programs. A centralized resource such as the compendium takes on a new significance as a means for sharing information between the military departments as they proceed with their own peer reviews.
USD(A&S) remains well-positioned to facilitate information exchange and contribute to positive program outcomes by requiring the military departments to share the findings of their peer reviews. Recommendation for Executive Action The Under Secretary of Defense for Acquisition and Sustainment should establish procedures requiring the military departments to collect and share findings from their peer reviews of MDAP contracting approaches— including choice of contract type—such as by updating the existing online compendium of best practices and lessons learned as they complete their reviews. Agency Comments and Our Evaluation We provided a draft of this report to DOD for review and comment. DOD concurred with our recommendation and provided written comments, which are reprinted in appendix II. DOD also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Secretary of Defense. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or oakleys@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Appendix I: Objectives, Scope, and Methodology This report addresses: (1) the extent to which the Department of Defense (DOD) uses cost-type contracts for major defense acquisition programs (MDAP); (2) how DOD chooses among cost-type and other contract types for MDAPs and monitors their cost and schedule performance; (3) the range of cost and schedule outcomes across MDAPs that used cost-type contracts; and (4) the extent to which DOD shares information about choosing MDAP contract types across the military departments. 
To assess the extent to which DOD uses cost-type contracts for MDAPs, we analyzed Federal Procurement Data System-Next Generation (FPDS-NG) data regarding obligations by contract type from fiscal year 2011 through fiscal year 2019 on contracts for programs in DOD’s MDAP portfolio awarded from fiscal year 2010 through fiscal year 2018. These data reflect programs that were part of DOD’s MDAP portfolio and contracts that were reported in Selected Acquisition Reports at any point during this period. The basic types of contracts may be used in combination, with both fixed-price-type and cost-type contract line item numbers, unless otherwise prohibited. Per the Defense Federal Acquisition Regulation Supplement (DFARS) Procedures, Guidance, and Information, when entering contract type information into FPDS-NG, the data entrant is to choose the contract type that is applicable to the predominant amount of the contract action, based on the value of the line items; the selected contract type automatically populates any subsequent contract action reports for modifications. We aggregated obligations on orders under indefinite-delivery contracts and basic ordering agreements by contract type for each fiscal year. We used the Defense Acquisition Management Information Retrieval (DAMIR) system to identify those contracts reported in Selected Acquisition Reports for programs in the MDAP portfolio awarded from fiscal year 2010 through fiscal year 2018. Our dataset includes only obligations on MDAP contracts awarded since fiscal year 2010 due to problems identified in a prior GAO report regarding how data on contract types were reported in FPDS-NG for contracts awarded prior to that date. Specifically, prior to fiscal year 2010, data entrants could select the contract types “combination” and “other”, or not enter a contract type at all.
The Office of Federal Procurement Policy subsequently removed those contract types as options in FPDS-NG, and made completion of the field mandatory. Contracts retain their original designation in FPDS-NG when modifications to those contracts are subsequently made. Therefore, in order to avoid including contracts coded as “combination” or “other”, we limited our analysis to contracts awarded since fiscal year 2010. We assessed data reliability by comparing the contract types identified in FPDS-NG for each contract with information on contract types contained in DAMIR and in another DOD database—Earned Value Management-Central Repository—and determined the data were sufficiently reliable for the purposes of analyzing the extent of DOD’s use of cost-type contracts for MDAPs. Contractors for programs with earned value management (EVM) reporting requirements submit EVM data to Earned Value Management-Central Repository. EVM reporting is generally required for cost-type or incentive contracts valued at $20 million or more. We included obligations associated with contract types contained in FPDS-NG if they matched contract types contained in either DAMIR or Earned Value Management-Central Repository. When there was no match with either source, we reviewed the narrative discussion of contract types contained in Selected Acquisition Reports in order to determine the most appropriate contract type with which to label those obligations. To assess how DOD chooses among cost-type and other contract types for MDAPs and monitors their cost and schedule performance, we reviewed relevant statutes, regulations, and policies.
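The data reliability check just described—accept the FPDS-NG contract type only when it matches DAMIR or Earned Value Management-Central Repository, otherwise fall back to manual review of the Selected Acquisition Report narrative—can be sketched as follows. The record layout and field names are illustrative assumptions, not the actual database schemas.

```python
# Sketch of the contract-type cross-check and aggregation described
# above. Field names ("fy", "fpds", "damir", "evm", "obligated") are
# hypothetical; the real FPDS-NG, DAMIR, and EVM-CR schemas differ.
from collections import defaultdict

def classify(fpds_type, damir_type, evm_type):
    """Accept the FPDS-NG contract type only if it matches DAMIR or the
    EVM-Central Repository; otherwise flag the obligation for manual
    review of the Selected Acquisition Report narrative."""
    if fpds_type in (damir_type, evm_type):
        return fpds_type
    return "review SAR narrative"

def aggregate(records):
    """Sum obligations by (fiscal year, contract type)."""
    totals = defaultdict(float)
    for r in records:
        totals[(r["fy"], classify(r["fpds"], r["damir"], r["evm"]))] += r["obligated"]
    return dict(totals)

records = [
    {"fy": 2015, "fpds": "cost-plus-incentive-fee",
     "damir": "cost-plus-incentive-fee", "evm": None, "obligated": 10.0},
    {"fy": 2015, "fpds": "firm-fixed-price",
     "damir": "cost-plus-award-fee", "evm": "cost-plus-award-fee",
     "obligated": 5.0},
]
print(aggregate(records))
```

The second record illustrates the fallback path: because the FPDS-NG type agrees with neither other source, its obligations are set aside for narrative review rather than counted under a possibly misreported type.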
We analyzed documentation and interviewed officials regarding contract choice and monitoring from the following DOD and military department offices and selected contracting commands:

Under Secretary of Defense for Acquisition and Sustainment
Acquisition, Analytics and Policy
Defense Pricing and Contracting
Cost Assessment and Program Evaluation
Defense Contract Management Agency
Deputy Assistant Secretary of the Air Force for Contracting
Deputy Assistant Secretary of the Army for Procurement
Deputy Assistant Secretary of the Navy for Procurement
Air Force Materiel Command
Space and Missile Systems Center
Marine Corps Systems Command
Naval Air Systems Command
Naval Information Warfare Systems Command
Naval Sea Systems Command

As illustrative examples of contract choice and monitoring under a variety of conditions, including different military departments and appropriation types, we also selected a nongeneralizable sample of seven MDAP contracts. Specifically, we selected for each of the three military departments the most recently awarded cost-type MDAP Research Development, Test, and Evaluation contract and the most recently awarded cost-type MDAP Procurement contract as reported in the December 2017 Selected Acquisition Reports. We also selected the most recently awarded cost-type MDAP contract for the Marine Corps. Table 6 notes the selected MDAPs and contracts, as well as the milestone decision authority responsible for approving the acquisition strategy associated with that contract. We interviewed contracting officials for these programs and reviewed key documentation such as acquisition strategies relating to each one of these contracts. We also reviewed our past work related to contract types used for MDAPs, including DOD’s use of incentive contracts and the Navy’s use of fixed-price-incentive contracts for shipbuilding.
To assess the range of cost and schedule outcomes across MDAPs that used cost-type contracts, we identified the contract types as reported in DAMIR or GAO’s April 2018 and May 2019 annual assessments of weapon systems for 21 non-shipbuilding MDAPs that as of January 2019 had completed system development, held a critical design review, and started production. Table 7 notes the 21 MDAPs, as well as the dates of their first full estimate, and their most recent individual assessment by GAO as of May 2019. We compared the contract types reported in DAMIR or GAO’s annual assessments of weapon systems with the percentage unit cost and schedule change between the first full estimate and our most recent in-depth assessment of each program as of May 2019. Since 2018, as part of our annual assessment of weapon systems, we have conducted a statistical analysis evaluating programs’ completion of knowledge-based acquisition practices and corresponding performance outcomes. Our report cites results of this analysis as it pertains to these 21 MDAPs. We reviewed prior GAO work on the drivers of cost and schedule overruns for MDAPs. To assess the extent to which DOD shares information about choosing MDAP contract types across the military departments, we reviewed DOD and military department documentation related to contracting review processes. We compared this information to DOD memorandums establishing practices and policies for sharing of acquisition information across DOD. We also interviewed officials from offices including Defense Pricing and Contracting within the Office of the Under Secretary of Defense for Acquisition and Sustainment, and the cognizant Deputy Assistant Secretaries of the military departments. We conducted this performance audit from February 2019 to May 2020 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Comments from the Department of Defense Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments Shelby S. Oakley, (202) 512-4841 or oakleys@gao.gov In addition to the contact named above, Raj Chitikila (Assistant Director), Robert Bullock, Jenny Chanley, Jasmina Clyburn, Andrea Evans, Lori Fields, Suellen Foth, Kurt Gurka, Stephanie Gustafson, and Grace Haskin made key contributions to this report.
Why GAO Did This Study When acquiring major weapon systems, DOD can choose between several different contract types. One of these is cost-type, under which DOD pays allowable costs incurred by the contractor. Historically, DOD has struggled to manage its major acquisition programs. The result has been billions in cost growth and schedule delays in providing systems to the warfighter. GAO was asked to review DOD's use of cost-type contracts for its major acquisition programs. This report addresses the use of and range of cost and schedule outcomes for cost-type contracts for major weapon system acquisitions, and how military departments share information about contract choice. GAO analyzed government contracting data on obligations by contract type for fiscal years 2011 through 2019 on contracts in DOD's portfolio of major acquisition programs. GAO compared contract types for 21 major acquisition programs with their cost and schedule outcomes; reviewed seven recently awarded cost-type contracts for major acquisition programs, selected to reflect the different military departments and appropriation types; and interviewed contracting officials. What GAO Found To acquire new major weapon systems, such as aircraft, ships, and satellites, the Department of Defense (DOD) uses a variety of contract types including cost-type contracts, under which the government assumes more risk. DOD is required to document its risk assessment in choosing contract types for major programs. Risks assessed can include use of new technologies and stability of system costs and requirements. Once awarded, cost-type contracts have additional reporting requirements to help monitoring of cost and schedule performance. GAO analyzed program cost and schedule outcomes for 21 major acquisition programs, and did not find a clear relationship between these outcomes and contract types used. 
However, programs that completed certain knowledge-based acquisition practices generally had better cost and schedule outcomes than programs that did not implement those practices. These practices include completing preliminary design review before the start of system development and releasing at least 90 percent of design drawings by critical design review. From fiscal years 2011 through 2019, DOD used cost-type contracts for a small proportion—under one-fifth on average—of obligations for its major acquisition programs. This proportion varied across the military departments (see figure). A change to DOD's peer review process for its largest contract awards reduced a means for sharing best practices and lessons learned about contract choice across the military departments. In 2019, the Office of the Secretary of Defense announced the end of its peer reviews for most competitive procurements above $1 billion. While these contracts will instead be reviewed through the military departments' own processes, DOD currently does not require the departments to collect and share their findings. DOD has an online compendium of peer review findings; however, this was last updated in 2013. Using an existing centralized resource such as the compendium could help contracting officials learn from the experiences of peers across DOD by exposing them to good practices for structuring contracts. What GAO Recommends GAO recommends that DOD establish procedures requiring the military departments to collect and share findings from their reviews of contracting approaches, such as by updating the existing online compendium. DOD agreed with GAO's recommendation.
Background Extreme Weather and Climate Change Effects According to the National Research Council, although the exact details cannot be predicted with certainty, climate change poses serious risks to many of the physical and ecological systems on which society depends. Moreover, according to key scientific assessments, the effects and costs of extreme weather events such as floods and droughts will increase in significance as what are considered rare events become more common and intense because of climate change. According to the National Academies of Sciences, Engineering, and Medicine, extreme weather events are directly traceable to loss of life, rising food and energy prices, increasing costs of disaster relief and insurance, fluctuations in property values, and concerns about national security. Table 1 shows seven effects commonly associated with climate change that DOD has documented. Sources of Climate Information and Projections According to a 2010 National Research Council report on making informed decisions about climate change and our October 2009 report on climate change adaptation, most decision makers need a basic set of information to understand and make choices about how to adapt to the effects of climate change. This set of information includes information and analysis about observed climate conditions, information about observed climate effects and vulnerabilities, and projections of what climate change might mean for the local area. In November 2015, we found that in order for climate information to be useful, it must be tailored to meet the needs of each decision maker, such as an engineer responsible for building a bridge in a specific location, a county planner responsible for managing development over a larger region, or a federal official managing a national-scale program. 
Agencies across the federal government collect and manage many types of climate information, including observational records from satellites and weather monitoring stations on temperature and precipitation, among other things; projections from complex climate models; and tools to make this information more meaningful to decision makers. For example, the Fourth National Climate Assessment, completed in November 2018 by the U.S. Global Change Research Program, references various sources of climate information, including projected temperature and precipitation data. Likewise, in 2016, a multi-agency group led by the Strategic Environmental Research and Development Program (SERDP) developed a report and accompanying database of future sea level projections and extreme water levels, which as of May 2019 contained sea level change projections for 1,813 DOD sites worldwide. Climate projections are typically a range of possible future scenarios for particular time frames. Multiple future scenarios allow for planners and engineers to see a range of possible conditions that could occur at various points in time. For example, a planner or engineer could consider four different future scenarios occurring over the course of 20, 40, or 60 years or over the service life of the project being designed. Figure 1 shows an example of sea level change projections provided by the National Oceanic and Atmospheric Administration (NOAA). Specifically, the chart shows historical mean sea levels and multiple scenarios of projected relative sea level rise in Norfolk, Virginia. The chart shows the historical annual mean sea level from 1960 to 2018 through the bold black line. The projections use 2000 as a starting point, and so overlap with the historical data. Relative sea level rise takes into account changes in land levels—in the Norfolk area the land is generally subsiding over time. 
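The scenario-based approach just described—evaluating a range of possible futures at, say, 20-, 40-, or 60-year horizons—can be sketched with simple curves. This is a minimal illustration assuming a quadratic rate-plus-acceleration form with made-up coefficients; it is not NOAA's methodology or its actual Norfolk projections.

```python
# Illustrative only: scenario curves of a generic quadratic form (a
# constant linear rate plus an acceleration term). The rates and
# accelerations below are hypothetical, not NOAA parameters.

def projected_rise_mm(years_since_2000, rate_mm_per_yr, accel_mm_per_yr2):
    """Relative sea level change since the 2000 baseline, in millimeters."""
    t = years_since_2000
    return rate_mm_per_yr * t + 0.5 * accel_mm_per_yr2 * t ** 2

# Hypothetical (linear rate mm/yr, acceleration mm/yr^2) per scenario.
scenarios = {"low": (3.0, 0.0), "intermediate": (3.0, 0.08), "high": (3.0, 0.2)}

for name, (rate, accel) in sorted(scenarios.items()):
    for horizon in (20, 40, 60):  # planning horizons in years
        rise = projected_rise_mm(horizon, rate, accel)
        print(f"{name}: +{rise:.0f} mm at 2000+{horizon} years")
```

Printing each scenario at several horizons mirrors how a planner might compare when a potential effect could occur under low versus high assumptions and pick a design threshold matching their risk tolerance.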
Each scenario is based on different assumptions about future greenhouse gas emissions, according to an official from NOAA’s National Ocean Service. Planners and engineers can use the multiple scenarios to evaluate when potential effects could occur and determine their risk tolerances to inform their planning or design choices. Figure 2 similarly shows the same historical mean sea levels at Norfolk, Virginia, as well as the very likely range of projections of future relative sea levels, according to the National Ocean Service. This chart shows the range of possibilities considered very likely—those between the low and intermediate scenarios in figure 1—according to an official from NOAA’s National Ocean Service. Installations’ Processes for Master Planning and Project Design Installation Master Planning Process Master planning for military installations involves the evaluation of factors affecting the present and future physical development and operation of a military installation. DOD requires all installations to develop master plans. DOD’s instruction on real property management states that plans must be based on a strategic assessment of the operational mission and expected use of the installation. The plans must cover at least a 10-year period and be updated every 5 years, or more often if necessary. The plans must include lists, by year, of all construction projects, major repair and sustainment projects, and restoration and modernization projects needed within the time period covered by the plan. Design Standards for Individual Facilities Projects Individual DOD facilities projects within installations must be designed in accordance with DOD’s facilities design standards, which are defined in the Unified Facilities Criteria. Unified Facilities Criteria are technical manuals and specifications used for planning, design, construction, maintenance, and operations of all DOD facilities projects. The U.S. 
Army Corps of Engineers, Naval Facilities Engineering Command, and the Air Force Civil Engineer Center are responsible for administering and updating the Unified Facilities Criteria. The Unified Facilities Criteria include a core group of 27 standards that apply to building systems found in most DOD facility construction projects, and include standards such as architecture, roofing, and civil engineering. Engineers and planners apply the criteria that are most appropriate for their individual facilities projects to their project proposals and designs. Table 2 shows excerpts from requirements and guidance to project designers in the Unified Facilities Criteria relevant to the consideration of climate.

Table 2. Excerpts from Unified Facilities Criteria Requirements and Guidance on Consideration of Climate

Consider site-specific, long-term, climate change impacts such as drought, flood, wind, and wildfire risks.

Knowing the probable wind speed and direction in a particular month can be helpful in construction and mission planning as well as in designing structures that experience severe wind-driven rain or drifting snow.

Pumps, piping, and equipment must be protected from the weather. In cold climates pumps and piping must be protected from freezing temperatures. The pump station building must comply with UFC 1-200-01, be constructed of noncombustible materials and meet applicable building standoff distances.

In new construction, the roof system selection is an integral part of the overall building design and must take into account interior building usage and climate. For example, the building can be designed to prevent outward moisture drive, support heavy roof systems (such as garden roofs or paver systems), or sloped for the desired durability (life cycle cost benefit) and aesthetic considerations.

Building shape, orientation, and design must utilize the site seasonal environmental factors to minimize annual facility energy use and to optimize daylighting. Coordinate building and glazing orientation and architectural shading with seasonal solar angles and prevailing winds to enhance energy performance of the building within the site-specific microclimate.

Streets, paved parking lots, roofs, and other impermeable surfaces allow no infiltration of runoff and provide little resistance to flow. Runoff draining from these surfaces can be highly concentrated and move at a velocity greater than runoff flowing over an unpaved surface. Soils must be protected from this erosive force, particularly at the edges of impermeable surfaces and soils.

Executive Order 11988 directs all Federal agencies to avoid floodplain development wherever there is a practicable alternative. When development within the floodplain is considered, evaluate alternative site locations to avoid or minimize adverse impacts to the floodplain. When mission needs require siting a building within or partially within the 100-year floodplain, indicate…the base flood elevation…and the minimum design flood elevation….

DOD Infrastructure Costs Associated with Extreme Weather and Climate Change Effects Extreme weather and climate change effects can damage infrastructure, requiring repairs and resulting in budgetary risks (i.e., costs) to DOD. While no individual weather event can be definitively linked to climate change, particular weather events can demonstrate the vulnerability of military facilities. For example, in October 2018, Hurricane Michael devastated Tyndall Air Force Base in Florida, shutting down most base operations until December; causing severe damage to the flight line, drone runway, and other base facilities including family housing; and destroying the base’s marina. The Air Force estimates that repairs at the base will cost about $3 billion and take 5 or more years to complete.
Camp Lejeune and Marine Corps Air Stations Cherry Point and New River in North Carolina sustained heavy damage to facilities, housing, and training locations from Hurricane Florence in September 2018. The Marine Corps estimates that the recovery from the hurricane damage will cost about $3.6 billion and take years to complete. In 2014, we reported that more frequent and more severe extreme weather events and climate change effects may result in increased fiscal exposure for DOD. In the same report, officials provided examples of costs associated with extreme weather and climate change effects at DOD facilities. For example, officials from a Navy shipyard we visited stated that the catastrophic damage that could result from the flooding of a submarine in dry dock could cause substantial repair costs. In 2017, we found that DOD installations overseas face operational and budgetary risks posed by weather events and climate change effects at the military services’ installations in each of DOD’s geographic combatant commands. We recommended that the Secretaries of the Army, Navy, and Air Force work with the Office of the Secretary of Defense to issue a requirement to their installations to systematically track the costs associated with extreme weather events and climate change effects. DOD did not concur with this recommendation. In its response, DOD stated that tracking impacts and costs associated with extreme weather is important, but that the science of attributing these events to a changing climate is not supported by previous GAO reports. DOD also stated that associating a single event with climate change is difficult and does not warrant the time and money expended in doing so. However, as we stated in our response to DOD’s comments, installations generally have the capability to track the costs associated with extreme weather events, which are projected to become more frequent and intense as a result of climate change. 
There is substantial budgetary risk resulting from weather effects associated with climate change, and these types of repairs are neither budgeted for nor clearly represented in the federal budget process. As of April 2019, the military departments have not implemented this recommendation. Some Installations Have Integrated Extreme Weather and Climate Considerations in Master Plans or Related Installation Planning Documents, but They Have Not Consistently Assessed Climate Risks or Used Climate Projections in These Plans Some Installations Have Integrated Extreme Weather and Climate Considerations into Their Master Plans or Related Installation Planning Documents Fifteen of the 23 installations we visited or contacted had integrated some considerations of extreme weather or climate change effects into their plans. For example, Langley Air Force Base, Virginia, partnered with the City of Hampton, Virginia, to study the effects of sea level rise. A 2018 addendum to the installation’s 2010 joint land use study with the City of Hampton outlined climate vulnerabilities and identified recommendations for actions to increase installation resilience. Separately, after sustaining damage from Hurricane Isabel in 2003, the installation required all new development to be constructed to a minimum elevation of 10.5 feet above sea level, higher than the flooding associated with the hurricane and one foot higher than the flooding anticipated from a storm with a 1-in-500 chance of occurring in any given year. As DOD noted in its January 2019 report to Congress on climate-related vulnerabilities, Joint Base Langley-Eustis, of which Langley Air Force Base is a part, has experienced 14 inches in relative sea level rise since 1930, due in part to land subsidence, and has experienced more frequent and severe flooding as a result. 
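A minimum-elevation rule like Langley's can be expressed as a simple screening check against a design threshold. This sketch uses the 10.5-foot minimum stated above; the function name, the optional fill/foundation parameter, and the example site elevations are hypothetical illustrations, not the installation's actual review process.

```python
# Sketch of an elevation screening rule like Langley Air Force Base's
# (all new development at or above 10.5 feet above sea level). Site
# elevations below are made up for illustration.

MIN_DESIGN_ELEVATION_FT = 10.5  # above sea level, per the rule in the text

def meets_elevation_rule(site_elevation_ft, fill_or_foundation_ft=0.0):
    """True if the finished elevation (existing grade plus any added
    fill or raised foundation) clears the design minimum."""
    return site_elevation_ft + fill_or_foundation_ft >= MIN_DESIGN_ELEVATION_FT

print(meets_elevation_rule(9.0))       # existing grade alone is too low
print(meets_elevation_rule(9.0, 2.0))  # a raised foundation clears the rule
```

Such a check makes the design choice explicit: a site that fails on existing grade can still comply by elevating the structure, which is how installations trade construction cost against flood risk.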
The 611th Civil Engineer Squadron, based at Joint Base Elmendorf-Richardson in Alaska, partnered with the University of Alaska, Anchorage, to develop site-specific predictive models of coastal erosion for two radar sites on the North Slope of Alaska. The squadron plans to use this information in the future to develop possible alternative facilities projects to address the erosion risks. Squadron officials told us they consulted with the military users of the radars to determine the length of time to plan for their continued use and that they intend to use this information to develop plans to address this coastal erosion. The North Slope radar sites are experiencing greater than anticipated coastal erosion rates, which have begun to threaten the infrastructure supporting the sites. Fort Irwin, California, in response to severe flash flooding in 2013 that caused loss of power and significant damage to base infrastructure, worked with the U.S. Army Corps of Engineers to develop a plan to improve stormwater drainage. The 2014 plan recommended a series of infrastructure projects, some of which Fort Irwin has implemented; others remain to be implemented, depending on the availability of funding. Figure 3 depicts flooding damage in 2013 at Fort Irwin and a stormwater diversion channel subsequently built by the installation. The flash flooding on the installation caused damage to roads and other facilities throughout the installation, according to officials. The installation subsequently raised berms and built other structures, such as the diversion channel shown in figure 3, to divert stormwater from installation facilities. Marine Corps Recruit Depot Parris Island, South Carolina, reported that the installation plans to award a contract to study sea level rise at the installation and incorporate the results into the next iteration of its master plan.
The installation stated that incorporating the study’s results is included in the scope of work for the contract that has been awarded for the master plan update. Naval Station Norfolk, Virginia, noted in its 2017 master plan that climate change and sea level rise are expected to exacerbate effects to the installation from tidal flooding and storm surge, increasing risks to installation assets and capabilities. The plan established a goal of identifying measures that could minimize the effect of sea level rise on the installation. With the majority of the installation near mean sea level, Naval Station Norfolk is vulnerable to frequent flooding that is disruptive to operations. Figure 4 depicts flooding at Naval Station Norfolk. Installation officials told us that such floods can interfere with traffic on base, thus reducing the ability of those working on the installation to transit within, to, and from the base. Naval Base San Diego, California, noted in its most recent master plan that local climate change effects include water and energy shortages, loss of beaches and coastal property, and higher average temperatures, among others. The plan also stated that Naval Base San Diego should be funded to conduct a study to determine installation-specific effects of sea level rise. Navy Region Southwest subsequently partnered with the Port of San Diego to study local effects of sea level rise, which installation officials said will help them understand the effects of sea level rise on the base. Camp Lejeune, North Carolina, participated in a study of the effects of sea level rise on the installation and on certain other DOD installations in North Carolina and Florida. An installation official stated that installation officials have used the results of the study to make planning decisions, in particular by feeding the study data into the installation’s mapping of potential flood zones. 
The 10-year study, which concluded in 2017, was funded by SERDP and was based at Camp Lejeune to, among other things, understand the effects of climate change at Camp Lejeune. Camp Lejeune officials and one of the scientists involved in the study told us that installation officials have used the study’s results to make decisions about where to site buildings so as to take into account the possible future condition of marshes on the base. However, 8 of the 23 installations we visited or contacted had not integrated considerations of extreme weather or climate change effects into their master plans or related installation planning documents. For example, Joint Base Pearl Harbor Hickam, Hawaii, did not consider extreme weather and climate change effects in its most recent master plan, although it is located in an area that has been subject to tropical storms and where, according to projections in the DOD database of sea level change scenarios, further sea level rise is anticipated. Specifically, under the highest scenario in the database, sea level at Naval Station Pearl Harbor, part of the joint base, could rise more than 3 feet by 2065. The lowest elevation point on the base is 0.6 feet below sea level. The installation stated that it plans to incorporate the effects of climate change into the next update to its facilities master plan. Pearl Harbor Naval Shipyard, Hawaii, did not consider extreme weather or climate change effects in its most recent master plan, although it is co-located with Joint Base Pearl Harbor Hickam and therefore shares the same weather and climate conditions noted previously. Fort Wainwright, Alaska, officials told us they had not considered climate change as part of the installation’s master planning. Officials noted that the majority of the base is on thaw-stable permafrost that would be unlikely to be significantly affected by rising temperatures, but some areas of the base are on less stable permafrost. 
DOD noted in its January 2019 report to Congress that thawing permafrost can decrease the structural stability of buildings and other infrastructure that is built on it. Camp Pendleton, California, officials told us that although they are aware of a variety of climate-related challenges to their installation and have taken or plan to take some steps to address them, an example of which we discuss later in this report, the installation has not yet considered extreme weather and climate change effects in its master plan. The officials stated that they are still planning based on historical conditions rather than considering possible future conditions. DOD’s Unified Facilities Criteria standard specific to master planning states that where changing external conditions affect planning decisions, master planners should seek to understand, monitor, and adapt to these changes, including changes in climatic conditions such as temperature, rainfall patterns, storm frequency and intensity, and water levels. DOD’s directive on climate change adaptation further states that military departments should integrate climate change considerations into their plans. The directive also states that the Assistant Secretary of Defense for Energy, Installations, and Environment should consider climate change adaptation and resilience in the installation planning process, including the effects of climate change on both built and natural infrastructure. Our findings based on the 23 installations we reviewed for this report are consistent with our prior reports on extreme weather and climate change effects at military installations. Specifically, installations have not consistently integrated these considerations into their master plans or related installation planning documents. 
In May 2014, we reported that some domestic installations had integrated considerations of changing climatic conditions into their installation planning documents, but DOD had not provided key information—such as how to use climate change projections—to help ensure that efficient and consistent actions would be taken across installations. We recommended that DOD further clarify the planning actions that should be taken in installation master plans to account for climate change, to include further information about changes in applicable building codes and design standards that account for potential climate change effects and further information about potential projected climate change effects on individual installations. However, as of January 2019, DOD had not fully implemented this recommendation. For example, as we discuss later in this report, DOD’s updates to its facilities design standards lacked guidance on the use of climate projections. DOD also had not provided information on a range of potential effects of climate change on individual installations. DOD has taken some positive steps in this area, such as making available to the military services a database of sea level change scenarios for 1,774 DOD sites worldwide. However, DOD has not provided other specific types of climate projections, which we discuss in more depth later in this report. Moreover, in November 2017 we reported that about a third of the installations in our sample of overseas installations had integrated climate change adaptation into their installation plans, but the lack of key guidance and updated design standards to reflect climate change concerns hampered their ability to consistently incorporate climate change adaptation into their plans. We recommended, among other things, that the military departments integrate climate change data and projections into DOD’s facilities criteria and periodically revise those standards based on any new projections, as appropriate. 
DOD partially concurred, and as of January 2019, an official from the Office of the Assistant Secretary of Defense for Sustainment stated that the office was continuing to work with the military departments to evaluate how to effectively translate the latest climate data into a form usable by installation planners and facilities project designers. Based on our findings for this review, we continue to believe that DOD should take all necessary steps to implement these recommendations.

Installations Have Not Fully Assessed Risks from Extreme Weather and Climate Change Effects in Their Master Plans and Related Installation Planning Documents

While 15 of the 23 installations we visited or contacted had integrated some consideration of extreme weather or climate change effects into their planning documents, only two of these installations had taken steps to fully assess the weather and climate risks to the installation or develop plans to address identified risks. DOD has taken some broad actions to assess risk to installations from extreme weather and climate change effects. For example, in January 2018, DOD issued a report to Congress on the results of its survey of installations on the extent to which they faced a variety of extreme weather or climate effects. However, the survey responses constituted a preliminary assessment and were based on installations’ reporting of negative effects they had already experienced from extreme weather effects, rather than assessments of all future vulnerabilities based on climate projections. DOD noted that the information in the survey responses is highly qualitative and is best used as an initial indicator of where a more in-depth assessment may be warranted.
However, except for two of the installations in our sample, the installations’ master plans and related installation planning documents did not (1) identify a range of possible extreme weather events and climate change effects that could affect the installation, (2) assess the likelihood of each event occurring and the possible effect on the installation, and (3) identify potential responses to these events. For example, Naval Air Station Key West, Florida, included discussion of the effects of sea level rise and storm surge on the installation in its master plan, as well as steps it could take to mitigate these effects. However, although the installation experienced drought conditions rated severe in 2011 and extreme in 2015, its master plan does not discuss the effects of drought on the installation; according to a DOD report to Congress, drought can pose significant risks to an installation, including implications for base infrastructure. All of the Air Force installations in our sample rated their degree of vulnerability to a range of climatic conditions—such as flood, temperature rise, and precipitation pattern changes—in their master plans, thereby identifying a range of possible climate events and the likelihood of each event. However, of those installations that identified a range of possible extreme weather and climate change effects that could affect the installation, most did not consistently identify potential responses to these events. The two exceptions—Eglin Air Force Base, Florida, and Joint Base Langley-Eustis, Virginia—took the additional step of identifying possible actions to address these climate events. For example, Eglin Air Force Base rated itself as having a high vulnerability to storm surge, but a low vulnerability to rising temperatures, and identified steps the installation could take in facilities planning and design to mitigate the identified risks.
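The three elements described above (a range of possible events, a likelihood and effect rating for each, and candidate responses) can be illustrated as a simple risk screening table. The events, ratings, and responses below are hypothetical and are not drawn from any installation's plan or from DOD's methodology; this is only a sketch of the structure of such an assessment.

```python
# Hypothetical risk screen: (event, likelihood 1-5, effect 1-5, candidate response).
# All events, scores, and responses are illustrative only.
hazards = [
    ("storm surge",      4, 5, "floodwalls; elevate critical utilities"),
    ("drought",          2, 3, "water-use restrictions; add storage capacity"),
    ("permafrost thaw",  1, 4, "site new construction on thaw-stable ground"),
    ("temperature rise", 3, 2, "upgrade HVAC capacity in new designs"),
]

def risk_score(likelihood, effect):
    """A common screening heuristic: risk = likelihood x effect."""
    return likelihood * effect

# Rank hazards so planners can prioritize responses to the highest risks.
ranked = sorted(hazards, key=lambda h: risk_score(h[1], h[2]), reverse=True)
for event, lik, eff, response in ranked:
    print(f"{event:16} risk={risk_score(lik, eff):2}  response: {response}")
```

Ranking by a combined likelihood-and-effect score is one conventional way to turn the three elements into a prioritized list of responses, which is the step most installations in the sample had not taken.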
The DOD directive on climate adaptation states that military departments should assess and manage risks to both built and natural infrastructure, including changes as appropriate to installation master planning, and should assess, incorporate, and manage the risks and effects of altered operating environments on capabilities and capacity, including basing. Moreover, Standards for Internal Control in the Federal Government states that management should identify, analyze, and respond to risks related to achieving defined objectives. Risk assessment is the identification and analysis of risks related to achieving defined objectives in order to form a basis for designing responses to these risks. Our prior work has shown that assessing risks includes assessing both the likelihood of an event occurring and the effect the event would have. Agency leaders and subject matter experts should assess each risk by assigning the likelihood of the event’s occurrence and the potential effect if the event occurs. Despite a DOD directive requiring that the military departments assess and manage risks to both built and natural infrastructure, DOD has not required, in the Unified Facilities Criteria standard that guides master planning, that installations assess risks posed by extreme weather and climate change effects as part of their master plans or develop plans to address identified risks. Officials in the Office of the Assistant Secretary of Defense for Sustainment acknowledged that the Unified Facilities Criteria standard on master planning does not explicitly require a risk assessment specifically for extreme weather or climate change as part of the master planning process. Because installations have not consistently assessed the risks from extreme weather and climate change effects as part of their master plans or identified potential responses to identified risks, they may formulate plans and make planning decisions without consideration of those risks.
By assessing and developing actions to address these risks in their master plans, installations could better protect their facilities against greater than anticipated damage or degradation as a result of extreme weather events or climate change effects.

Installations Have Not Consistently Used Climate Projections in Developing Master Plans

Eight of the 23 installations we visited or contacted, as well as the Air Force unit responsible for the North Slope radar facilities, had made some use of climate projections to incorporate consideration of extreme weather and climate change effects into their master plans or related installation planning documents. For example, as noted previously, the 611th Civil Engineer Squadron was developing its own site-specific projections of coastal erosion affecting the North Slope radar sites in Alaska, and Norfolk Naval Shipyard considered local sea level rise projections in a study on mitigating flooding at its docks. However, officials from 11 of the 23 installations in our sample—including some from installations that had made some use of climate projections—cited the need for additional guidance from DOD or their military department headquarters on which projections to use in planning or on how to use them. This is consistent with our prior findings on DOD’s installation-level efforts to increase climate resilience. Our May 2014 report noted that installation officials told us they did not have the installation-level climate data from their military departments or from other DOD sources that they would need to understand the potential effects of climate change on their installations. We recommended, among other things, that DOD provide further direction on planning actions to account for climate change, including information about changes in applicable building codes and design standards and the projected effects of climate change on individual installations.
DOD concurred but as of January 2019 had not fully implemented this recommendation, as noted previously. In December 2018, an official in the Office of the Assistant Secretary of Defense for Sustainment stated that DOD plans to develop a policy on the use of sea level rise projections by sometime in 2019 and eventually to incorporate these projections into the Unified Facilities Criteria. However, DOD has no current timetable for incorporating guidance on the use of other types of climate projections into its Unified Facilities Criteria. The official stated that the department is working toward eventually incorporating the use of other types of climate projections into guidance but that these types of projections would have to be vetted by DOD subject matter experts and approved prior to adoption. DOD intends to move in this direction, according to the official, but DOD has not yet developed a defined process for evaluating and incorporating the use of additional climate projections into guidance. Our prior work has found that using the best available climate information, including forward-looking projections, can help an organization to manage climate-related risks. Until November 2018, DOD’s Unified Facilities Criteria on master planning stated that changes in climate conditions are to be determined from reliable and authorized sources of existing data but that to anticipate conditions during the design life of existing or planned new facilities and infrastructure, installations could also consider climate projections from reliable and authorized sources, such as, among others, the U.S. Global Change Research Program and the National Climate Assessment. In November 2018, in response to a statutory requirement in the John S. McCain National Defense Authorization Act for Fiscal Year 2019, DOD updated the Unified Facilities Criteria on master planning to specify that climate projections from reliable and authorized sources, such as the U.S.
Global Change Research Program and the National Climate Assessment, shall be considered and incorporated into military construction designs and modifications. DOD guidance states that the Assistant Secretary of Defense for Energy, Installations, and Environment provides guidance and direction on relevant technologies, engineering standards, tools, development and use of scenarios, and other approaches to enable prudent climate change adaptation and resilience. The guidance also states that military departments are to leverage authoritative environmental prediction sources for appropriate data and analysis products to assess the effects of weather and climate. Installations have not consistently used climate projections in their master plans because DOD has not provided detailed guidance on how to do so. Simply updating the language of the Unified Facilities Criteria on master planning in November 2018 to require the use of climate projections does not provide guidance to installations on how to use climate projections, such as what future time periods to consider and how to incorporate projections involving multiple future scenarios, nor does it identify the specific types of projections to use. The absence of guidance has hindered the ability of some installations to effectively apply the best available climate projections to their installation master planning. If they do not use climate projections in their master plans, installations risk failing to plan for changing climate and weather conditions and, as a result, could expose their facilities to greater risk of damage or degradation from extreme weather events and climate change effects. Incorporating such data into planning would help installation master planners better anticipate changing climate and weather conditions and increase the effectiveness of the installation’s long-term investments in its facilities.
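To illustrate the kind of questions such guidance would need to answer (which scenario to design to, and at what future year), the sketch below shows one common, conservative convention: design to the highest plausible scenario at the end of the facility's planning horizon. The scenario names, projection values, and selection rule are hypothetical illustrations, not DOD policy or actual projections.

```python
# Hypothetical sea level rise projections (feet above current mean sea level)
# for one site, by scenario and future year. Values are illustrative only.
projections = {
    "low":          {2035: 0.3, 2065: 0.8, 2100: 1.5},
    "intermediate": {2035: 0.5, 2065: 1.6, 2100: 3.2},
    "high":         {2035: 0.8, 2065: 3.1, 2100: 6.3},
}

def design_value(projections, facility_life_end, scenario_policy="high"):
    """Pick the projection at the first available year at or beyond the end
    of the facility's service life, under the chosen scenario policy."""
    years = sorted(projections[scenario_policy])
    # Fall back to the last projection year if the service life extends
    # beyond the projection horizon.
    year = next((y for y in years if y >= facility_life_end), years[-1])
    return projections[scenario_policy][year]

# A facility designed in 2019 for a 75-year life (ends around 2094 -> use 2100).
print(design_value(projections, 2019 + 75))
```

A less conservative policy might instead design to an intermediate scenario and plan to adapt later; the point of the sketch is that without guidance naming the scenario policy and time horizon, two planners with the same projections can reach very different design values.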
Installations Have Designed Some Individual Facilities Projects to Increase Resilience to Extreme Weather, but They Lack Guidance on Using Climate Projections

Some Installations Have Designed Individual Facilities Projects with Elements of Resilience to Extreme Weather or Climate Change Effects

Eleven of the 23 installations we visited or contacted had designed or constructed one or more individual facilities projects to increase the resilience of the facilities themselves, or to increase the resilience of the installation more broadly, to extreme weather and climate change effects. For example:

Joint Base Langley-Eustis, Virginia. In 2018, officials designed a project to build a maintenance hangar with a special foundation that would elevate the floor to 10 feet above the average high-water level at the project site and protect it against coastal storm flooding. Joint Base Langley-Eustis has experienced severe flooding in the past because of its low-lying geographical elevations in the Chesapeake Bay. The installation stated in its draft encroachment management action plan that the effects of climate change may exacerbate flooding issues through sea level rise or the increasing frequency and severity of storms.

Norfolk Naval Shipyard, Virginia. In 2018, shipyard officials designed a project to increase the installation’s resilience to storm-induced flooding, including building a floodwall to protect the dry docks that are used to perform maintenance on ships and submarines. Norfolk Naval Shipyard experiences extreme high tides three to five times a year on average and a significant hurricane on average once a year, according to an installation presentation, and flooding has been increasing over time in the area as relative sea levels have risen. The floodwall will enclose the dry docks, providing protection to critical assets and electrical utilities while they are in dry dock, among other things.
Figure 5 depicts a flooded dry dock at Norfolk Naval Shipyard, Virginia. Installation officials told us that flooding into dry docks poses risks to the ships being serviced there and to the performance of the base’s mission of servicing and maintaining Navy ships and submarines.

Camp Pendleton, California. In 2018, as part of a project to construct a new aircraft landing zone, officials included protection of the nearby coastline, which had been rapidly eroding from the impact of ocean waves and rain storms. According to officials, the erosion has accelerated in recent years and has threatened not only landing zones along the coast, but also beaches that are used for amphibious assault training. Figure 6 depicts coastal erosion near a landing zone at Camp Pendleton, California. According to officials, the erosion leading to the gulley shown in the photograph has accelerated in recent years and advances further inland every year; it is now within feet of the landing zone. The officials told us that the erosion can threaten the function of the landing zone if it reaches that site.

Fort Shafter, Hawaii. In 2016, officials constructed flood mitigation structures, including a flood control levee, to protect maintenance facilities being built in a flood zone. At the time, there were no adequate permanent maintenance facilities for units stationed at the base, and the only available land big enough to support the proposed maintenance facilities was located within a flood zone.

Most Installations Have Not Used Climate Projections in Designing Individual Facilities Projects

Despite limited efforts to increase the resilience of facilities to extreme weather and climate change effects, officials from 17 of the military installations in our sample said that their individual facilities project designs generally did not consider climate projections.
Of the installations that stated that they considered climate projections in facilities project designs, one military installation said it uses a study on sea level rise at the installation as a tool that incorporates forward-looking projections, and another installation said it uses a NOAA web-based tool, Sea Level Rise Viewer, for graphical representations of projected sea level rise. A third installation noted that it had considered sea level rise projections in a pier design, which we discuss further below. A fourth installation said it plans to use a draft Navy study on the vulnerability of coastal Navy installations to sea level rise to inform an upcoming facilities project design. However, another installation said it has used energy consumption projections, which are not climate projections, and another installation cited a Navy climate adaptation handbook, which does not include climate projections for individual Navy installations. Moreover, over the course of our review of 23 installations, we were able to identify only one project as having a design informed by climate projections. Specifically, in 2018, officials from Naval Base San Diego, California, designed a project to demolish and replace an existing pier. The project’s design was informed by the expectation of sea level rise over the 75-year lifespan of the pier. An installation official told us that the consideration of rising sea levels was not part of the original project proposal, but when a contractor provided the sea level rise projections, installation officials decided to raise the pier by one foot. Figure 7 depicts a notional example of a pier—not specific to San Diego or any other particular location—raised to account for sea level rise. The Unified Facilities Criteria on piers and wharves states that the bottom elevation of the deck slab should be kept at least one foot above the extreme high water level.
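That clearance requirement interacts with a sea level rise projection as simple arithmetic, sketched below. Only the one-foot clearance comes from the criteria; the water levels and the projected rise are hypothetical illustrations.

```python
# Required bottom-of-deck elevation = extreme high water level
#                                   + projected sea level rise over service life
#                                   + required clearance (one foot per the criteria).
CLEARANCE_FT = 1.0  # minimum deck clearance above extreme high water

def required_deck_elevation(extreme_high_water_ft, projected_rise_ft):
    return extreme_high_water_ft + projected_rise_ft + CLEARANCE_FT

# Illustrative values: 5.0 ft extreme high water today and 1.0 ft of projected
# rise over a 75-year lifespan require a deck bottom at 7.0 ft, one foot higher
# than a design based on today's water levels alone (6.0 ft).
print(required_deck_elevation(5.0, 1.0))
print(required_deck_elevation(5.0, 0.0))
```

The one-foot difference between the two results is the same adjustment the San Diego officials made when the contractor's projections arrived.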
In this notional example, the pier is raised to account for an anticipated one-foot sea level rise, so that the bottom of the deck slab remains one foot above the extreme high water level, as shown in the figure. DOD guidance requires the military departments to assess and manage risks to both built and natural infrastructure, including making changes, as appropriate, to design and construction standards. The guidance also requires the military departments to leverage authoritative environmental prediction sources for appropriate data and analysis products to assess weather and climate effects. However, DOD’s Unified Facilities Criteria pertaining to project design, with the exception of the standard on high performance and sustainable building requirements, do not require consideration of climate projections as part of facilities project designs. The Unified Facilities Criteria standard on high performance and sustainable building requirements requires engineers to provide building design solutions that are responsive to any government-provided projections of climate change and determination of acceptable risk. We analyzed 27 core Unified Facilities Criteria, as well as 3 other Unified Facilities Criteria: Installation Master Planning; Design: Engineering Weather Data; and DOD Building Code (General Building Requirements). We also analyzed one facility criteria standard on Navy and Marine Corps Design Procedures. Our analysis showed that as of March 2019 these criteria, other than the Unified Facilities Criteria standard on installation master planning, do not identify authoritative sources of climate projections for use in facilities project designs. The Unified Facilities Criteria standard on installation master planning states that climate projections from the U.S. Global Change Research Program and the National Climate Assessment as well as the National Academy of Sciences shall be considered and incorporated into military construction designs and modifications.
However, an official in the Office of the Assistant Secretary of Defense for Sustainment acknowledged that this requirement in the standard on installation master planning is not sufficient on its own to apply to all facility project designs. Additionally, the standard on installation master planning does not identify the specific types of climate projections to use or how to locate them. Our analysis showed that the Unified Facilities Criteria do not provide guidance on how to incorporate projections into facilities project designs, such as how to use projections involving multiple future scenarios and what future time periods to consider. We found that while some Unified Facilities Criteria direct project designers to climate data, these are historical climate data rather than projections. For example, the following standards do not direct project designers to sources of climate projections:

One Unified Facilities Criteria standard (2015) (change 1, Feb. 1, 2016) directs project designers to use long-term rainfall records, such as those from regional weather stations, and directs engineers toward a table that provides rainfall data for selected locations. However, the information included in the guidance is historical and does not include or refer to projections.

Unified Facilities Criteria 3-400-02, Design: Engineering Weather Data (Sept. 20, 2018), directs project designers toward instructions for accessing climate data for use in designing facilities and in mission planning. However, the guidance does not discuss the use of or specifically reference climate projections.

Unified Facilities Criteria 3-201-01, Civil Engineering (Apr. 1, 2018) (change 1, Mar. 19, 2019), requires project designers to plan for flood hazard areas and, if the project is constructed within the 100-year floodplain, requires that the project design document include flood mitigation measures as part of the project’s scope of work.
However, the guidance does not include or reference projections that would help engineers design for various potential flooding scenarios. As previously noted, in response to a statutory requirement, DOD updated its Unified Facilities Criteria on master planning in November 2018 to require installations to consider and incorporate reliable and authorized sources of data on changing environmental conditions. However, simply including this language does not provide guidance to installations on what sources of climate projections to consider and how to use them in designing facilities projects, such as what future time periods to consider and how to incorporate projections involving multiple future scenarios. In addition, the Unified Facilities Criteria standard on master planning provides requirements and guidance for installation master planning but not for the design of individual facilities projects. An official of the Office of the Assistant Secretary of Defense for Sustainment stated that his office plans to develop a policy on the use of sea level rise projections by some time in 2019 and eventually to incorporate guidance on how to use sea level rise projections into the Unified Facilities Criteria or other guidance. This official added that there is currently no defined DOD process for vetting authoritative sources of climate projections, but that DOD plans to continue vetting sources for possible use, as appropriate. Furthermore, officials of 10 of the 23 military installations we reviewed stated that in order to incorporate such projections into project designs, they would need additional guidance from DOD or their military departments identifying authoritative sources of such projections or how to use climate projections that involve multiple future scenarios and different time periods. 
Ultimately, installations that do not consider climate projections in the design of their facilities projects may be investing in facilities projects without considering potential risks, such as potential future damage and degradation, which are associated with additional costs and reductions in capability. If DOD does not provide guidance on the use of climate projections in facilities designs, including what sources of climate projections to use, how to use projections involving multiple future scenarios, and what future time periods to consider, installation project designers will continue to lack direction on how to use climate projections. Further, if DOD does not update the Unified Facilities Criteria to require installations to consider climate projections in project designs and incorporate the department’s guidance on how to use climate projections in project designs, installation project designers may continue to exclude consideration of climate projections from facilities project designs. Considering climate projections in facilities projects would help DOD to reduce the climate-related risks to its facilities investments.

Conclusions

DOD has a global real estate portfolio that supports the department’s global workforce and its readiness to execute its national security missions. The department has repeatedly acknowledged the threats of extreme weather and climate change effects to its installations, and as we have previously reported, has begun taking steps to increase the resilience of its infrastructure to these threats. We found that 15 of the 23 installations we visited or contacted had considered some type of extreme weather or climate change effects in their plans, a positive step toward increasing resilience to these climate risks.
However, not all had done so, and most of the installations we visited or contacted did not fully assess the risks associated with extreme weather and climate change effects—including the likelihood of the threat, potential effects on the installation, and possible responses to mitigate such effects. Likewise, many of the installations did not consider climate projections in planning. Without fully assessing the risks of extreme weather and climate change effects, and without considering climate projections as part of the planning process, installations may make planning decisions that do not fully anticipate future climate conditions. By seeking to anticipate future climate conditions, DOD may be able to reduce climate-related risks to its facilities and the corresponding budgetary risks. Eleven of the 23 installations we visited or contacted had designed or implemented one or more construction projects that incorporated resilience to extreme weather or climate change effects. These projects illustrate some of the steps that can be taken to increase an installation’s resilience to climate risks. However, most of the installations had not considered climate projections in project design. Considering climate projections in facilities projects would help DOD to reduce the climate-related risks to its facilities investments. By updating its facilities project design standards to require installations to consider climate projections in project designs, identifying authoritative sources of climate projections, and providing guidance on how to use climate projections, DOD can aid installations to better position themselves to be resilient to the risks of extreme weather and climate change effects.

Recommendations for Executive Action

We are making eight recommendations: two to DOD and two to each of the military departments. Specifically:

The Secretary of the Army should ensure that the Chief of Engineers and Commanding General of the U.S.
Army Corps of Engineers works with the Assistant Secretary of Defense for Sustainment; the Chief of Civil Engineers and Commander, Naval Facilities Engineering Command; and the Director of the Air Force Civil Engineer Center to update the Unified Facilities Criteria standard on installation master planning to require that master plans include (1) an assessment of the risks from extreme weather and climate change effects that are specific to the installation and (2) plans to address those risks as appropriate. (Recommendation 1)

The Secretary of the Navy should ensure that the Chief of Civil Engineers and Commander, Naval Facilities Engineering Command works with the Assistant Secretary of Defense for Sustainment, the Chief of Engineers and Commanding General of the U.S. Army Corps of Engineers, and the Director of the Air Force Civil Engineer Center to update the Unified Facilities Criteria standard on installation master planning to require that master plans include (1) an assessment of the risks from extreme weather and climate change effects that are specific to the installation and (2) plans to address those risks as appropriate. (Recommendation 2)

The Secretary of the Air Force should ensure that the Director of the Air Force Civil Engineer Center works with the Assistant Secretary of Defense for Sustainment; the Chief of Engineers and Commanding General of the U.S. Army Corps of Engineers; and the Chief of Civil Engineers and Commander, Naval Facilities Engineering Command to update the Unified Facilities Criteria standard on installation master planning to require that master plans include (1) an assessment of the risks from extreme weather and climate change effects that are specific to the installation and (2) plans to address those risks as appropriate.
(Recommendation 3) The Secretary of Defense should issue guidance on incorporating climate projections into installation master planning, including—at a minimum—what sources of climate projections to use, how to use projections involving multiple future scenarios, and what future time periods to consider. (Recommendation 4) The Secretary of Defense should issue guidance on incorporating climate projections into facilities project designs, including—at a minimum—what sources of climate projections to use, how to use projections involving multiple future scenarios, and what future time periods to consider. (Recommendation 5) The Secretary of the Army should ensure that the Chief of Engineers and Commanding General of the U.S. Army Corps of Engineers works with the Assistant Secretary of Defense for Sustainment; the Chief of Civil Engineers and Commander, Naval Facilities Engineering Command; and the Director of the Air Force Civil Engineer Center to update relevant Unified Facilities Criteria to require that installations consider climate projections in designing facilities projects and incorporate, as appropriate, DOD guidance on the use of climate projections in facilities project designs—including identification of authoritative sources of such projections, use of projections involving multiple future scenarios, and what future time periods to consider. (Recommendation 6) The Secretary of the Navy should ensure that the Chief of Civil Engineers and Commander, Naval Facilities Engineering Command works with the Assistant Secretary of Defense for Sustainment, the Chief of Engineers and Commanding General of the U.S. 
Army Corps of Engineers, and the Director of the Air Force Civil Engineer Center to update relevant Unified Facilities Criteria to require that installations consider climate projections in designing facilities projects and incorporate, as appropriate, DOD guidance on the use of climate projections in facilities project designs—including identification of authoritative sources of such projections, use of projections involving multiple future scenarios, and what future time periods to consider. (Recommendation 7) The Secretary of the Air Force should ensure that the Director of the Air Force Civil Engineer Center works with the Assistant Secretary of Defense for Sustainment; the Chief of Engineers and Commanding General of the U.S. Army Corps of Engineers; and the Chief of Civil Engineers and Commander, Naval Facilities Engineering Command to update relevant Unified Facilities Criteria to require that installations consider climate projections in designing facilities projects and incorporate, as appropriate, DOD guidance on the use of climate projections in facilities project designs—including identification of authoritative sources of such projections, use of projections involving multiple future scenarios, and what future time periods to consider. (Recommendation 8) Agency Comments and Our Evaluation We provided a draft of this report for review and comment to DOD and NOAA. In written comments, DOD concurred with all eight of our recommendations and identified actions it plans to take to address two of them. DOD's comments are reprinted in their entirety in appendix II. DOD also provided technical comments, which we incorporated as appropriate. NOAA did not provide any comments on the draft. We are sending copies of this report to the appropriate congressional addressees; the Secretary of Defense; the Secretaries of the Departments of the Army, Navy, and Air Force; and the Secretary of Commerce (for NOAA). 
In addition, this report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Diana Maurer at (202) 512-9627 or at maurerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Appendix I: Scope and Methodology Senate Report 115-130, accompanying a bill for fiscal year 2018 appropriations for military construction, the Department of Veterans Affairs, and related agencies, cited concerns with the frequency and costs of extreme weather events and the potential effects of climate change, and included a provision for us to review the Department of Defense’s (DOD) progress in developing a means to account for potentially damaging weather in its project designs. In response to this provision, we examined the extent to which DOD has taken steps to incorporate resilience to extreme weather and climate change effects into (1) installation master plans and related planning documents, and (2) individual installation facilities projects. For both of our objectives, we visited or requested information from a sample of domestic military installations. We focused on domestic installations because our November 2017 report focused on foreign installations. To develop this sample, we selected installations in the continental United States, Alaska, Hawaii, and U.S. territories that had identified one or more climate-related vulnerabilities, based on their past experiences, in a DOD-administered survey of climate vulnerabilities, or installations that were referenced in a prior GAO report on weather and climate risks at DOD installations. 
In addition to these criteria, we selected sites that represented both a diversity in types of climate vulnerabilities and geographic diversity among the military services, as well as installations involved in any climate change-related pilot studies. From these criteria, we developed a nongeneralizable sample of 23 installations. We also included in the sample one Air Force unit (not an installation) with responsibilities for particular facilities of interest in Alaska, because these facilities presented a climatic vulnerability (accelerating coastal erosion) that was not otherwise represented in the sample. We visited 10 of these installations, as well as the Air Force unit in Alaska, in person. Within the sample, we selected installations to visit based on geographic diversity and installations in proximity to each other, allowing us to visit multiple installations on each trip. For the remaining 13 installations, we developed and administered a questionnaire and document request. We received responses from 12 of these installations. One installation—Camp Lejeune—sustained significant damage from Hurricane Florence in September 2018, and to minimize the burden on installation officials' time to respond, we met with them by phone. Results from our nongeneralizable sample cannot be used to make inferences about all DOD locations. However, the information from these installations provides valuable insights. We asked installations similar questions on our site visits and in the questionnaires, and we collected similar documents—such as installation master plans and individual facilities project documents—allowing us to report on similar information, such as the extent to which extreme weather and climate change considerations were integrated into installation master plans and individual facilities projects. 
For objective one, we reviewed DOD policies, guidance, and standards related to increasing climate resilience and conducting installation master planning. These documents included, among others, DOD Directive 4715.21, which establishes policy and assigns responsibilities for DOD to assess and manage risks associated with climate change; DOD's Unified Facilities Criteria standard on installation master planning, which establishes the requirements for installation master plans; and a memorandum from the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics on floodplain management on DOD installations. We interviewed officials in the Office of the Assistant Secretary of Defense for Sustainment and the Strategic Environmental Research and Development Program. We also interviewed officials in each of the military departments, including officials involved with installation policy, as well as officials from the engineering organizations of each military department and officials in the National Oceanic and Atmospheric Administration to discuss climate science and the data potentially available for planners to use. We reviewed documents from each of the 23 installations and the one Air Force unit in our sample, including master plans, and used interviews with installation officials and questionnaires received from installations to determine the extent to which the installations had incorporated consideration of extreme weather and climate change effects into their installation plans. We compared the steps DOD has taken in installation planning to increase resilience to extreme weather and climate change effects with DOD guidance on climate change adaptation and resilience, Unified Facilities Criteria standards, federal internal control standards, and best practices for enterprise risk management. 
For objective two, we reviewed DOD guidance, including DOD Directive 4715.21, requiring DOD components to integrate climate change considerations into DOD plans. We also reviewed DOD’s facilities project design standards—the Unified Facilities Criteria—to determine the extent to which installations incorporated requirements for climate resilience and to identify any required or recommended climate data sources for facilities project design. Specifically, we reviewed the 27 core Unified Facilities Criteria standards, as well as 3 other Unified Facilities Criteria standards outside of the core 27—because of their broad relevance to project design—and one facility criteria on Navy and Marine Corps design procedures. Additionally, we performed a content analysis of these criteria for references to climate, weather, environment, and any climate data to be used as a basis for facilities design. We also identified any required or recommended climate data sources or tools for facilities design by searching for references, web links, or tables related to climate data within the criteria. Where climate data sources were identified, we reviewed them to determine the extent to which the sources and tools involved historical data or climate projections that anticipate future climate conditions. We interviewed officials from the U.S. Army Corps of Engineers, Naval Facilities Engineering Command, and the Air Force Civil Engineer Center to understand the extent to which the Unified Facilities Criteria include guidance or data sources for adapting DOD facilities to extreme weather and climate change effects. 
In addition, we used interviews with installation officials and questionnaires we received from installations to determine the extent to which the installations had planned or executed any military construction or sustainment, restoration, and modernization facilities projects since 2013 that included any elements for building resilience to extreme weather or climate change effects. We then reviewed project documentation for proposed or approved facilities projects to identify the resilience measures taken. We also observed some facilities-related climate resilience measures adopted by these installations. In addition, we interviewed officials from the Office of the Assistant Secretary of Defense for Sustainment to determine what plans, if any, the office had to update Unified Facilities Criteria with climate resilience requirements. We also interviewed officials from the Office of the Assistant Secretary of the Army for Installations, Energy and Environment; the Office of the Assistant Secretary of the Navy for Energy, Installations and Environment; and the Office of the Assistant Secretary of the Air Force, Installations, Environment and Energy to identify any actions, policies, or processes related to adapting facilities to extreme weather and climate change effects. Moreover, we interviewed officials from the American Society of Civil Engineers to understand what efforts, if any, had been made to incorporate climate projections into industry standards. Finally, we compared the extent to which DOD took steps in its facilities projects and its project design standards to increase resilience with DOD guidance on climate change resilience. Table 3 lists the locations we visited or contacted during this review, including the installations receiving our questionnaire. Appendix II: Comments from the Department of Defense Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Diana Maurer at (202) 512-9627 or maurerd@gao.gov. 
Staff Acknowledgments In addition to the contact named above, Brian J. Lepore (Director, retired), Kristy Williams (Assistant Director), Michael Armes, Kendall Childers, Simon Hirschfeld, Joanne Landesman, Amie Lesser, Grace Meany, Shahrzad Nikoo, Samantha Piercy, Monica Savoy, Benjamin Sclafani, Joseph Dean Thompson, and Jack Wang made key contributions to this report. Related GAO Products High-Risk Series: Substantial Efforts Needed to Achieve Greater Progress on High-Risk Areas. GAO-19-157SP. Washington, D.C.: March 6, 2019. Climate Change: Analysis of Reported Federal Funding. GAO-18-223. Washington, D.C.: April 30, 2018. Climate Change Adaptation: DOD Needs to Better Incorporate Adaptation into Planning and Collaboration at Overseas Installations. GAO-18-206. Washington, D.C.: November 13, 2017. Climate Change: Information on Potential Economic Effects Could Help Guide Federal Efforts to Reduce Fiscal Exposure. GAO-17-720. Washington, D.C.: September 28, 2017. High-Risk Series: Progress on Many High-Risk Areas, While Substantial Efforts Needed on Others. GAO-17-317. Washington, D.C.: February 15, 2017. Climate Change: Improved Federal Coordination Could Facilitate Use of Forward-Looking Climate Information in Design Standards, Building Codes, and Certifications. GAO-17-3. Washington, D.C.: November 30, 2016. Defense Infrastructure: DOD Efforts to Prevent and Mitigate Encroachment at Its Installations. GAO-17-86. Washington, D.C.: November 14, 2016. Climate Information: A National System Could Help Federal, State, Local, and Private Sector Decision Makers Use Climate Information. GAO-16-37. Washington, D.C.: November 23, 2015. High-Risk Series: An Update. GAO-15-290. Washington, D.C.: February 11, 2015. Budget Issues: Opportunities to Reduce Federal Fiscal Exposures Through Greater Resilience to Climate Change and Extreme Weather. GAO-14-504T. Washington, D.C.: July 29, 2014. 
Climate Change Adaptation: DOD Can Improve Infrastructure Planning and Processes to Better Account for Potential Impacts. GAO-14-446. Washington, D.C.: May 30, 2014. Extreme Weather Events: Limiting Federal Fiscal Exposure and Increasing the Nation’s Resilience. GAO-14-364T. Washington, D.C.: February 12, 2014. Climate Change: Energy Infrastructure Risks and Adaptation Efforts. GAO-14-74. Washington, D.C.: January 31, 2014. Climate Change: Federal Efforts Under Way to Assess Water Infrastructure Vulnerabilities and Address Adaptation Challenges. GAO-14-23. Washington, D.C.: November 14, 2013. Climate Change: State Should Further Improve Its Reporting on Financial Support to Developing Countries to Meet Future Requirements and Guidelines. GAO-13-829. Washington, D.C.: September 19, 2013. Climate Change: Various Adaptation Efforts Are Under Way at Key Natural Resource Management Agencies. GAO-13-253. Washington, D.C.: May 31, 2013. Climate Change: Future Federal Adaptation Efforts Could Better Support Local Infrastructure Decision Makers. GAO-13-242. Washington, D.C.: April 12, 2013. High-Risk Series: An Update. GAO-13-283. Washington, D.C.: February 14, 2013. International Climate Change Assessments: Federal Agencies Should Improve Reporting and Oversight of U.S. Funding. GAO-12-43. Washington, D.C.: November 17, 2011. Climate Change Adaptation: Federal Efforts to Provide Information Could Help Government Decision Making. GAO-12-238T. Washington, D.C.: November 16, 2011.
Why GAO Did This Study DOD manages a global real-estate portfolio with an almost $1.2 trillion estimated replacement value. Since 2010, DOD has identified climate change as a threat to its operations and installations. In January 2019, DOD stated that the effects of a changing climate are a national security issue with potential impacts to the department's missions, operational plans, and installations. GAO was asked to assess DOD's progress in developing a means to account for potentially damaging weather in its facilities project designs. GAO examined the extent to which DOD has taken steps to incorporate resilience to extreme weather and climate change effects into (1) selected installation master plans and related planning documents, and (2) selected individual installation facilities projects. GAO reviewed DOD documents related to increasing climate resilience, conducting installation master planning, and designing facilities projects. GAO visited or contacted a non-generalizable sample of 23 installations that had been associated with one or more climate vulnerabilities. What GAO Found Department of Defense (DOD) installations have not consistently assessed risks from extreme weather and climate change effects or consistently used projections to anticipate future climate conditions. For example, DOD's 2018 preliminary assessment of extreme weather and climate effects at installations was based on the installations' reported past experiences with extreme weather rather than an analysis of future vulnerabilities based on climate projections. Fifteen of the 23 installations GAO visited or contacted had considered some extreme weather and climate change effects in their plans as required by DOD guidance, but 8 had not. For example, Fort Irwin, California, worked with the U.S. Army Corps of Engineers to improve stormwater drainage after intense flash flooding caused significant damage to base infrastructure. 
By contrast, Joint Base Pearl Harbor-Hickam, Hawaii, did not include such considerations in its plans, although it is located in an area subject to tropical storms and where further sea level rise is anticipated. GAO also found that most of the installations had not used climate projections, because they lack guidance on how to incorporate projections into their master plans. Not assessing risks or using climate projections in installation planning may expose DOD facilities to greater-than-anticipated damage or degradation as a result of extreme weather or climate-related effects. Eleven of the 23 installations GAO reviewed had designed one or more individual facilities projects to increase the resilience of the facilities to extreme weather and climate change effects. However, project designs generally did not consider climate projections, according to installation officials. These officials told GAO that DOD lacks guidance on how to use climate projections that involve multiple future scenarios and different time periods. Until DOD updates its facilities design standards to require installations to consider climate projections in project designs, identify authoritative sources for them to use, and provide guidance on how to use projections, installation project designers may continue to exclude consideration of climate projections from facilities project designs, potentially making investments that are planned without consideration of climate-related risks. What GAO Recommends GAO is making eight recommendations, including that the military departments work together to update master planning criteria to require an assessment of extreme weather and climate change risks and to incorporate DOD guidance on the use of climate projections into facilities design standards. GAO also recommends that DOD issue guidance on incorporating climate projections into installation master planning and facilities project designs. DOD concurred with all eight of GAO's recommendations.
Improved CMS Oversight Is Needed to Better Protect Residents from Abuse In our June 2019 report, we found that, while abuse deficiencies cited in nursing homes were relatively rare from 2013 through 2017, they became more frequent during that time, with the largest increase in severe cases. Specifically, abuse deficiencies comprised less than 1 percent of the total deficiencies in each of the years we examined, a figure that is likely conservative. Abuse in nursing homes is often underreported by residents, family, staff, and the state survey agency, according to CMS officials and stakeholders we interviewed. However, abuse deficiencies more than doubled—from 430 in 2013 to 875 in 2017—over the 5-year period. (See appendix II.) In addition, abuse deficiencies cited in 2017 were more likely to be categorized at the highest levels of severity—deficiencies causing actual harm to residents or putting residents in immediate jeopardy—than they were in 2013. In light of the increased number and severity of abuse deficiencies, it is imperative that CMS have strong nursing home oversight in place to protect residents from abuse; however, we found oversight gaps that may limit the agency's ability to do so. Specifically, we found that CMS (1) cannot readily access data on the type of abuse or type of perpetrator, (2) has not provided guidance on what information nursing homes should include in facility-reported incidents, and (3) has numerous gaps in its referral process that can result in delayed and missed referrals to law enforcement. Information on Abuse and Perpetrator Types Is Not Readily Available In our June 2019 report, we found that CMS's data do not allow for the type of abuse or perpetrator to be readily identified by the agency. Specifically, CMS does not require the state survey agencies to record abuse and perpetrator type and, when this information is recorded, it cannot be easily analyzed by CMS. 
Therefore, we reviewed a representative sample of 400 CMS narrative descriptions—written by state surveyors—associated with abuse deficiencies cited in 2016 and 2017 to identify the most common types of abuse and perpetrators. From this review, we found that physical abuse (46 percent) and mental/verbal abuse (44 percent) occurred most often in nursing homes, followed by sexual abuse (18 percent). Furthermore, staff (including those working in any part of the nursing home) were the most common perpetrators of abuse in deficiency narratives (58 percent), followed by residents (30 percent) and other types of perpetrators (2 percent). (See appendix III for examples from our abuse deficiency narrative review.) CMS officials told us they have not conducted a systematic review to gather information on abuse and perpetrator type. Further, based on professional experience, literature, and ad hoc analyses of deficiency narrative descriptions, CMS officials told us they believe the majority of abuse is committed by nursing home residents and that physical and sexual abuse were the most common types. This understanding does not align with our findings on the most common types of abuse and perpetrators represented in CMS's data on deficiencies cited as abuse. Without the systematic collection and monitoring of specific abuse and perpetrator data, CMS lacks key information and, therefore, cannot take actions—such as tailoring prevention and investigation activities—to address the most prevalent types of abuse or perpetrators. To address this, we recommended that CMS require state survey agencies to report abuse and perpetrator type in CMS's databases for deficiency, complaint, and facility-reported incident data and that CMS systematically assess trends in these data. HHS concurred with our recommendation and stated that it plans to implement changes in response. As of November 2019, HHS had not implemented the recommendation. 
Facility-Reported Incidents Lack Key Information Despite federal law requiring nursing homes to self-report allegations of abuse and covered individuals to report reasonable suspicions of crimes against residents, in June 2019 we reported that CMS had not provided guidance to nursing homes on what information they should include in facility-reported incidents, contributing to a lack of information for state survey agencies and delays in their investigations. Specifically, officials from each of the five state survey agencies told us that the documentation they receive from nursing homes for facility-reported incidents can lack key information that affects their ability to triage incidents and determine whether an investigation should occur and, if so, how soon. For example, officials from two state survey agencies we interviewed said they sometimes have to conduct significant follow-up with the nursing homes to obtain the information they need to prioritize the incident for investigation—follow-up that delays and potentially negatively affects investigations. Incomplete incident reports from nursing homes are particularly problematic given that nearly half of abuse deficiencies cited between 2013 and 2017 were identified through facility-reported incidents, dramatically different from the approximately 5 percent of all types of deficiencies that were identified in this manner. Therefore, facility-reported incidents play a unique and significant role in identifying abuse deficiencies in nursing homes, making it critical that incident reports provided by nursing homes include the information necessary for state survey agencies to prioritize and investigate. To address this issue, we recommended that CMS develop and disseminate guidance—including a standardized form—to all state survey agencies on the information nursing homes and covered individuals should include on facility-reported incidents. 
HHS concurred with our recommendation and stated that it plans to implement changes in response. As of November 2019, HHS had not implemented the recommendation. Gaps Exist in CMS Process for State Survey Agency Referrals to Law Enforcement and MFCUs In June 2019, we identified gaps in CMS’s process for referring incidents of abuse to law enforcement and, if appropriate, to MFCUs. These gaps may limit CMS’s ability to ensure that nursing homes meet federal requirements for residents to be free from abuse. Specifically, we identified issues related to (1) referring abuse to law enforcement in a timely manner, (2) tracking abuse referrals, (3) defining what it means to substantiate an allegation of abuse—that is, the determination by the state survey agency that evidence supports the abuse allegation, and (4) sharing information with law enforcement. We made recommendations that CMS address each of these four gaps in the referral process, and HHS concurred with each recommendation and stated that it plans to implement changes in response. As of November 2019, HHS had not implemented these recommendations. One of the gaps in CMS’s process is related to referring abuse to law enforcement in a timely manner. For example, law enforcement investigations can be significantly delayed because CMS requires a state survey agency to make referrals to law enforcement only after abuse is substantiated—a process that can often take weeks or months. Officials from one law enforcement agency and two MFCUs we interviewed told us the delay in receiving referrals limits their ability to collect evidence and prosecute cases—for example, bedding associated with potential sexual abuse may have been washed, and a victim’s wounds may have healed. As such, we recommended that CMS require state survey agencies to immediately refer to law enforcement any reasonable suspicion of a crime against a resident. 
HHS concurred with our recommendation and stated that it plans to implement changes in response. As of November 2019, HHS had not implemented this recommendation. In conclusion, while nursing home abuse is relatively rare, our June 2019 report shows that abuse deficiencies cited in nursing homes are becoming more frequent, with the largest increase in severe cases. It is imperative that CMS have more complete and readily available information on abuse to improve its oversight of nursing homes. It is also essential that CMS require state survey agencies to immediately report incidents to law enforcement if they have a reasonable suspicion that a crime against a resident has occurred in order to ensure a prompt investigation of these incidents. Chairman Neal, Ranking Member Brady, and Members of the Committee, this concludes GAO’s statement for the record. GAO Contact and Staff Acknowledgments For further information about this statement, please contact John E. Dicken at (202) 512-7114 or dickenj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. In addition to the contact named above, key contributors to this statement were Karin Wallestad (Assistant Director), Sarah-Lynn McGrath (Analyst-in-Charge) and Summar C. Corley. Also contributing to the underlying report for this statement were Luke Baron, Julianne Flowers, Laurie Pachter, Vikki Porter, Kathryn Richter, and Jennifer Whitworth. Appendix I: Summary of GAO Reports on the Health and Welfare of the Elderly We have issued a number of reports reviewing the health and welfare of the elderly in multiple settings. For example, since January 2015, we have issued reports on the incidence of abuse in nursing homes and what is known about the incidence of abuse in assisted living facilities. Reports often included key recommendations. (See table 1.) 
Appendix II: Severity of Abuse Deficiencies Cited in Nursing Homes, 2013 through 2017 CMS restructured its deficiency code system beginning on November 28, 2017. Due to these coding changes, we did not analyze CMS data cited by surveyors after the implementation of that change. Percentages may not add to 100 due to rounding. Appendix III: Examples from a Representative Sample of Nursing Home Abuse Deficiency Narratives, 2016-2017 Related GAO Reports Elder Abuse: Federal Requirements for Oversight in Nursing Homes and Assisted Living Facilities Differ. GAO-19-599. Washington, D.C.: August 19, 2019. Nursing Homes: Improved Oversight Needed to Better Protect Residents from Abuse. GAO-19-433. Washington, D.C.: June 13, 2019. Elder Justice: Goals and Outcome Measures Would Provide DOJ with Clear Direction and a Means to Assess Its Efforts. GAO-19-365. Washington, D.C.: June 7, 2019. Management Report: CMS Needs to Address Gaps in Federal Oversight of Nursing Home Abuse Investigations That Persisted in Oregon for at Least 15 Years. GAO-19-313R. Washington, D.C.: April 15, 2019. Medicaid Assisted Living Services: Improved Federal Oversight of Beneficiary Health and Welfare Is Needed. GAO-18-179. Washington, D.C.: January 5, 2018. Medicaid Managed Care: CMS Should Improve Oversight of Access and Quality in States’ Long-Term Services and Supports Programs. GAO-17-632. Washington, D.C.: August 14, 2017. Medicaid Personal Care Services: CMS Could Do More to Harmonize Requirements across Programs. GAO-17-28. Washington, D.C.: November 23, 2016. Nursing Homes: Consumers Could Benefit from Improvements to the Nursing Home Compare Website and Five-Star Quality Rating System. GAO-17-61. Washington, D.C.: November 18, 2016. Elder Abuse: The Extent of Abuse by Guardians Is Unknown, but Some Measures Exist to Help Protect Older Adults. GAO-17-33. Washington, D.C.: November 16, 2016. 
Skilled Nursing Facilities: CMS Should Improve Accessibility and Reliability of Expenditure Data. GAO-16-700. Washington, D.C.: September 7, 2016. Nursing Home Quality: CMS Should Continue to Improve Data and Oversight. GAO-16-33. Washington, D.C.: October 30, 2015. Antipsychotic Drug Use: HHS Has Initiatives to Reduce Use among Older Adults in Nursing Homes, but Should Expand Efforts to Other Settings. GAO-15-211. Washington, D.C.: January 30, 2015. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study

Nationwide, about 1.4 million elderly or disabled individuals receive care in more than 15,500 nursing homes. CMS, an agency within the Department of Health and Human Services (HHS), defines standards nursing homes must meet to participate in the Medicare and Medicaid programs. Nursing home residents often have physical or cognitive limitations that can leave them particularly vulnerable to abuse. Abuse of nursing home residents can occur in many forms—including physical, mental, verbal, and sexual—and can be committed by staff, residents, or others in the nursing home. Any incident of abuse is a serious occurrence and can result in potentially devastating consequences for residents, including lasting mental anguish, serious injury, or death. This statement summarizes GAO's June 2019 report, GAO-19-433. Specifically, it describes: (1) the trends and types of abuse in recent years, and (2) CMS's oversight intended to ensure residents are free from abuse. It also includes a brief summary of findings and recommendations from this June 2019 report and prior GAO reports that examined the health and welfare of the elderly in multiple settings, and the status, as of November 2019, of HHS's efforts to implement the recommendations GAO made.

What GAO Found

The Centers for Medicare & Medicaid Services (CMS) is responsible for ensuring nursing homes meet federal quality standards, including that residents are free from abuse. CMS enters into agreements with state survey agencies to conduct surveys of the state's homes and to investigate complaints and incidents. GAO's June 2019 report found that, while abuse deficiencies cited in nursing homes were relatively rare from 2013 through 2017, they more than doubled during that time, increasing from 430 in 2013 to 875 in 2017, with the largest increase in severe cases.
In light of the increased number and severity of abuse deficiencies, it is imperative that CMS have strong nursing home oversight in place to protect residents from abuse. However, GAO found oversight gaps that may limit the agency's ability to do so. Specifically, GAO found:

(1) Information on abuse and perpetrator types is not readily available. CMS's data do not allow for the type of abuse or perpetrator to be readily identified by the agency. Specifically, CMS does not require the state survey agencies to record abuse and perpetrator type and, when this information is recorded, it cannot be easily analyzed by CMS. GAO made a recommendation that CMS require state survey agencies to submit data on abuse and perpetrator type and HHS concurred. As of November 2019, HHS had not implemented the recommendation.

(2) Facility-reported incidents lack key information. Despite federal law requiring nursing homes to self-report allegations of abuse and covered individuals to report reasonable suspicions of crimes against residents, CMS has not provided guidance to nursing homes on what information they should include in facility-reported incidents, contributing to a lack of information for state survey agencies and delays in their investigations. GAO made a recommendation that CMS develop guidance on what abuse information nursing homes should self-report and HHS concurred. As of November 2019, HHS had not implemented the recommendation.

(3) Gaps exist in the CMS process for state survey agency referrals to law enforcement. GAO found gaps in CMS's process for referring incidents of abuse to law enforcement. These gaps may limit CMS's ability to ensure that nursing homes meet federal requirements for residents to be free from abuse.
Specifically, GAO identified issues related to (1) referring abuse to law enforcement in a timely manner, (2) tracking abuse referrals, (3) defining what it means to substantiate an allegation of abuse—that is, the determination by the state survey agency that evidence supports the abuse allegation, and (4) sharing information with law enforcement. GAO made four recommendations to address these gaps and HHS concurred. As of November 2019, HHS had not implemented these recommendations.
Background

CYBERCOM’s Cyber Mission Force

In 2012, DOD developed plans to establish 133 CMF teams focused on offensive operations, defensive operations, and DOD network protection. DOD provided budget resources for these teams beginning in fiscal year 2014. It subsequently set goals for reaching initial operational capability and full operational capability. Later in this report we describe how some of the methods used to facilitate these teams’ achievement of full operational capability subsequently affected readiness. Once each CMF team has achieved full operational capability, it is required to certify to its mission at least every 2 years. According to CYBERCOM’s 2017 readiness guidance, in order for each CMF team to achieve the best readiness rating it must certify to its mission every 12 months. According to the DOD Cyber Strategy published in 2015, the first wave of CMF teams will include nearly 6,200 military, civilian, and contractor support personnel from across the military departments and defense components, when they are fully staffed. In February 2017, the commander of CYBERCOM endorsed an Army proposal to present its 21 Reserve component Cyber Protection Teams (11 Army National Guard and 10 Army Reserve) for assignment to U.S. Strategic Command to help address increased mission requirements. These 21 teams represent a second wave of teams, which CYBERCOM has scheduled to achieve full operational capability by September 30, 2024. The second wave of 21 Army Reserve component teams are to include more than 800 personnel once they are fully staffed. The CMF teams are aligned with various DOD organizations, as shown in figure 1. The military service cyber components—Army Cyber Command, Fleet Cyber Command, Marine Corps Forces Cyberspace, and Air Forces Cyber—are CYBERCOM’s service elements and support CYBERCOM in achieving its missions.
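The certification cadence described in the background above (recertification at least every 2 years, with the best readiness rating requiring certification within the past 12 months) can be sketched as a simple currency check. This is an illustrative sketch only; the function names, status labels, and whole-month arithmetic are assumptions, not CYBERCOM's actual readiness logic.

```python
from datetime import date

# Illustrative thresholds drawn from the report: the best rating requires
# certification within 12 months; certification lapses after 2 years.
BEST_RATING_MONTHS = 12
RECERT_REQUIRED_MONTHS = 24

def months_between(earlier: date, later: date) -> int:
    """Whole months elapsed between two dates."""
    return (later.year - earlier.year) * 12 + (later.month - earlier.month)

def certification_status(last_certified: date, today: date) -> str:
    """Classify a team's certification currency (hypothetical labels)."""
    elapsed = months_between(last_certified, today)
    if elapsed <= BEST_RATING_MONTHS:
        return "best"     # eligible for the best readiness rating
    if elapsed <= RECERT_REQUIRED_MONTHS:
        return "current"  # certified, but not eligible for the best rating
    return "lapsed"       # recertification overdue

# Example: a team certified 18 months ago is current but not "best".
status = certification_status(date(2017, 1, 15), date(2018, 7, 15))
```

The same check could be extended with the other triggers the report describes, such as personnel turnover.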
The personnel on each team represent a variety of specialties, such as intelligence analysts, linguists, and cyber operators and specialists. Figure 2 provides a hypothetical example of how each team might combine personnel from different specialties to carry out its missions. This figure does not show the actual composition of any type of team, but rather provides notional examples of how each team consists of personnel from different specialties who unite to perform cyber missions as part of the CMF.

The Four Phases of CMF Training

Training personnel for the CMF occurs in four phases and is administered by different entities, as shown in figure 3. Phase one basic training is the initial training performed by the military services that is delivered to any new recruit so that he or she may be assigned a military specialty. As shown in figure 2, CMF personnel draw from a number of different military specialties, including cyber, all-source intelligence, signals intelligence, information technology, and language specialists. Phase one basic training is not necessarily cyber-specific, as it is meant to provide military personnel with the basic skills needed to perform a particular occupation for the service. For example, CMF teams include intelligence professionals who may be assigned to analyze intelligence information that comes from a variety of sources. Training in phases two (foundational), three (collective), and four (sustainment) is focused more directly on the specific skills required to function as a member of the various CMF teams.

Key Roles and Responsibilities for Training the CMF

To establish and train the CMF teams, DOD has assigned components and senior officials with CMF training roles and responsibilities. The key responsibilities for training the CMF are summarized in table 1 below; a more inclusive list is presented in appendix I.
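The four-phase progression described above can be represented as an ordered pipeline. The sketch below assumes, for illustration, that a member completes the phases strictly in sequence; the phase names follow the report, but the function itself is hypothetical.

```python
# The four CMF training phases, in order, as described in the report.
PHASES = ["basic", "foundational", "collective", "sustainment"]

def next_phase(completed):
    """Return the next phase a CMF member should enter, or None if all
    four phases are complete. Assumes strict in-order progression, an
    illustrative simplification of the report's description."""
    for phase in PHASES:
        if phase not in completed:
            return phase
    return None
```

For example, a member who has finished basic and foundational training would next enter collective training.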
DOD Has Taken Action to Develop a Trained Cyber Mission Force

As part of the department’s efforts to develop and maintain trained CMF teams, CYBERCOM and the military services have implemented a number of initiatives. Specifically, CYBERCOM established consistent training standards, developed standard operating procedures for readiness reporting, and established and maintained a series of phase two foundational training courses. Further, CYBERCOM and the military services used existing training capabilities to build CMF teams. However, many of the teams that have been built are not yet fully trained and, according to agency officials, have “generally low” readiness levels.

CYBERCOM and the Military Services Have Taken Actions to Train CMF Teams

In 2012, CYBERCOM established consistent standards for CMF training phases within its responsibility, and the command has continuously updated those standards, as needed, to meet evolving requirements. Specifically, the command has established and updated the standards for phases two (foundational), three (collective), and four (sustainment) of CMF training. These standards apply to all military personnel regardless of service affiliation or active/reserve status. The standards are contained primarily in two documents. First, CYBERCOM issued and has regularly updated the Joint Cyberspace Training and Certification Standards (JCT&CS) to create standardized joint procedures, guidelines, and standards for individual staff and collective training, and to accurately assess CMF teams’ ability to perform their missions. This document was most recently revised in February 2018, to update, among other things, the tasks and abilities associated with CMF work roles based on feedback from experts within the military services and CYBERCOM. Second, CYBERCOM published the CMF Training and Readiness Manual to serve as the primary training and evaluation guidance for DOD cyber professionals.
The CMF Training and Readiness Manual has been updated 13 times since it was originally issued in 2013, and it is CYBERCOM’s authoritative guide to building and maintaining cyber training and readiness for its personnel. It provides graduated levels of evaluated training that teams can use in preparing for certification and in being certified. Additionally, it identifies approved training events and the mission-essential tasks, associated standards, and key duties for members of CMF teams. The manual requires each team to recertify every 2 years, or upon recovery from a 50 percent or higher turnover of CMF team personnel.

CYBERCOM Developed Standard Operating Procedures for Readiness Reporting

In December 2017, CYBERCOM published standard operating procedures for readiness reporting that CMF teams are to use to assess whether they have the resources and capability to perform their missions. The procedures define CMF readiness reporting guidelines related to personnel, equipment, and training. For example, the document identifies three training metrics that evaluate (1) whether personnel are trained to job qualification standards; (2) whether CMF teams have successfully completed supporting tasks during training exercises, events, or real world operations; and (3) the length of time between formal evaluations. Specifically, the standard operating procedures emphasize that in order to obtain the best training readiness rating, teams must perform an evaluated event or operation at least once every 12 months.

CYBERCOM Established and Maintained a Series of Courses for Individual Foundation Training

CYBERCOM maintains and coordinates a series of CMF courses for phase two foundational training. It develops and administers these course requirements for all of the CMF work roles and requires personnel to complete courses specific to their job responsibilities.
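Because every member of a given work role completes the same foundational course list regardless of service, the phase two requirement can be modeled as a simple role-to-courses lookup. In the sketch below, the course names and the cyber operator entry are invented for illustration; only the count of 14 analyst courses comes from the report.

```python
# Hypothetical role-to-course mapping. The 14-course count for
# intelligence analysts is from the report; all names are invented.
REQUIRED_COURSES = {
    "intelligence analyst": [f"analyst course {i}" for i in range(1, 15)],
    "cyber operator": ["networking fundamentals", "operator methodology"],
}

def remaining_courses(role, completed):
    """List the foundational courses a member still needs for a role."""
    done = set(completed)
    return [c for c in REQUIRED_COURSES.get(role, []) if c not in done]
```

A lookup like this also makes it straightforward to update requirements centrally when courses are added, consolidated, or removed, as the report describes CYBERCOM doing.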
All CMF personnel filling a specific mission and role complete the same foundational courses, regardless of military service, employment status—active duty or reserve—or type of CMF team to which they are assigned. For example, all intelligence analysts on CMF teams are to complete the same 14 courses that are specific to their role on the team. CYBERCOM training directorate officials told us they had to make changes to the training progression over time to adapt to the changing threat environment. Accordingly, CYBERCOM has added, modified, or deleted phase two foundational training courses over the past 4 years. For example, in the past 4 years CYBERCOM consolidated four existing courses into a single introductory cyber course that is taken by all-source intelligence analysts who will be part of CMF teams. In November 2017, the command updated the phase two foundational training requirements by removing three courses that were required for a variety of Cyber Protection Team work roles. CYBERCOM also added a new networking course that is a pre-requisite to a course that comes later in the training progression for Cyber and National Mission Team mission commanders. The most recent update also emphasized that Cyber Protection Team personnel must complete the Intermediate Cyber Core Course, the Cyber Protection Team Core Course, and then their specific methodology courses, in that order. According to officials from the service cyber components, the changes CYBERCOM has made to its phase two foundational training progression have been transparent and have addressed evolving threats. However, the changes have also negatively affected training time frames, particularly for the CMF teams composed of National Guard and Reserve personnel. 
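The required Cyber Protection Team sequence noted above (the Intermediate Cyber Core Course, then the Cyber Protection Team Core Course, then the role-specific methodology courses) amounts to an ordering constraint that could be checked mechanically. A minimal sketch, with "methodology course" as a stand-in for the role-specific courses:

```python
# Required ordering for Cyber Protection Team personnel, per the report.
# "methodology course" is a placeholder for the role-specific courses.
CPT_SEQUENCE = [
    "Intermediate Cyber Core Course",
    "Cyber Protection Team Core Course",
    "methodology course",
]

def valid_order(transcript):
    """Check that courses from the required sequence appear in order in a
    transcript (a list of course names in completion order). Courses
    outside the sequence are ignored. Illustrative only."""
    positions = [CPT_SEQUENCE.index(c) for c in transcript if c in CPT_SEQUENCE]
    return positions == sorted(positions)
```

For instance, a transcript listing the Cyber Protection Team Core Course before the Intermediate Cyber Core Course would fail the check.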
Because National Guard and Reserve teams are scheduled to achieve full operational capability after the active duty teams, they are more likely to be subject to the newer training progressions, which in some cases require a few additional days of courses. Officials from the National Guard told us that this additional training time is more difficult to schedule for National Guard and Reserve personnel because—unlike the active duty personnel who are available to train full time—National Guard and reservist personnel are available to train only one weekend per month and generally for 2 weeks of annual training. Additionally, most of these personnel must coordinate time off from their full-time jobs to take the required phase two foundational training courses. To help address these challenges, CYBERCOM officials told us they use mobile training teams. The Army Cyber School has also used mobile training teams to provide CMF training opportunities to Reserve personnel. The officials from CYBERCOM and the Army told us that the mobile training teams make training more accessible by avoiding the need for the National Guard and Reserve personnel to travel.

CYBERCOM and the Services Used Existing Training Capabilities

DOD has used existing training capabilities—including courses, instructors, and facilities—throughout all phases of CMF training. For example:

Joint Cyber Analysis Course. The Navy’s Center for Information Warfare Training is the host for the Joint Cyber Analysis Course—a phase one basic training course for personnel designated for cryptologic roles. CYBERCOM recommends this course for many CMF work roles.

Cyber and Cryptologic training institutions. CYBERCOM has partnered with the Defense Cyber Investigation Training Academy, the Defense Information Systems Agency, the National Security Agency, and military service schoolhouses to deliver phase two foundational training for the CMF.
The Defense Cyber Investigation Training Academy offers almost all of the training courses needed by Cyber Protection Teams, and Army officials said they used the expertise and course materials provided by the Defense Cyber Investigation Training Academy to develop Cyber Protection Team training courses that they offer at the Army Cyber School as well. The National Security Agency’s National Cryptologic School provides a majority of the other phase two foundational CMF training courses. According to officials from CYBERCOM and the National Cryptologic School, reliance on existing training capabilities and expertise from the National Security Agency enabled the command to quickly establish CMF capabilities.

Operational events. CYBERCOM used both simulated and real-world operational events on networks to support the certification of CMF teams. For example, CYBERCOM officials told us that CYBER KNIGHT is a training event offered periodically by CYBERCOM for CMF teams to exercise national and non-national mission sets. CYBER FLAG and CYBER GUARD, also conducted by CYBERCOM on a periodic basis, utilize a dynamic joint cyber training environment and, according to CYBERCOM officials, train all types of CMF teams. In addition to using simulated events through exercises, CYBERCOM and military service officials said that teams were allowed to use real-world operations to meet phase three collective training requirements.

The military services and CYBERCOM plan to continue to use existing resources, such as the service school houses, for new and continuous training into the future. For example, as part of their training transition plan, Marine Corps officials reported that they have a contract in place with the Navy’s Space and Naval Warfare Systems Command to provide additional training to Marine Corps CMF personnel after they complete the phase two foundational training progression.
Additionally, the Army Cyber School, which provides CMF-specific training for the Army, currently trains Marine Corps personnel as well. The Army and Marine Corps have training agreements in place to continue this arrangement. Figure 4 below shows a member of the National Guard participating in a cyber training exercise.

Certified Teams Are Not Fully Trained, But CYBERCOM Is Taking Actions to Improve Training and Readiness

We found that many of the CMF teams for which DOD has reported achieving full operational capability actually require further training, for varying reasons. For example, officials from many key organizations across the DOD cyber enterprise told us that the services moved some personnel among teams, reducing the readiness for teams from which personnel were transferred. Officials from the Office of the Under Secretary of Defense for Personnel and Readiness, Joint Staff, and the military services cited other challenges affecting CMF team readiness levels as well, including the long time frames needed to obtain the appropriate clearances for CMF personnel and the high pace of operations for the teams, leaving little time for training. The same officials from across DOD’s cyber enterprise affirmed that, taken together, these actions and circumstances have had a negative effect on CMF team resource readiness levels. In April 2018, the commander of CYBERCOM acknowledged in testimony that “much work remains to be done to make the personnel proficient at their duties and the whole team ready and able to perform whatever missions might be directed.” The CMF teams were not fully trained and had lower readiness levels because CYBERCOM and the military services focused primarily on the teams’ achieving full operational capability by October 1, 2018, rather than on building operational readiness. Building operational readiness requires the teams to simultaneously have the appropriate number of sufficiently trained personnel across the force.
According to the CMF Training Transition Plan, CYBERCOM’s senior leadership directed the command to achieve full operational capability, and it designated that effort as a higher priority than operational readiness. CYBERCOM officials told us that they recognized the low readiness of the CMF teams and have identified two actions to address the training deficiencies—and associated effects on readiness—for the CMF teams. First, according to the officials, CYBERCOM has developed a system that assigns unique identifiers to each person in the CMF and allows CYBERCOM to easily track when personnel move from one team to another. Second, in December 2017, CYBERCOM issued its readiness reporting standard operating procedure that establishes new readiness reporting guidelines. CYBERCOM officials stated that these guidelines emphasize readiness over the achievement of interim milestones, such as full operational capability. Given that CYBERCOM recently implemented these efforts to improve the readiness of the CMF teams, and that the quarterly readiness reports indicate improved resource readiness for personnel and training metrics, we are not making recommendations related to this issue. Through our body of work on defense cyber issues, we will continue to monitor DOD’s and CYBERCOM’s efforts to maintain a ready CMF.

DOD Has Shifted Focus from Building to Maintaining a Trained CMF, but Has Not Taken Key Actions to Maintain Future Training

DOD has taken steps to shift its focus from building a trained CMF to maintaining this force, but it has not taken key actions to ensure that the department is poised to maintain CMF training following this transition. Specifically, the military services have not developed plans that include time frames for validating all phase two foundational training courses, or that comprehensively assess their training requirements.
Further, as of June 2018, CYBERCOM had not provided a plan for establishing independent assessors to evaluate and certify the completion of phase three collective training for CMF teams.

DOD Is Shifting from Building to Maintaining a Trained CMF

DOD officials told us that the department is shifting its focus away from building and toward maintaining a trained CMF. For example, the Army is leading the development of a Persistent Cyber Training Environment. The goal of that training environment is to provide on-demand access to scenarios that Army officials told us will enhance the quality, quantity, and standardization of phase three (collective) and phase four (sustainment) training and exercise events. The Persistent Cyber Training Environment is scheduled to provide some operational capability by 2019, and it is expected to continue to evolve to meet training needs. In addition to building a Persistent Cyber Training Environment, the department has developed the CMF Training Transition Plan, which will transfer administration of phase two foundational training from CYBERCOM to the services. Specifically, beginning in October 2018, the military services will assume responsibility for phase two foundational training of CMF personnel, which CYBERCOM has centrally managed since CMF training began in 2013. Officials from the services and CYBERCOM have held quarterly meetings to help guide the implementation of this plan. According to the CMF Training Transition Plan, the transfer is being made in response to a direction in Senate Report 114-49 accompanying a bill for the National Defense Authorization Act for Fiscal Year 2016. The report directed the DOD Principal Cyber Advisor, the Commander, CYBERCOM, and the service secretaries to develop a plan for the military services to complete all required training for the second wave of CMF teams and to maintain individual training capabilities for the existing teams.
In January 2017, the Joint Staff and Principal Cyber Advisor published the CMF Training Transition Plan to transition CMF training to a model that complied with congressional committee direction. The principal goal of this approach is to drive efficiencies and reduce training development and delivery costs. According to the plan, CYBERCOM maintains control of the standards for phase two foundational training, while the Army, Navy, and Air Force are to assume specific joint curriculum lead roles. These roles entail developing joint training plans for the courses under the work roles to which they are assigned. In addition, the joint curriculum leads (i.e., Army, Navy, and Air Force) are responsible for identifying training gaps and developing learning objectives and courseware based on the CYBERCOM training task list requirements for each of the work roles. For example, under its curriculum lead role, the Army has accepted responsibility for the cyber planner courses. In carrying out this role, the Army developed the Cyber Operations Planners Course and submitted it to CYBERCOM to establish as an approved course for all cyber planners—regardless of service affiliation and of active or reserve duty status—in the CMF. Figure 5 shows the work role categories and responsibilities for which each military service has agreed to be curriculum lead.

Military Services’ CMF Training Transition Implementation Plans Do Not Include Time Frames for Validating Courses or Comprehensive Assessments of Training Requirements

In November 2017, CYBERCOM directed the military services to develop plans to implement their responsibilities in support of the CMF Training Transition Plan. In accordance with the training transition plan, the military services will assume responsibility for phase two foundational course validation as part of their joint curriculum lead duties.
In February 2018, each of the four services provided a plan to CYBERCOM that, at a minimum, highlighted the efforts each service was taking to prepare for its new training transformation responsibilities, including phase two foundational course validation. The purpose of course validation is to determine whether a course adheres to CYBERCOM’s joint training standards as published in the Joint Cyberspace Training and Certification Standards (JCT&CS). CYBERCOM’s draft course validation guidance states that validation involves an examination of both the content of the courses, as well as the instructional methods. The manual states that the content should align with the knowledge, skills, and abilities for the appropriate CYBERCOM work roles and should meet the joint training standard. Further, the manual states that the validation of instructional methods examines how the course is taught and determines whether the methods are appropriate to support desired course outcomes. CYBERCOM’s draft course validation guidance lays out a series of requirements for the validation process, among which are the following: The military service that is submitting the course for validation is responsible for assembling course information, providing back-up data about the course, and securing subject matter experts to review the submission. The military service that is the joint curriculum lead for the course is responsible for reviewing the submissions and offering recommendations for modifications to courses to reflect joint standards. CYBERCOM is responsible for making final determinations of course validity. In this final review, CYBERCOM may hold discussions with key stakeholders, audit the course, review student feedback on the course, or review evaluation data from the course to inform its final validation determination. 
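The validation process described in the draft guidance moves a course through three successive reviews. The sketch below models those stages as a simple ordered workflow; the stage labels paraphrase the report, and the state-machine framing is an assumption for illustration, not an implementation of CYBERCOM's process.

```python
# Validation stages paraphrased from CYBERCOM's draft course validation
# guidance as described in the report; the workflow model is illustrative.
STAGES = [
    "submitting service assembles course information",
    "joint curriculum lead reviews and recommends changes",
    "CYBERCOM makes the final validation determination",
]

def advance(current_stage):
    """Return the stage that follows current_stage, or 'validated' once
    CYBERCOM's final determination is complete."""
    i = STAGES.index(current_stage)
    return STAGES[i + 1] if i + 1 < len(STAGES) else "validated"
```

Framing the process this way makes the report's point concrete: a course cannot be considered validated until the final CYBERCOM stage completes, which is why service plans without submission time frames leave the end date unknowable.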
Our review of the services’ training transition plans found that the Army’s and Air Force’s plans address course validation to some degree, but they do not identify specific time frames for completing course validation. Specifically, the Army’s plan identifies the milestones, dates, and resources for the submission of two of its analyst and planner courses to CYBERCOM for validation, but it does not indicate when the service will submit its Cyber Protection Team Core Training Course for validation. The Air Force’s plan establishes a timeline for developing, finalizing, and distributing course validation guidance, but it does not have time frames or milestones indicating a time for beginning the process of submitting courses to CYBERCOM for validation. Standards for Internal Control in the Federal Government highlights the need to define objectives in specific terms, to include how objectives are to be achieved and time frames for their achievement. For example, the Navy’s plan indicates that the four courses for which it is responsible will be iteratively validated between fiscal years 2019 and 2021. While a 24-month time frame is broad and it may be challenging for CYBERCOM and the other services to know with precision when the Navy will complete its course validation efforts, the plan includes a time frame that CYBERCOM and the services can use for further discussion and planning purposes. The plans submitted by the Army and the Air Force indicate that the course validation time frames for phase two foundational courses are unknown because course validation is still dependent upon CYBERCOM’s review. The Army’s plan includes time frames for submitting to CYBERCOM two of the three courses it is responsible for developing, but one of the courses does not have any time frames.
Further, the Air Force plan includes time frames for developing guidance on how to perform course validation that only carry it through September 2018; it does not have time frames for actually carrying out its course validation processes. As the military services assume phase two foundational training responsibilities from CYBERCOM, it is important that they coordinate with CYBERCOM to establish a timeline for course validation, as appropriate. With a clearer idea of which information can appropriately be removed from training courses, the services will be able to make informed decisions to balance the cost-effectiveness of the training with delivering trained cyber personnel to CMF teams more quickly. However, without an established time frame to assess and validate the efficiency and effectiveness of all phase two individual foundational training against established expectations, DOD will not be well positioned to reasonably assure that the phase two foundational training meets the needs of the CMF and its mission.

The Military Services’ Plans Do Not Comprehensively Assess Personnel Training Requirements

Training plans should be detailed enough to provide insight into the number of people needed to fill specific positions to sustain an organization. As part of the training transition process, CYBERCOM required the military services to submit implementation plans that identify, among other things, training requirements and execution. Also, according to our prior work published in Human Capital: A Guide for Assessing Strategic Training and Development Efforts in the Federal Government, training plans should be designed to determine the skills and competencies a workforce needs to prepare for current, emerging, and future agency needs in pursuit of its missions. These needs include the size of the workforce; its deployment across the organization; and the knowledge, skills, and abilities needed for the agency to pursue its current and future missions.
To ensure a strategic workforce planning approach, it is important that agencies consider how hiring, training, and other human capital strategies can be aligned to support mission success. The Army, Navy, and Air Force developed training transition implementation plans to address training requirements and execution to some degree, but the plans do not identify the number of personnel or teams and the specific training activities needed across all phases of training to maintain the CMF. For example, neither the Army nor the Air Force plan identifies the number (average or total) of personnel for each of the work roles described in figure 2 (for example, cyber operators, intelligence analysts, linguists) that the military services need to complete phase two foundational training courses to maintain the appropriate sizing and deployment of personnel across CMF teams. Additionally, the Army and Air Force plans do not identify the number of personnel or teams needed to conduct phase three (collective) and phase four (sustainment) training in future years. In contrast, the Navy’s plan identifies the average number of personnel who would need to take specific phase two foundational courses—including those being developed by other services and CYBERCOM—to maintain its CMF teams. However, the Navy’s plan does not include this same information for phases three and four of training. The Marine Corps did not address training requirements and execution within its implementation plan. According to officials from the Army and the Air Force, the February 2018 documents they provided in response to CYBERCOM’s requirement do not include plans that identify training requirements because submission of that information was not required by CYBERCOM. 
However, a November 2017 CYBERCOM memorandum clearly directed the military services with joint curriculum lead responsibilities to submit plans that support implementation of the department's CMF Training Transition Plan, including training requirements execution data. Having a comprehensive plan that identifies the number of personnel or teams needed to accomplish specific training activities would help the services to better manage the number of personnel who need to be rotated into the CMF teams. It would also help the military services coordinate with each other on course offerings by providing situational awareness of the number of personnel from other services who could attend their courses in any given year. For example, the Air Force would know how many Army, Navy, and Marine Corps personnel would attend the courses being offered by the Air Force. Without a plan that comprehensively assesses and identifies the services' training needs for each type of personnel, DOD cannot reasonably ensure that its training plan will support the transition to a joint training model or be aligned with its stated goal to maintain a trained and ready force.

CYBERCOM Was Unable to Provide a Plan for Establishing Independent Assessors for Phase Three Collective Training

As of June 2018, CYBERCOM had not provided a plan for establishing independent assessors to evaluate and certify the completion of phase three collective training for CMF teams. CYBERCOM's CMF Training and Readiness Manual explains that evaluations are necessary to assess readiness and provide commanders with a process to determine a team's proficiency in the tasks it must perform during a mission. Assessors play an important role in this evaluation process by judging the performance of CMF teams using CYBERCOM's evaluation forms, which establish common evaluation criteria to determine whether the team being evaluated has met the certification standards.
CYBERCOM officials told us that to evaluate teams completing phase three certification through CYBERCOM events (approximately 50 percent, according to agency officials), the command provided a joint team of assessors. CYBERCOM and service officials told us that the services provided their own assessors for teams that completed phase three training through their respective service-hosted exercises. In our discussions with them, Army and Air Force officials identified two challenges they have experienced with the services providing assessors to evaluate their own teams, both of which could lead to subjectivity in CMF team evaluations. First, in some instances the assessors have come from within the same chain of command as the CMF team and thus are not truly independent. Standards for Internal Control in the Federal Government discusses the importance of segregation of duties in designing control activities so that incompatible duties are segregated in order to mitigate the risk of management override of internal control. In this case, having an assessor from the same chain of command evaluate a CMF team's performance in a certification event presents an increased risk of fraud through management override. Second, while the CMF Training and Readiness Manual includes checklists that assessors can use to evaluate team performance, according to service officials, the manual does not provide clear guidance on how to evaluate whether the tasks and performance standards have been sufficiently met by the team. The absence of such information could lead to subjective evaluations as to whether a team met the desired performance standard. According to one service official, these challenges could be addressed if CYBERCOM were to provide an expert who evaluates the training tasks and performance standards—an action that could lead to a more consistent application of evaluation criteria.
When we asked officials from CYBERCOM's training directorate about whether the command could provide more oversight for certification events, the officials acknowledged that, among other tasks, the command is responsible for ensuring that assigned joint cyber forces are trained, certified, and interoperable with other forces. The officials said that to do this, the command will use established training standards and develop a plan to train and certify CMF team evaluators to a set of standardized criteria. Command officials said they believe this will enable the services and CMF teams to have qualified assessors who are trained and certified by CYBERCOM to consistently evaluate the performance of the CMF teams based on joint standards. With this capability, for example, a Navy Cyber Protection Team assessor can be used by an Army Cyber Protection Team to evaluate that team in an operation, exercise, or training event. This training capacity should enhance the interoperability between the services and allow for consistent evaluation of a team's performance. However, as of June 2018, CYBERCOM had not provided a plan to train and certify assessors from across the services, because such a plan had not yet been developed. Standards for Internal Control in the Federal Government explains that in defining objectives, management should clearly define what is to be achieved, how it will be achieved, and the time frames for achievement. Documenting these objectives in a plan will also help formalize the new process and ensure that the appropriate managerial emphasis is given to the effort. DOD has used similar mechanisms to implement changes to cyber training in the past, such as developing the CMF Training Transition Plan in response to moving phase two foundational training responsibility from CYBERCOM to the military services.
Since phase three certification events act as a quality control mechanism for CMF teams, it is important that the events be independently evaluated to ensure that CMF teams are trained to a consistent standard. Without a documented plan to train and certify assessors to evaluate CMF phase three collective training certification events, the CMF teams will not be consistently evaluated as they are operationally certified.

CYBERCOM Has Leveraged Other Cyber Experience to Meet Training Requirements, but It Has Not Established Master Training Task Lists for Courses

CYBERCOM Has Established a Training Exemption Process for CMF Personnel Who Have Relevant Prior Experience

CYBERCOM assesses the prior experience of CMF personnel to meet training requirements through a process known as individual training equivalency. This process allows personnel to be exempted from specific training courses by showing that they have already met the learning objectives of the course through their prior experience. CYBERCOM established an Individual Training Equivalency Board consisting of subject matter experts and representatives from CYBERCOM, the National Security Agency, and service cyber components who review the applications and recommend whether equivalency should be granted. The Individual Training Equivalency Board reviewed approximately 700 applications for equivalency from September 2013 through April 2018, and more than three-quarters of those applicants had at least one course exemption approved. According to officials from CYBERCOM's training directorate, which is responsible for administering the individual equivalency process, there are a number of reasons why requests for course exemptions are not approved. For example, some applicants are denied for administrative reasons, such as not filling out the paperwork correctly. Also, applicants are not eligible to receive exemptions for courses that are not part of their work role requirements, but some personnel nonetheless apply for them.
Officials also said that board members do not deem some applicants' reported experiences as comparable to the knowledge and skills they would obtain from taking courses for which they seek exemptions. Based on our review of CYBERCOM's memorandums that document the approval or disapproval of approximately 700 individual requests for training exemptions, we observed that applicants typically requested exemptions for multiple courses, with some seeking exemptions for up to 16 courses. Altogether during this period, we found that CYBERCOM granted more than 1,400 equivalencies for approximately 90 different phase two foundational training courses. Certain courses were exempted more often than others. For example, the course for which CYBERCOM most frequently granted individual exemptions was the Joint Advanced Cyber Warfare Course. This 4-week course provides an orientation to CYBERCOM, the global cryptologic platform, the intelligence community, and allies and major partners in the conduct of cyber warfare operations, planning, and analysis of effects. Other courses that were commonly granted training exemptions included 1-week courses related to computer network exploitation, cyber offensive and defensive operations, and understanding network and operating system fundamentals. These courses teach the basic skills associated with performing CMF operations. Additionally, we found that CYBERCOM's Individual Training Equivalency Board approved approximately 50 exemptions for Intermediate Cyber Core, which is an 8-week course that CYBERCOM training officials described as providing the background and proficiency needed to identify, understand, and navigate the digital environment. The officials said that the course also provides an understanding of network operational methods and offensive and defensive cyber operation principles.
CYBERCOM Has Not Established Master Training Task Lists for Courses

CYBERCOM has not established master training task lists for phase two foundational training, a key set of standards the services are to use in preparing course equivalency standards. The task lists correlate to the knowledge, skills, and abilities that the services will use to develop learning objectives and course materials for training. They are also important in informing the services' ability to make equivalency application determinations because they form the learning objectives of the courses that may be bypassed. To determine whether an applicant's experience is equivalent to what would be taught in a course, the entity making the decision must know the learning objectives of the course. However, as of May 2018, CYBERCOM officials were unable to provide evidence that the command had developed master training task lists for phase two foundational CMF training courses, as required. The January 2017 CMF Training Transition Plan required CYBERCOM to provide all mission and support team master training task lists for the phase two foundational training courses to the military services no later than March 2018. Service and CYBERCOM officials said that they are holding monthly meetings to provide updates related to the training standards and other training transition-related information, but as of May 2018, CYBERCOM officials had not confirmed that they had provided the master training task lists to the services. Officials from the services told us that they need these master training task lists to develop clear decision rules as they assume responsibility for making equivalency decisions for phase two foundational training courses. When we interviewed CYBERCOM officials in February 2018, they told us that they were not aware of the requirement established in the CMF Training Transition Plan, but said they would start developing the master training task lists.
Establishing clear standards is particularly important at this time, because the services are scheduled to assume responsibility for administration of the individual training equivalency process for Cyber Protection Team phase two foundational training courses in October 2018. Until CYBERCOM establishes and disseminates the master training task lists for phase two foundational CMF courses, the military services are at risk of developing inconsistent decision rules for their training equivalency processes, and the development of such processes could be delayed, resulting in the funding of unnecessary training.

Conclusions

Developing and maintaining a trained cyber mission force is imperative to DOD's ability to achieve its missions in the connected world within which it operates. DOD has made progress toward its goals of building and maintaining a trained cyber mission force. As DOD starts to focus on maintaining a ready CMF, addressing gaps in its training plans and structure will help it reach those goals. Unlike the Navy, the Army and Air Force have not established time frames in their implementation plans for validating phase two foundational training, a gap that could contribute to training inefficiency and unnecessarily long time frames for training personnel. Further, the military services, by not clearly identifying the number of personnel they need to train, hinder planning and coordination efforts to ensure that the training infrastructure is sufficient and is used efficiently. In addition, the absence of a plan for CYBERCOM to establish independent assessors for phase three collective training certification events may lead to teams being certified to different standards. Also, not having the master training task lists necessary to establish clear decision rules for granting individual training exemptions for phase two foundational training courses may contribute to inconsistent personnel skill levels and inefficient use of training resources.
Focusing on maintaining sustainable readiness, as DOD has already begun to do, and addressing these weaknesses can lead to long-term improvements in the capability and capacity of its CMF.

Recommendations for Executive Action

We are making eight recommendations to DOD.

The Secretary of Defense should ensure that the Army, in coordination with CYBERCOM and the National Cryptologic School, where appropriate, establish a time frame to validate all of the phase two foundational training courses for which it is responsible. (Recommendation 1)

The Secretary of Defense should ensure that the Air Force, in coordination with CYBERCOM and the National Cryptologic School, where appropriate, establish a time frame to validate all of the phase two foundational training courses for which it is responsible. (Recommendation 2)

The Secretary of the Army should ensure that Army Cyber Command coordinate with CYBERCOM to develop a plan that comprehensively assesses and identifies specific CMF training requirements for phases two (foundational), three (collective), and four (sustainment), in order to maintain the appropriate sizing and deployment of personnel across the Army's CMF teams. (Recommendation 3)

The Secretary of the Navy should ensure that Fleet Cyber Command coordinate with CYBERCOM to develop a plan that comprehensively assesses and identifies specific CMF training requirements for phases three (collective) and four (sustainment) in order to maintain the appropriate sizing and deployment of personnel across the Navy's CMF teams. (Recommendation 4)

The Secretary of the Air Force should ensure that Air Forces Cyber coordinate with CYBERCOM to develop a plan that comprehensively assesses and identifies specific CMF training requirements for phases two (foundational), three (collective), and four (sustainment), in order to maintain the appropriate sizing and deployment of personnel across the Air Force's CMF teams.
(Recommendation 5)

The Commandant of the Marine Corps should ensure that Marine Corps Forces Cyberspace coordinate with CYBERCOM to develop a plan that comprehensively assesses and identifies specific CMF training requirements for phases two (foundational), three (collective), and four (sustainment), in order to maintain the appropriate sizing and deployment of personnel across the Marine Corps' CMF teams. (Recommendation 6)

The Secretary of Defense should ensure that the commander of CYBERCOM develops and documents a plan for establishing independent assessors to evaluate CMF phase three collective training certification events. (Recommendation 7)

The Secretary of Defense should ensure that the commander of CYBERCOM establishes and disseminates the master training task lists covered by each phase two foundational training course and conveys them to the military services, in accordance with the CMF Training Transition Plan. (Recommendation 8)

Agency Comments

We provided a draft of the FOUO version of this product to DOD for review and comment and worked with the department to develop this unclassified product. In its comments on the FOUO version of this report, reproduced in appendix II, DOD concurred with our recommendations. DOD also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to appropriate congressional committees; the Secretary of Defense; the office of the Principal Cyber Advisor; the Office of the Under Secretary of Defense for Personnel and Readiness; the Office of the Deputy Assistant Secretary of Defense for Cyber Policy; the Commander of CYBERCOM; the leadership of each of the service cyber components; and the director of the National Security Agency's National Cryptologic School. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9971 or kirschbaumj@gao.gov.
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

Appendix I: Roles and Responsibilities for Cyber Mission Force Training

Based on our review of related statutes, Department of Defense (DOD) instructions and directives, and other guidance, we found that various DOD officials have been assigned a variety of CMF training roles and responsibilities, summarized in table 1 below.

Appendix II: Comments from the Department of Defense

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the individual named above, Tommy Baril, Assistant Director; Tracy Barnes; Patricia Farrell Donahue; Ashley Houston; Amie Lesser; Randy Neice; Geo Venegas; and Cheryl Weissman made key contributions to this report.
Why GAO Did This Study

Developing a skilled cyber workforce is imperative to DOD achieving its offensive and defensive missions, and in 2013 it began developing CMF teams to fulfill these missions. CYBERCOM announced that the first wave of 133 such teams achieved full operational capability in May 2018. House Report 115-200 includes a provision for GAO to assess DOD's current and planned state of cyber training. GAO's report examines the extent to which DOD has (1) developed a trained CMF, (2) made plans to maintain a trained CMF, and (3) leveraged other cyber experience to meet training requirements for CMF personnel. To address these objectives, GAO reviewed DOD's cyber training standards, planning documents, and reports on CMF training; and interviewed DOD officials. This is an unclassified version of a For Official Use Only report that GAO previously issued.

What GAO Found

U.S. Cyber Command (CYBERCOM) has taken a number of steps—such as establishing consistent training standards—to develop its Cyber Mission Force (CMF) teams (see figure). To train CMF teams rapidly, CYBERCOM used existing resources where possible, such as the Navy's Joint Cyber Analysis Course and the National Security Agency's National Cryptologic School. As of November 2018, many of the 133 CMF teams that initially reported achieving full operational capability no longer had the full complement of trained personnel, and therefore did not meet CYBERCOM's readiness standards. This was caused by a number of factors, but CYBERCOM has since implemented new readiness procedures that emphasize readiness rather than achieving interim milestones, such as full operational capability. DOD has begun to shift focus from building to maintaining a trained CMF. The department developed a transition plan for the CMF that transfers foundational (phase two) training responsibility to the services.
However, the Army and Air Force do not have time frames for required validation of foundational courses to CYBERCOM standards. Further, services' plans do not include all CMF training requirements, such as the numbers of personnel that need to be trained. Also, CYBERCOM does not have a plan to establish required independent assessors to ensure the consistency of collective (phase three) CMF training. Between 2013 and 2018, CMF personnel made approximately 700 requests for exemptions from training based on their experience, and about 85 percent of those applicants had at least one course exemption approved. However, GAO found that CYBERCOM has not established training task lists for foundational training courses. The services need these task lists to prepare appropriate course equivalency standards.

What GAO Recommends

GAO is making eight recommendations, including that the Army and Air Force identify time frames for validating foundational CMF courses; the military services develop CMF training plans with specific personnel requirements; CYBERCOM develop and document a plan establishing independent assessors to evaluate training; and CYBERCOM establish the training tasks covered by foundational training courses and convey them to the services. DOD concurred with the recommendations.
Background

In 1992, the Prescription Drug User Fee Act (PDUFA) was enacted, in part, to provide additional funds for FDA to support the process of reviewing NDAs. PDUFA authorized FDA to collect user fees from drug sponsors to supplement its annual appropriation for salaries and expenses. PDUFA has been reauthorized every 5 years since 1992; most recently PDUFA VI reauthorized the prescription drug user fee program from fiscal year 2018 through fiscal year 2022. As part of each reauthorization process, FDA identifies goals in a commitment letter to Congress. In general, these goals identify a percentage of certain types of applications that FDA is expected to review within specified time frames, including goals for the time the agency takes to complete reviews of different types of NDAs upon initial submission and resubmission. For example, in its commitment letters for PDUFA V and VI, FDA committed to completing its initial review of 90 percent of priority NDAs that involve previously marketed or approved active ingredients within 6 months of receipt. As previously noted, four key features of NDAs are linked to drug development and review processes. For initial NDA reviews, the time frames for FDA's review that would meet its PDUFA V and VI commitments—its PDUFA goals—vary and are linked to three key features of the NDA. (See table 1.) The target time frame for the initial review of any specific NDA under these user fee commitments reflects the goals associated with all three of the key features. The fourth key feature of NDAs is whether they qualify for one of FDA's expedited programs. Whether designated as priority or standard, FDA may determine that NDAs for drugs intended to treat serious or life-threatening conditions qualify for development and review under one or more expedited programs. These programs confer specific benefits with the potential to help reduce the development or review time needed to bring a drug to market.
For example, some expedited programs provide for more intensive drug development guidance from FDA officials or allow the applicant to submit completed sections of the NDA for review before submitting the entire application. FDA’s expedited programs include accelerated approval, breakthrough therapy designation, and fast track designation. (See table 2.) NDAs must include substantial evidence of a drug’s effectiveness, which is typically drawn from clinical trials. In traditional clinical trials, patients receiving a new drug are often compared with patients receiving a placebo or a different drug. To maximize data quality, these clinical trials are usually randomized (patients are randomly assigned to either the group receiving the new drug or a comparison group) and double-blinded (neither the patients nor the investigators know who is receiving a particular treatment). According to FDA, although this type of study design is often the most powerful tool for evaluating the safety and effectiveness of new drugs, many traditional clinical trials are becoming more costly and complex to administer. Additionally, according to FDA, many new drugs are not easily evaluated using traditional approaches. For example, drugs intended for patients with rare diseases are difficult to evaluate due to the limited number of patients affected by the disease and available for study. The Cures Act was enacted on December 13, 2016, to accelerate the discovery, development and delivery of new treatments—including drugs—for patients. Among other things, the Cures Act includes provisions for FDA to evaluate and facilitate the use of evidence from sources other than traditional clinical trials to support safety and effectiveness determinations for new drugs. 
For example, FDA was directed to evaluate the potential use of evidence based on data that is routinely collected outside of traditional clinical trials from sources such as electronic health records, medical claims data, and disease registries; evidence from such data sources is referred to as real-world evidence. In the commitment letter associated with PDUFA VI, which was enacted on August 18, 2017, the agency agreed to certain goals relating to the use of real-world evidence in regulatory decision-making and also agreed to certain activities intended to facilitate the development and application of an additional source of evidence known as model-informed drug development. Although these nontraditional sources of evidence were included in NDAs prior to the enactment of the Cures Act and PDUFA VI, at the time this legislation was enacted, most of them were not widely used. For example, according to FDA officials, the NDAs that included real-world evidence were generally for drugs to treat oncology diseases or rare diseases.

FDA Divisions Differ in Proportions of NDAs Reviewed with One or More Key Features

Our analysis of the 637 original NDAs submitted from fiscal years 2014 through 2018 indicates that divisions differed in the proportions of NDAs they reviewed that had any one of three key features that are linked to time frames for initial review under FDA's PDUFA goals.
As examples:

- 6 percent of the NDAs reviewed by the dermatology and dental division had a priority review designation, while 56 percent of the NDAs reviewed by the anti-infective division had a priority review designation;

- 4 percent of the NDAs reviewed by the anesthesia, analgesia, and addiction division involved a new molecular entity, while 52 percent of the NDAs reviewed by the neurology division involved one; and

- none of the NDAs reviewed by the transplant and ophthalmology division involved a major amendment, while 36 percent of the applications reviewed by the gastroenterology and inborn errors division involved one.

(See fig. 1. App. IV provides more detailed information about differences between divisions in the number and proportion of NDAs with these key features.) We also found differences between divisions in the proportion of NDAs that they reviewed under an expedited program—the fourth key feature of NDAs. For example, none of the NDAs reviewed by the metabolism and endocrinology division qualified for one or more expedited programs, while 52 percent of the NDAs reviewed by the antiviral division qualified for one or more expedited programs. (See fig. 2. App. V provides more detailed information about differences between divisions in the number and proportion of NDAs that qualified for one or more expedited programs.) It is not unexpected that divisions differ in the proportion of their applications with key features linked to FDA's time frames for review or qualification for expedited programs because the divisions are responsible for different products. For example, some divisions, such as the oncology divisions, regulate products for conditions that are more likely to be serious or life-threatening, and therefore the NDAs reviewed by these divisions are more likely to qualify for priority review designation and expedited programs, compared with other divisions, such as the dermatology and dental division.
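The way key NDA features map to a target time frame under FDA's PDUFA goals can be sketched as a simple function. This is an illustrative simplification, not FDA's actual goal calculation: the function name and parameters are invented, the month counts reflect commonly cited PDUFA review goals (6 months priority versus 10 months standard, with additional time for new molecular entities and major amendments), a 30-day month is assumed, and table 1 in the report is the authoritative source for the actual time frames.

```python
def pdufa_target_days(priority: bool, new_molecular_entity: bool,
                      major_amendment: bool) -> int:
    """Approximate target time frame (in days) for an initial NDA review.

    Hypothetical sketch: month counts reflect commonly cited PDUFA goals,
    and a 30-day month stands in for FDA's calendar-based goal dates.
    """
    months = 6 if priority else 10      # priority vs. standard review goal
    if new_molecular_entity:
        months += 2                     # NME reviews carry a later goal date
    if major_amendment:
        months += 3                     # a major amendment can extend the goal
    return months * 30

# A priority, non-NME application without a major amendment has the
# shortest target; a standard NME with a major amendment has the longest.
print(pdufa_target_days(priority=True, new_molecular_entity=False,
                        major_amendment=False))   # 180
print(pdufa_target_days(priority=False, new_molecular_entity=True,
                        major_amendment=True))    # 450
```

Under this sketch, the target time frame used later in the report's regression analysis is simply the number of days from FDA's receipt of the NDA to the goal date implied by these features.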
FDA Divisions Vary in Their Initial Review Times for NDAs, Largely Due to PDUFA Goals

Our analysis of review times for the 637 original NDAs submitted from fiscal years 2014 through 2018 shows that FDA divisions differed in the number of days they took to complete their initial reviews. For example, the median time taken to complete an initial review of an NDA by the anti-infective division was about 2 months faster than the median time taken by the gastroenterology and inborn errors division. (For more information about initial review times, see app. VI.) We found, however, that these differences in initial review times largely reflected key features of the NDAs reviewed by the divisions, particularly those features linked to FDA's time frames for review under its PDUFA goals. We analyzed initial review times using a statistical regression with two variables reflecting key features of the NDAs—target time frame for review of the application under FDA's PDUFA goals (in days, from FDA's receipt of the NDA to FDA's targeted date for completion of the initial review) and number of expedited programs (0, 1, or 2 or more)—along with division as independent variables. We found that each of these variables was a significant determinant of initial review times. Specifically, our regression analysis shows that on average:

- The shorter the target time frame for initial review of the NDA under FDA's PDUFA goals, the shorter the initial review, and this target time frame was responsible for the majority of variation in initial review times.

- The greater the number of expedited programs for which the NDA qualified, the shorter the time FDA took to complete the initial review.

Controlling for the effects of these key NDA features, however, we found that most of the divisions' average review times were similar to (within 2 weeks of) each other. In contrast, the hematology and oncology divisions reviewed applications a bit more rapidly—about 2 or 3 weeks faster—than other divisions.
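A regression of the general shape described above can be illustrated with synthetic data. The sketch below is not GAO's model or data: the divisions, effect sizes, and sample are invented for demonstration. It shows how initial review time can be regressed on the PDUFA target time frame, the number of expedited programs, and division indicator variables, with ordinary least squares solved directly via the normal equations.

```python
import random

random.seed(0)

DIVISIONS = ["hematology", "oncology", "neurology", "dermatology"]

def simulate_nda():
    """One synthetic NDA: target days, expedited-program count, division, review time."""
    target = random.choice([180, 240, 300, 360])   # hypothetical PDUFA targets
    expedited = random.choice([0, 1, 2])
    division = random.choice(DIVISIONS)
    # Invented 'true' relationship: review time tracks the target, shrinks
    # with expedited programs, and two divisions review slightly faster.
    division_effect = -18 if division in ("hematology", "oncology") else 0
    review = 0.9 * target - 12 * expedited + division_effect + random.gauss(0, 10)
    return target, expedited, division, review

def design_row(target, expedited, division):
    """Intercept, continuous terms, and division dummies (baseline: dermatology)."""
    return [1.0, float(target), float(expedited)] + [
        1.0 if division == d else 0.0 for d in DIVISIONS[:-1]]

def ols(X, y):
    """Solve the normal equations (X'X)b = X'y with Gaussian elimination."""
    n, k = len(X), len(X[0])
    A = [[sum(X[i][r] * X[i][c] for i in range(n)) for c in range(k)]
         for r in range(k)]
    b = [sum(X[i][r] * y[i] for i in range(n)) for r in range(k)]
    for col in range(k):                                   # forward elimination
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            A[r] = [a - f * ac for a, ac in zip(A[r], A[col])]
            b[r] -= f * b[col]
    coef = [0.0] * k
    for r in range(k - 1, -1, -1):                         # back substitution
        coef[r] = (b[r] - sum(A[r][c] * coef[c]
                              for c in range(r + 1, k))) / A[r][r]
    return coef

data = [simulate_nda() for _ in range(400)]
coef = ols([design_row(t, e, d) for t, e, d, _ in data],
           [r for _, _, _, r in data])
print("review days per target day:", round(coef[1], 2))
print("effect per expedited program (days):", round(coef[2], 1))
print("hematology effect vs. baseline (days):", round(coef[3], 1))
```

In this toy setup the fitted coefficients recover the built-in relationships: a target-day slope near 0.9, a negative coefficient on the expedited-program count, and negative dummies for the faster divisions, mirroring the pattern GAO reports after controlling for key NDA features.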
Figure 3 illustrates the results of our analyses. The panel on the left shows the variation in the divisions' actual average review times. The panel on the right shows the estimated average review times, after accounting for key application features, that is, what the review times would have been if each division had reviewed equal numbers of applications with these key features. We asked FDA officials what might contribute to somewhat faster review times by the hematology and oncology divisions, and they told us that a number of variables could have contributed to these differences. For example, the officials told us that applicants differ in their level of experience, which can affect the quality of the NDA or the speed of response to FDA's requests for information; applications differ in complexity; and the oncology and hematology divisions could differ from others in their risk/benefit considerations. As previously noted, some divisions, such as the oncology divisions, regulate products for conditions that are more likely to be serious or life-threatening compared with other divisions, such as the dermatology and dental division, and risk/benefit considerations can differ across conditions that vary in how serious or life-threatening they are. For example, the potential benefits of drugs that carry substantial risks for dangerous side effects would likely be weighed differently if the drug is intended to address a life-threatening illness for which there is no other treatment than if the drug is intended to address an illness that is not life-threatening or for which there is an alternative treatment.

FDA Is Implementing Initiatives to Evaluate and Facilitate the Use of Different Evidence Sources to Support NDAs

FDA has several initiatives underway to evaluate and facilitate FDA review divisions' and drug sponsors' use of evidence derived from sources other than traditional clinical trials to support NDAs.
(See table 3 for a description of these different evidence sources and each initiative.) According to FDA officials, implementing these initiatives can help ensure that when drug sponsors utilize these sources of evidence in NDAs, the evidence is of sufficient quality to be used in regulatory decision-making and that there is consistency across FDA review divisions in their evaluation of the evidence. FDA officials also said that although complex innovative trial designs might replace traditional clinical trials as evidence in NDAs, real-world evidence is more likely to be used to supplement clinical trial data. Although the initiatives are not restricted to any particular type of disease or patient population, according to FDA officials, some initiatives may be more relevant for certain types of diseases or patient populations than others. For example, according to FDA officials:

- Real-world evidence may be most relevant for diseases that have outcomes that are consistently collected in the health care system.
- Clinical outcome assessments (one aspect of patient-focused drug development) may be most relevant for diseases that are chronic, symptomatic, or affect functioning and activities of daily living.
- Complex innovative trial designs may be most relevant for situations in which the population size is small or limited, such as pediatric populations, or where there is an unmet medical need, such as rare diseases.

Our review of FDA documentation and interviews with FDA officials show that FDA has taken steps to implement each of these five initiatives. These steps include conducting public workshops with key stakeholders, issuing guidance for industry and FDA staff, initiating pilot programs, and developing FDA staff capacity, including by providing training and other educational resources. (See table 4 for examples of key activities by initiative.)
These and future planned activities—including issuing additional guidance and revising relevant FDA policies and procedures—are intended to address deliverables for FDA to accomplish through 2021 that are outlined in the Cures Act and the PDUFA VI commitment letter. According to FDA officials, the agency intends to meet these deliverables, though some of the activities implemented under the initiatives, such as certain pilot programs, will likely extend beyond 2021. Although implementation is still in progress for all of the initiatives, FDA officials reported some outcomes. For example, since the launch of the model-informed drug development pilot program, the agency has received two NDA supplements that incorporated model-informed drug development concepts discussed during pilot program meetings. Additionally, officials told us there has been a recent increase in investigational new drug submissions utilizing complex innovative trial designs. FDA officials also reported an increase in biomarker submissions under the drug development tool qualification program, and continued growth of the clinical outcome assessment qualification program. FDA expects that fully implementing the initiatives will lead to further increases in the use of evidence from sources other than traditional clinical trials.

Agency Comments

We provided a draft of this report to the Department of Health and Human Services for review and comment. The department provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of the Department of Health and Human Services, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-7114 or dickenj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VII.

Appendix I: Methodology for Data Analyses

To determine (1) how Food and Drug Administration (FDA) divisions differ in the proportion of new drug applications (NDA) they review with key features linked to review time goals and expedited programs and (2) how FDA review divisions differ in the time taken to complete initial reviews and the extent to which key features of NDAs contribute to those differences, we analyzed data from FDA. We also interviewed FDA officials about the data and their review processes.

Data

We obtained data regarding all NDAs submitted to FDA’s Center for Drug Evaluation and Research (CDER) from fiscal years 2014 through 2018. These data included information about features that distinguish NDAs from one another, including which division was responsible for the review. The data also included information through March 31, 2019, about the dates when FDA received and completed a review of each NDA, along with the target dates for completion of review under FDA’s goals in commitment letters associated with the Prescription Drug User Fee Act (PDUFA) reauthorizations for fiscal years 2013 through 2017 (PDUFA V) and fiscal years 2018 through 2022 (PDUFA VI). To ensure meaningful analysis of review times, we excluded NDAs for which FDA had not completed an initial cycle of review. Of 686 NDAs submitted in fiscal years 2014 through 2018, the applicant withdrew 10 NDAs prior to completion of FDA’s initial review and 39 NDAs were still under FDA review as of March 31, 2019, leaving 637 NDAs for which FDA had completed an initial review.
To assess the reliability of these data, we conducted a series of electronic and logic tests to identify missing data or other anomalies. These analyses were informed by our review of relevant documentation and interviews with knowledgeable FDA officials. As part of our assessment of reliability, we worked with FDA to identify and correct information about certain NDAs in a small number of instances in which we identified discrepancies. Using these methods, we determined that the remaining data were sufficiently reliable for our purposes. Unless otherwise specified, the results we present are statistically significant at the 0.05 level.

Proportions of NDAs with Key Features

To determine how FDA divisions differ in the proportion of NDAs they review with key features linked to FDA’s time frames for initial reviews and expedited programs, we conducted a series of chi-square tests comparing the distributions of the 637 NDAs with and without specific features across divisions. These key features included:

- whether the NDA had a priority review designation (a designation applied by FDA if the product would provide a significant therapeutic improvement in the safety and effectiveness of the prevention, diagnosis, or treatment of a serious condition when compared to available drugs) or instead had a standard designation;
- whether the NDA did or did not involve a new molecular entity—an active ingredient that had not previously been marketed or approved for use as a drug in the United States;
- whether the NDA did or did not involve a major amendment (a submission, while a pending NDA is under FDA review, of additional information that may include a major new clinical safety or efficacy study report or major new analyses of studies, among other things); and
- whether the NDA did or did not qualify for an expedited program (accelerated approval, breakthrough therapy designation, or fast track designation), programs intended to help reduce the time involved in developing or
reviewing certain drugs that have the potential to treat serious or life-threatening conditions. (See table 5 for relevant statistics from these chi-square tests.)

Initial Review Times

To determine how FDA review divisions differ in the time taken to complete initial reviews, we conducted a preliminary regression analysis of 637 NDAs with the number of days an FDA division took to complete its initial review as the dependent variable and division as a single independent variable. We defined the time to complete a review as the number of days from FDA’s receipt of the NDA to the agency’s completion of the initial review by taking regulatory action. To determine the extent to which key NDA features contributed to differences between divisions in the time taken to complete initial reviews, we conducted a multiple regression analysis of the number of days FDA took to complete its initial review with division as an independent variable, along with two other independent variables to control for the key NDA features:

Target time frame for initial review of the NDA under FDA’s PDUFA goals. Three key NDA features are linked to time frames for FDA’s initial review under its PDUFA goals—whether the NDA was priority or standard, did or did not involve a new molecular entity, and did or did not involve a major amendment. To control for these three features simultaneously, we counted the number of days from FDA’s receipt of the NDA until FDA’s target date for completion of the initial review under FDA’s PDUFA goals, and used that variable—the target time frame for review under FDA’s PDUFA goals—as an independent variable. We identified five NDAs for which FDA’s review time was exceptionally long in comparison to the target time frame for review under its PDUFA goals, and we asked FDA officials about them.
FDA officials stated that these reviews were substantially delayed because of complicated manufacturing site issues, complicated legal and regulatory issues, or emerging public health issues requiring last-minute advisory committee meetings—conditions that we deemed sufficiently unusual to exclude these five NDAs from further statistical analyses of review times.

Number of expedited programs for which the NDA qualified. Another key NDA feature is whether it qualified for one or more expedited programs, programs with the potential to help reduce the development or review time needed to bring a drug to market. We controlled for this feature by including number of expedited programs (0, 1, or 2 or more) as an independent variable in our multiple regression analysis.

Thus, we tested the effect of division on initial review times for 632 NDAs while controlling for the target time frame for review under FDA’s PDUFA goals and qualification for expedited programs. (See tables 6 and 7 for relevant statistics from this multiple regression analysis.) Our multiple regression analysis allowed us to test a specific hypothesis about the effect of division on review times, namely, whether divisions differed in their review times after controlling for the key features of NDAs. This regression analysis did not test a model of review times—that is, we did not attempt to identify all variables that affect review times, nor did we seek to identify the specific set or combination of variables within our data that had maximum explanatory power. Our analyses indicated that variation remained in initial review times, even after we controlled for these variables. It is important to note that an array of factors might be expected to influence review times, including not just those factors that were captured in our analysis, but also factors such as state of the science and quality of the application.
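To make the structure of this multiple regression concrete, the sketch below fits initial review days to a PDUFA target time frame, a count of expedited programs, and a division indicator by ordinary least squares. It is an illustration only: the NDA rows, coefficient values, and two-division setup are fabricated for this example and are not GAO's data, code, or results.

```python
# Illustrative sketch only: fabricated data, not GAO's NDAs or results.
# Fits review_days ~ intercept + target_days + n_expedited + division dummy.

def ols(X, y):
    """Solve the normal equations (X'X) b = X'y by Gaussian elimination."""
    n, k = len(X), len(X[0])
    XtX = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(k)] for a in range(k)]
    Xty = [sum(X[i][a] * y[i] for i in range(n)) for a in range(k)]
    M = [XtX[r][:] + [Xty[r]] for r in range(k)]          # augmented matrix
    for c in range(k):
        p = max(range(c, k), key=lambda r: abs(M[r][c]))  # partial pivoting
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, k):
            f = M[r][c] / M[c][c]
            for j in range(c, k + 1):
                M[r][j] -= f * M[c][j]
    b = [0.0] * k
    for r in range(k - 1, -1, -1):                        # back substitution
        b[r] = (M[r][k] - sum(M[r][j] * b[j] for j in range(r + 1, k))) / M[r][r]
    return b

# Each hypothetical NDA: (target time frame in days, expedited-program count,
# indicator for a second division "B").
ndas = [(180, 0, 0), (300, 0, 0), (360, 1, 0), (180, 2, 0),
        (300, 1, 1), (360, 0, 1), (180, 1, 1), (300, 2, 1)]
# Fabricated relationship: longer targets lengthen reviews, expedited programs
# shorten them, and division B adds a fixed offset.
review_days = [20 + 0.9 * t - 25 * e + 15 * d for t, e, d in ndas]

X = [[1.0, t, e, d] for t, e, d in ndas]  # intercept, target, expedited, division
coef = ols(X, review_days)                # recovers approximately [20, 0.9, -25, 15]
```

Because the fabricated data are exactly linear, the fit recovers the planted coefficients; with real review-time data the same design would yield estimated effects with uncertainty, as in the report's tables 6 and 7.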
With data from 632 NDAs distributed unevenly across 15 divisions, meaningful tests of additional variables or their interactions were not possible. Nonetheless, we conducted exploratory analyses that included other potentially relevant variables in addition to the target time frame for review under FDA’s PDUFA goals, number of expedited programs, and division. In separate regression analyses, we examined (a) the fiscal year in which FDA received the NDA and (b) whether the application was a biologic license application (BLA), an NDA based on information from studies conducted by the applicant, or an NDA based on at least some information from studies not conducted by or for the applicant. We did not find evidence of a consistent effect of either of these additional factors on review times, but in light of the number of NDAs, we cannot exclude the possibility that one or more of these factors affects review times. In a third exploratory analysis, we examined the outcome of the initial review—(a) approval; (b) tentative approval, which FDA grants if the NDA meets requirements for approval, but cannot be approved due to a patent or exclusivity period for a listed drug; or (c) issuance of a letter to the applicant called a complete response letter, in which FDA describes the specific deficiencies the agency identified and recommends ways to make the application viable for approval. This analysis suggested that NDAs that were approved for marketing at the end of the initial cycle of review were reviewed slightly faster on average than other NDAs, but this result should be viewed with caution because NDAs with certain initial review outcomes were few in number and unevenly distributed across divisions. For example, very few of the NDAs (11) reviewed through one or more expedited programs resulted in tentative approval.
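The chi-square comparisons described earlier in this appendix can also be sketched in a few lines of code. The counts below are fabricated for illustration (they are not the report's data), and the two division labels are hypothetical; the sketch computes the Pearson chi-square statistic for a divisions-by-designation contingency table.

```python
# Illustrative sketch only: fabricated counts, not GAO's data.
# Pearson chi-square statistic for a divisions x (priority, standard) table.

observed = {
    "division X": [3, 47],   # [priority NDAs, standard NDAs] (hypothetical)
    "division Y": [28, 22],
}

rows = list(observed.values())
row_totals = [sum(r) for r in rows]
col_totals = [sum(r[j] for r in rows) for j in range(2)]
grand = sum(row_totals)

chi_square = 0.0
for i, r in enumerate(rows):
    for j, obs in enumerate(r):
        expected = row_totals[i] * col_totals[j] / grand  # row total * col total / N
        chi_square += (obs - expected) ** 2 / expected

# A large statistic relative to the chi-square distribution (1 degree of
# freedom for this 2x2 table) suggests the divisions differ in their share
# of priority NDAs.
```

In practice a library routine (for example, SciPy's `chi2_contingency`) would also return the p-value used to judge significance at the 0.05 level.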
Appendix II: Total Times Taken by FDA Divisions to Review New Drug Applications Received in Fiscal Years 2014 through 2018

The Food and Drug Administration’s (FDA) Center for Drug Evaluation and Research (CDER) divisions differed in the total number of days they took to complete reviews of 637 new drug applications (NDA) submitted from fiscal years 2014 through 2018 and completed by March 31, 2019. (See fig. 4.) Importantly, these times reflect differences associated with the number of completed review cycles, FDA’s target time frames for review under its goals in commitment letters associated with the Prescription Drug User Fee Act (PDUFA) reauthorizations for fiscal years 2013 through 2017 (PDUFA V) and fiscal years 2018 through 2022 (PDUFA VI), and number of expedited programs.

Number of review cycles. The number of cycles of review to which the NDAs we examined were subject was largely dependent on factors that were not under FDA’s control, namely, the applicant’s actions and timing. When a cycle of review ends with an FDA action, that action can be (a) approval, which allows the applicant to market the drug; (b) tentative approval, which FDA grants if the NDA meets requirements for approval, but cannot be approved due to a patent or exclusivity period for a listed drug; or (c) issuance of a letter to the applicant called a complete response letter, in which FDA describes the specific deficiencies the agency identified and recommends ways to make the application viable for approval. The applicant may respond to either tentative approval or a complete response letter by resubmitting a revised application, triggering a new cycle of review; it is up to the applicant to decide whether to resubmit the application. In addition, NDAs that were submitted earlier in time would have a greater chance of being resubmitted and reviewed by March 31, 2019, than applications submitted later in time.
The number of completed review cycles ranged from one to four cycles:

- 637 NDAs went through a completed first (initial) cycle review;
- 99 of those 637 NDAs went through a completed second cycle review;
- 20 of those 99 NDAs went through a completed third cycle review;
- 3 of those 20 NDAs went through a completed fourth cycle review.

Target time frames for review. Review times reflect differences in time frames for review under FDA’s PDUFA goals. The target time frames for review ranged from less than 6 months to 15 months for the first cycle and from less than 2 months to 9 months for later cycles of review.

Number of expedited programs. These review times also reflect differences associated with the number of FDA’s expedited programs for which NDAs qualified. In general, these expedited programs are designed to help reduce the development or review time needed for drugs intended to treat serious or life-threatening conditions.

Appendix III: Requests for Breakthrough Therapy and Fast Track Designations, Fiscal Years 2013 through 2018

Two of the Food and Drug Administration’s (FDA) expedited programs for new drugs intended to treat serious or life-threatening conditions—breakthrough therapy designation and fast track designation—must be requested by the drug sponsor. These programs are intended to help reduce the development or review time needed to bring a drug to market by offering benefits such as more intensive drug development guidance from FDA officials or by allowing the applicant to submit completed sections of the NDA for review before submitting the entire application. The request is normally made while the drug sponsor is conducting clinical trials or when seeking FDA’s permission to collect clinical trial data, although the request may also be made when submitting a new drug application (NDA) or while the NDA is under review.
FDA’s Center for Drug Evaluation and Research (CDER) divisions are responsible for determining whether requests qualify for these expedited programs based on evidence the drug sponsors provide in support of the requests. To qualify for breakthrough therapy designation, the drug sponsor must present preliminary clinical evidence involving one or more clinically significant endpoints that indicate that the drug may demonstrate substantial improvement over available therapies. To qualify for fast track designation, the drug sponsor must either provide evidence demonstrating the drug’s potential to address unmet need or document that the drug is designated as a qualified infectious disease product. FDA may grant or deny the request, or the drug sponsor may withdraw the request before FDA renders a decision. If FDA grants the designation, the drug sponsor may subsequently withdraw from the designation, or FDA may rescind either designation if the drug no longer meets the qualifying criteria. We obtained data regarding all requests for breakthrough therapy and fast track designations submitted to CDER from fiscal years 2013 through 2018. These data included information about which division was responsible for the review and the outcome of the request—whether it was granted or denied or whether the drug sponsor withdrew the request before FDA reached a decision. To assess the reliability of these data, we conducted a series of electronic and logic tests to identify missing data or other anomalies. These analyses were informed by our review of relevant documentation and interviews with knowledgeable FDA officials. Using these methods, we determined that the data were sufficiently reliable for our purposes. We examined these data to determine whether there were any material differences between divisions in the frequency of possible outcomes. 
Our analyses focused on the outcomes and did not allow us to determine whether divisions differed in their application of the stated criteria.

Breakthrough therapy designation. We found few differences across divisions in the frequency of the possible outcomes of requests for breakthrough therapy designation:

- Of 634 requests for breakthrough therapy designation (including nine requests submitted with or after the NDA submission), 39 percent were granted, 48 percent were denied, and 13 percent were withdrawn by the drug sponsor before FDA reached a decision.
- Divisions differed widely in the number of requests for breakthrough therapy designation they received, from 0 for the nonprescription drug division to 102 for one of FDA’s two oncology divisions.
- With two exceptions, the numbers of these requests that were granted, denied, or withdrawn for each division were similar to what would be expected based on the overall frequency of the possible outcomes. Requests to the hematology division were withdrawn more frequently than requests to other divisions (32 percent) and that division denied requests less frequently (17 percent) than other divisions. The neurology division denied more (81 percent), and granted fewer (13 percent), requests for breakthrough therapy designation than other divisions.
- Within the time period we studied, the drug sponsor withdrew from breakthrough therapy designation after it was granted in six cases and FDA rescinded the designation in 14 cases.

Fast track designation. Similarly, we found few differences across divisions in the frequency of the possible outcomes of requests for fast track designation:

- Of 965 requests for fast track designation (including 35 requests submitted with or after the NDA submission), 71 percent were granted, 24 percent were denied, and 5 percent were withdrawn by the drug sponsor before FDA reached a decision.
Again, divisions differed widely in the number of requests for fast track designation they received, from 2 for the nonprescription drug division to 133 for the neurology division. The numbers of these requests that were granted, denied, or withdrawn for each division were generally similar to what would be expected based on the overall frequency of the possible outcomes, although the anti-infective division granted more (91 percent), and denied fewer (6 percent), requests for fast track designation than other divisions. Within the time period we studied, no drug sponsor withdrew from fast track designation after it was granted, nor did FDA rescind any such designation.

Appendix IV: New Drug Applications with Key Features Linked to Time Frames for Review, Fiscal Years 2014 through 2018

Pursuant to the Prescription Drug User Fee Act (PDUFA) and its subsequent reauthorizations, the Food and Drug Administration (FDA) collects user fees from drug sponsors to supplement its annual appropriation for salaries and expenses. As part of each reauthorization process, FDA identifies goals in a commitment letter to Congress, including goals for the time the agency takes to complete reviews of different types of drug applications upon initial submission and resubmission. In general, these goals identify a percentage of certain types of applications that FDA is expected to review within specified target time frames.
For initial NDA reviews—reviews of the NDA as originally submitted—FDA’s target time frames for review that would meet its PDUFA goals vary and are linked to three key NDA features that reflect the drug or the applicant’s action: (1) whether or not the application receives priority review designation, which indicates that the drug could provide significant therapeutic improvements in the safety and effectiveness of the prevention, diagnosis, or treatment of a serious condition when compared to available drugs; (2) whether or not the application involves a new molecular entity—an active ingredient that has not been previously marketed or approved for use in the United States; and (3) whether or not the applicant submitted a major amendment while the NDA was pending, that is, while under FDA’s review. The target time frame for review for any specific NDA reflects all three of these features. Reviews are conducted by one of the agency’s Center for Drug Evaluation and Research (CDER) divisions, each of which specializes in a specific group of drug products, such as hematology or neurology. As shown in table 8, divisions differed in the numbers and proportions of NDAs they reviewed that had the features linked to time frames for review under FDA’s PDUFA goals.

Appendix V: New Drug Applications That Qualified for Expedited Programs, Fiscal Years 2014 through 2018

The Food and Drug Administration (FDA) may determine that NDAs for drugs intended to treat serious or life-threatening conditions qualify for one or more expedited programs. These programs confer specific benefits with the potential to help reduce the development or review time needed to bring a drug to market; for example, some expedited programs provide for more intensive drug development guidance from FDA officials or allow the applicant to submit completed sections of the NDA for review before submitting the entire application.
FDA’s expedited programs include accelerated approval, breakthrough therapy designation, and fast track designation. Reviews are conducted by one of the agency’s Center for Drug Evaluation and Research (CDER) divisions, each of which specializes in a specific group of drug products, such as hematology or neurology. As shown in table 9, divisions differed in the proportions of NDAs they reviewed that qualified for expedited programs.

Appendix VI: Times Taken to Complete Initial Reviews of New Drug Applications Received from Fiscal Year 2014 through 2018

The Food and Drug Administration’s (FDA) Center for Drug Evaluation and Research (CDER) divisions differed in the total number of days they took to complete initial reviews of new drug applications (NDA) received from fiscal years 2014 through 2018 and completed by March 31, 2019. (See fig. 5.) These review times reflect differences associated with FDA’s target time frames for initial review under its goals in commitment letters associated with the Prescription Drug User Fee Act (PDUFA) reauthorizations for fiscal years 2013 through 2017 (PDUFA V) and fiscal years 2018 through 2022 (PDUFA VI). These target time frames for review are linked to specific features of the NDA and ranged from less than 6 months to 15 months for the initial review. These review times also reflect differences associated with the number of expedited programs for which NDAs qualified.

Appendix VII: GAO Contact and Staff Acknowledgments

GAO Contact

John E. Dicken, (202) 512-7114 or dickenj@gao.gov.

Staff Acknowledgments

In addition to the contact named above, William Hadley (Assistant Director), Geri Redican-Bigott (Assistant Director), Aubrey Naffis (Analyst-in-Charge), and Kristen Joan Anderson made key contributions to this report. Also contributing were Sam Amrhein, Todd D. Anderson, Leia Dickerson, Kaitlin Farquharson, Rich Lipinski, and Ethiene Salgado-Rodriguez.
Why GAO Did This Study

Before a drug can be marketed in the United States, FDA must determine that the drug is safe and effective for its intended use through a review of evidence that a drug sponsor—the entity seeking to market the drug—submits in an NDA. The review is conducted by one of FDA's divisions (17, at the time of GAO's review) that each specialize in a specific group of drug products, such as hematology products. NDA reviews are complex, and may involve not only an initial review, but also reviews of resubmissions if the initial review does not result in approval. Under FDA's PDUFA commitments, FDA's goal is to complete reviews of 90 percent of NDAs within specific time frames linked to key features of the NDAs. GAO was asked to examine NDA review times across FDA's divisions. In this report, GAO examines (among other things) differences between FDA divisions in the key features of the NDAs they review and initial review times, as well as the extent to which key NDA features contribute to these differences. GAO analyzed data from FDA's Center for Drug Evaluation and Research regarding 637 NDAs submitted from fiscal years 2014 through 2018. These data also included biologic license applications submitted to the center. GAO excluded NDAs that were withdrawn by the applicant before FDA completed a review, as well as NDAs for which FDA had not completed a review by March 31, 2019. GAO also interviewed FDA officials about the agency's review process and these review times.

What GAO Found

Four key features of new drug applications (NDA) are linked to the time the Food and Drug Administration (FDA) takes to complete initial reviews of NDAs.
Three key NDA features determine the time frames for initial review that would meet FDA's goals under the Prescription Drug User Fee Act (PDUFA) and its reauthorizations, which authorize FDA to collect user fees from drug sponsors:

- Whether or not the NDA qualifies for the priority review program, which is generally an expedited program for drugs that provide significant therapeutic improvements in the prevention, diagnosis, or treatment of a serious condition when compared to available drugs. The PDUFA goal for review of a priority NDA is 4 months less than for an otherwise similar standard NDA, for which the goal is to complete the review in 10 months.
- Whether or not the NDA involves a new molecular entity (an active ingredient that has not been previously marketed or approved in the United States). The PDUFA goal for review of an NDA with a new molecular entity is 2 months longer than for an NDA without one.
- Whether or not the applicant submits a major amendment (additional or new information, such as a major new clinical study) while the NDA is under review. The PDUFA goal for a review of an NDA may be extended by 3 months if the applicant submits a major amendment.

The fourth key NDA feature is whether or not it qualified for one or more of three other expedited programs for drugs intended to treat serious or life-threatening conditions. GAO's analysis of 637 NDAs submitted from fiscal years 2014 through 2018 indicated that the proportion of NDAs with these key features differed among FDA review divisions. For example, 6 percent of the NDAs reviewed by the dermatology and dental division had a priority designation, compared to 56 percent for the anti-infective division. FDA has reported that some divisions, such as the oncology divisions, generally regulate products for conditions that are more likely to be serious or life-threatening, and, therefore, those products may be more likely to qualify for priority designation and other expedited programs.
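Taken together, the three goal-linked features imply simple arithmetic for the target review time frame. The sketch below encodes the month-based goals stated above (10 months for a standard NDA; 4 months less for priority; 2 months more for a new molecular entity; 3 months more if a major amendment is submitted). It is an illustration of that arithmetic only, not FDA's actual goal-date calculation, which is expressed in days from receipt.

```python
# Illustrative arithmetic for PDUFA target review time frames (in months),
# based on the goals described above; not FDA's actual goal-date computation.

def target_review_months(priority: bool, new_molecular_entity: bool,
                         major_amendment: bool) -> int:
    months = 10           # goal for a standard NDA
    if priority:
        months -= 4       # priority goal is 4 months shorter
    if new_molecular_entity:
        months += 2       # a new molecular entity adds 2 months
    if major_amendment:
        months += 3       # a major amendment may extend the goal by 3 months
    return months

# Example: a priority NDA for a new molecular entity with no major amendment
# has a target of 10 - 4 + 2 = 8 months.
```

Under this arithmetic the targets range from 6 months (priority, no new molecular entity, no amendment) to 15 months (standard, new molecular entity, major amendment), consistent with the range of first-cycle target time frames noted in appendix II.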
GAO found that FDA's divisions differed in the average number of days they took to complete an initial review of NDAs, and these differences largely reflected the key features of the NDAs they reviewed. GAO's analysis shows that the time FDA took to complete an initial review of NDAs was affected by (1) the target time frame for completion of the review under the agency's PDUFA goals, (2) the number of expedited programs for which the NDA qualified, and (3) the division performing the review. GAO also found that the target time frame for review was largely responsible for differences in initial review times. Specifically, NDAs with key features that resulted in shorter target time frames for review under FDA's PDUFA goals had shorter initial review times. Controlling for the effects of these target time frames and the number of expedited programs for which the NDA qualified, GAO found that most of the divisions' average review times were similar to (within 2 weeks of) each other.
Background

BOP Prisons and Population

BOP is a component of DOJ and is responsible for housing male and female federal inmates in a controlled, safe, and humane prison environment while also providing a safe workplace for employees. BOP operates 122 prisons across the United States. These prisons are characterized by five security levels: high, medium, low, minimum, and administrative. Table 1 below provides a description of each of these security levels and the number of prisons at each. According to BOP data, in fiscal year 2019, BOP housed 149,701 inmates in its prisons. During this same time, BOP employed 32,525 employees, of which 15,664 were correctional officers with responsibility for the day-to-day supervision of the inmates.

BOP Issuance of Pepper Spray at Prisons

According to a July 2012 BOP memorandum, BOP was approved to conduct a pilot study on pepper spray. The goals of the pilot were to increase the safety of staff and inmates when responding to incidents involving violence and to prevent injury to staff and inmates due to an assault or serious resistance to staff control. BOP began issuing pepper spray at high security prisons in August 2012 as part of its pilot study. In February 2015, BOP issued a program memorandum requiring employees in high, medium, and administrative security prisons to carry pepper spray. Further, in September 2018, BOP issued a program statement that expanded pepper spray to employees in low security prisons. Figure 1 provides a more detailed timeline of events on the use of pepper spray in BOP prisons, including requirements under the Eric Williams Correctional Officer Protection Act of 2015.

BOP Policies for Issuing and Using Pepper Spray, Providing Training, and Reporting Incidents

Pepper spray is a natural inflammatory agent that can cause coughing, tearing, and discharge of excessive mucus when deployed in the facial region.
According to BOP training guidance and policy, pepper spray is to be used in incidents that require an immediate use of force (for example, an unplanned use of force because of an attack on staff or an inmate) or a calculated use of force in which employees have time to coordinate their response (for example, when an inmate refuses to vacate his or her cell). For calculated uses of force, employees are to consult medical personnel to determine if an inmate has a medical condition that will exempt the inmate from being pepper sprayed. BOP policy states that employees should receive initial training on pepper spray and annual refresher training. In training, employees are taught effective tactical communication for using pepper spray; use of force policy; how to use pepper spray; and the decontamination process, among other topics. According to BOP’s Use of Force and Application of Restraints policy, a prison’s warden may authorize the use of chemical agents, such as pepper spray, only under the following situations: (1) the inmate is armed or barricaded; or, (2) the inmate cannot be approached without danger to self or others; and (3) it is determined that a delay in bringing the situation under control would constitute a serious hazard to the inmate or others or would result in a major disturbance or serious property damage. Pepper spray, moreover, should only be used when all other reasonable efforts to resolve a situation have failed. This policy further states that staff shall appropriately document incidents involving the use of pepper spray using BOP’s Form 583—Use of Force Report. Form 583 contains fields to enter the date and time of the incident; inmates and staff involved; injuries; medical reports; a description of the incident; and other information, such as the existence of video of the incident. 
The form is to be completed by the lieutenant on duty at the time of the incident and sequentially forwarded to the captain, assistant warden, warden, and regional office for review. After a Form 583 is completed, the warden, associate warden, health services administrator, and captain at the prison, collectively, conduct an after-action review of the incident to determine if the pepper spray was used in accordance with policy. Results of the after-action review are documented on BOP’s Form 586—After Action Report. According to BOP headquarters officials, in addition to documenting the results of the after-action review, a completed Form 586 often includes recommendations on how to improve the response to such incidents in the future. Incident data captured on Forms 583 and 586 are maintained in BOP’s TRUINTEL database. Protective Equipment Worn and Tools Used by BOP Employees To enhance BOP employee safety, BOP provides its employees with a variety of protective equipment. BOP generally requires employees working within the secure prison perimeter to carry a radio, body alarm, pepper spray (as appropriate), and keys while on duty. These items are usually checked out from the control center using a chit—a small, brass, circular token inscribed with the BOP employee’s first initial and last name. As of March 2020, some employees also wear stab-resistant vests to help enhance their safety. Although BOP employees are furnished with protective equipment, their first line of defense to protect themselves against an inmate is expected to be their verbal communication with the inmate. BOP policy, training documents, and officials state that effective communication with inmates is essential to officer safety. Figure 2 depicts some of the protective equipment worn by BOP employees operating within the secure prison perimeter. 
Issuance of Pepper Spray for Prison Employees Is Broadly Reported as Effective, and Agency-wide Costs of Pepper Spray Are Not Clear BOP Pilot Study and Staff Indicate That Pepper Spray Has Been Effective in Enhancing Safety of BOP Employees BOP conducted a pilot study on the issuance of pepper spray from August 2012 through December 2013 at selected high-security prisons. To conduct its study, BOP compared data on injuries sustained by staff and inmates from immediate use of force incidents in which pepper spray was used with data from similar incidents in which it was not used. BOP found that pepper spray was effective in helping to reduce containment time—the amount of time it takes to bring an incident under control—and injury rates. Specifically, containment time of incidents decreased from an average of 4.3 minutes when pepper spray was not used to 2.7 minutes when it was used, a reduction of 1.6 minutes; pepper spray was used mostly in incidents involving two or more inmates, such as fights and assaults. When pepper spray was used, the rate at which staff received no injury increased by 9 percent compared to when pepper spray was not used, and the rates at which staff received minor and moderate injuries declined by 60 and 76 percent, respectively. The inmate injury rate rose slightly, by 2.6 percent, primarily in minor injuries, when pepper spray was used; however, BOP concluded this change was not statistically significant. All 90 of the BOP employees we spoke with from United States Penitentiary Atlanta, Federal Correctional Complex Coleman, and Federal Medical Center Devens indicated that pepper spray has been effective in enhancing safety as well as deterring incidents. 
Generally, these employees noted that pepper spray (1) reduces staff injuries because staff do not have to physically engage with inmates as often to break up incidents, (2) strongly deters incidents from occurring, and (3) allows employees to break up incidents more quickly than if they did not have pepper spray. Pepper spray is not as effective for a small percentage of inmates, such as those with mental illness or those who are under the influence of drugs or alcohol, according to some BOP employees. According to BOP data, in 2018, pepper spray was used in 1,680 incidents as follows: 993 incidents in high security prisons; 557 incidents in medium security prisons; 22 incidents in low security prisons; and 108 incidents in administrative security prisons. Some Allegations of Inappropriate Use of Pepper Spray Have Been Resolved, while Others Remain Under Investigation Officials from BOP’s Office for Internal Affairs stated that 179 allegations of inappropriate use of force incidents that involved pepper spray were reported from August 2012 through September 2018. Among these cases, BOP’s Office for Internal Affairs has investigated and closed 86. Among these 86 closed cases, investigators found that 21 involved an inappropriate use of pepper spray and were adjudicated in various ways (see table 2). The remaining 93 allegations were still being investigated as of January 2020. BOP-wide Costs for Pepper Spray Are Relatively Low, and Some Costs Are Commingled with Other Expenses According to BOP data, the total cost for pepper spray–specifically the cost to purchase pepper spray canisters and train employees in its use— was approximately $300,000 in fiscal year 2018, which was relatively small compared to BOP’s overall budget. BOP headquarters officials told us that because pepper spray cost information is maintained at the prison level, it would be overly burdensome for them to independently validate the data. 
Nonetheless, the cost information we received provides a general sense of the extent of costs. Canisters. Officials estimated that a canister of pepper spray costs $7 to $14. Canisters of pepper spray have a shelf-life of approximately 5 years and, according to a BOP headquarters official, are purchased in bulk. As a result, pepper spray does not necessarily need to be purchased on an annual basis. According to BOP officials, each BOP prison contracts with its own supplier rather than using a national contract across all of BOP. BOP headquarters officials told us that pepper spray costs vary across vendors and locations, among other factors. Each BOP prison is responsible for recording and tracking its own budget data on the cost of procuring, training, and issuing BOP employees pepper spray. According to BOP officials, this approach is intended to lower the costs of pepper spray, based on the premise that each prison is able to secure the best market price for pepper spray for its location and for the volume of canisters needed from the vendor. Training. Prison officials told us that pepper spray refresher training is combined with other employee training, making it difficult for them to provide us with specific costs for pepper spray training. All BOP staff are required to take initial and annual refresher training on the use of pepper spray. The initial training lasts about 4 hours, while the annual refresher training lasts about 2 hours. BOP Decided Not to Issue Pepper Spray at Minimum Security Prisons, but Has Not Conducted an Analysis to Support Its Decision BOP issued a program statement in September 2018, which states that pepper spray is not to be issued to employees working at minimum security prisons. However, the senior BOP officials we interviewed—none of whom said they were involved directly in the policy decision—told us they do not believe that documentation explaining the decision not to issue pepper spray at minimum security prisons exists. 
Officials stated that the decision was likely made for several reasons: inmates at minimum security prisons are usually nonviolent offenders; incidents at these prisons are usually very minor and do not require the use of pepper spray; public perception of using pepper spray on inmates at minimum security prisons would not be positive; and canisters of pepper spray would expire before they would be used at minimum security prisons. BOP officials we spoke with also stated that inmates at minimum security prisons are less likely than inmates at other security level prisons to become involved in incidents because they do not want to be reassigned to a higher security prison. We found, nonetheless, that BOP’s TRUINTEL database shows that incidents do occur at these prisons, some of which have resulted in assaults, minor injuries, and death. Based on our analysis of BOP incident data from TRUINTEL, we found that in 2018 there were 47 reported incidents in the seven BOP minimum security prisons. These incidents included assaults on staff and other inmates, sexual harassment, and fighting, among others. Five of the incidents resulted in minor injuries to 10 BOP employees, and 18 incidents resulted in minor injuries to inmates. Further, one incident led to an inmate fatality. Additionally, during our site visits, 56 out of 73 officials across various security levels stated that deployment of pepper spray should be expanded to minimum security prisons because it would give employees an additional tool to protect their safety. BOP headquarters officials told us they believe the agency’s decision to not issue pepper spray to minimum security prisons remains appropriate. Regarding the 47 incidents that occurred at minimum security prisons in 2018, officials stated that many of the confrontational incidents occurring at these prisons can be handled using verbal commands. 
While a decision to not issue pepper spray at minimum security prisons may be justified based on an analysis of relevant information, BOP officials could not provide documentation of such analysis to support its decision. This analysis could include assessing available incident data at minimum security prisons and determining whether any of the incidents could have been prevented or handled more effectively if the officer on duty had been carrying pepper spray. Additionally, BOP employee perspectives on issuing pepper spray at minimum security prisons are another possible source of relevant information that could be included in an analysis to inform BOP’s decision. BOP issued policies in 2015 and 2018 that stated that while the preferred method of resolving issues with inmates is through a verbal intervention, there are instances where other means will be required to restore order. In addition, the policies state that the safety of staff, inmate(s), or others in any dangerous encounter is paramount and that the use of force—including use of pepper spray—may be needed to ensure safety. According to Standards for Internal Control in the Federal Government, management should use quality information to make informed decisions and to evaluate the entity’s performance in achieving key objectives and addressing risks—in this case, the possible safety risks to BOP employees and inmates. By conducting an analysis of available BOP data on incidents that have occurred at minimum security prisons, employee perspectives on the value of having pepper spray at such prisons, and other relevant data, such as cost data, as appropriate, BOP would have useful data with which to inform its decision on whether to authorize pepper spray for employees at minimum security prisons. 
BOP Reported a Number of Challenges to Ensuring Officer Safety and Is Taking Steps to Help Mitigate Them BOP Officials at Selected Prisons Reported Challenges, including Understaffing and Inmate Drug Use, That Affect BOP Employee Safety Four BOP headquarters officials, 18 wardens and their executive staff, and 10 union officials rated the potential impact of 15 selected factors (see app. I) on the safety of BOP employees in prisons. BOP officials rated the following five factors as having the most significant impact on BOP employee safety in prisons: (1) corrections officer understaffing, (2) disruptive inmate behavior due to illegal drugs, (3) inmate use of unauthorized communication devices, (4) inmate gangs, and (5) insufficient corrections training. See figure 3 for a diagram of the top five factors identified across the different groups of BOP officials who responded to the structured questions. Across all three groups, corrections officer understaffing was rated among the top five factors. No other factor was rated among the top five by all three groups. For at least two groups, inmate use of unauthorized communication devices, disruptive inmate behavior due to illegal drugs, and insufficient information-sharing among managers and staff were rated among the top five factors. When asked to identify any additional challenges beyond the selected factors we included, BOP officials we interviewed stated they were not aware of other challenges. BOP Headquarters and Prison-Level Officials Are Taking Steps to Address Reported Challenges BOP officials told us that they are taking steps to mitigate some of the challenges that officials we interviewed said are affecting employee safety in prisons. Officials identified the following: Corrections officer understaffing. Corrections officer understaffing refers to the staffing level—usually measured by the inmate-to-staff ratio—being too low to adequately prevent violence and maintain a safe prison. 
Among the BOP headquarters officials, wardens and their executive staff, and union officials we interviewed, two underlying reasons generally cited for understaffing conditions were hiring freezes and difficulty recruiting new correctional officers due to low starting salaries. According to the BOP Director’s testimony before the Senate Judiciary Committee in November 2019, building adequate staffing at BOP prisons is one of her highest priorities. The Director stated that BOP established 10-percent recruitment, relocation, and retention incentives for hard-to-fill positions; established a higher entry pay scale for experienced new correctional officers; established a 5-percent nationwide retention incentive for retirement-eligible employees; and used 3,000 temporary positions to help ensure seamless succession planning by avoiding the lag to hire someone to fill a position. We issued a report in December 2017 on BOP’s use of retention incentives. At that time, we found that BOP had taken steps to determine workforce needs and how to fill those needs but had not strategically planned for and evaluated its use of retention incentives. We recommended that BOP include in its strategic human capital operating plan (1) human capital goals; and (2) strategies on how human capital flexibilities, including retention incentives, will be used to achieve these goals. We also recommended that BOP evaluate the effectiveness of its use of retention incentives to determine whether the incentives have helped achieve BOP’s human capital goals or if adjustments in retention incentives are needed. DOJ concurred, and BOP implemented our first recommendation by drafting a human capital plan with goals and strategies for how retention incentives could be used to meet those goals. To implement our second recommendation, BOP conducted an analysis of its use of retention incentives and their effect on retaining BOP employees. Disruptive inmate behavior due to illegal drugs. 
According to BOP officials, some inmates obtain illegal synthetic drugs by mail. These drugs are sprayed onto inmate mail and other documents before being sent to the inmate in prison. Inmates burn the mail to get high off the synthetic drug. In addition to the threat to the inmate population posed by inmates who are behaving under the influence of the drugs, entry of these drugs can expose staff—including those handling the mail—to hazardous chemicals. In an effort to stop illegal drugs from entering prisons by this method, according to BOP officials we spoke with and the BOP Director in her November 2019 testimony, some prisons are photocopying mail before it is delivered to inmates. For example, officials at one prison we visited told us they photocopy inmates’ mail. Further, a BOP headquarters official stated that BOP is piloting various mail-scanning technologies aimed at reducing the number of drugs entering prisons. Inmates’ use of unauthorized communication devices. According to BOP officials and the BOP Director’s testimony, inmates’ possession of cell phones is a major problem. BOP officials stated that, in an effort to stop the unauthorized use of cell phones, some prison officials are using specialized equipment to detect cell phone usage and are exploring options to use cell phone jammers. We reported in September 2011 that BOP and selected state officials told us that cell phones were a major security concern because they allow inmates to hold unmonitored conversations, for example, to sell drugs or harass individuals. We recommended that BOP’s Director formulate evaluation plans for cell phone detection technology to aid decision-making, require BOP staff to use these plans, and enhance regional collaboration with states. DOJ concurred with our recommendations, and BOP addressed them by developing policy and testing procedures to improve their ability to evaluate new technology. 
BOP also established plans to enhance collaborative information-sharing with state and local agencies on combating cell phone smuggling and use. Conclusions Working in a federal prison presents inherent risks. Since 2018, BOP has authorized the use of pepper spray at all prison security levels with the exception of minimum security prisons. BOP’s issuance of pepper spray was supported by evidentiary information—that is, its pilot study indicated that pepper spray was an effective tool for enhancing staff safety. Notably, BOP’s current policy on pepper spray allowance does not extend to minimum security prisons. While BOP was not able to provide us with a documented analysis behind the nonissuance to minimum security prisons, the officials we interviewed made several arguments in support of the decision. While their arguments may hold merit, our limited analysis found evidence that calls the underlying decision into question. To the extent that officials are operating under assumptions not fully examined, BOP is missing a potential opportunity to enhance the safety of its correctional officers. We believe that our concerns are amplified by our finding that a majority of the BOP frontline employees we interviewed want pepper spray expanded to minimum security prisons. Just as the decision to issue pepper spray at other security levels was based on pilot information, BOP has an opportunity to bring forward a better case, either for or against issuance. By analyzing available data on incidents that have occurred at minimum security prisons, such as determining whether any of them could have been prevented or handled more effectively with pepper spray, and by considering BOP employees’ perspectives, BOP could better inform its decision whether to authorize pepper spray for employees at these prisons. 
Recommendation for Executive Action We are making the following recommendation to BOP: The Director of BOP should conduct an analysis, using available incident and cost data, and other information as appropriate, to determine if the current decision to not issue pepper spray to minimum security prisons should remain in effect. (Recommendation 1) Agency Comments We provided a draft of this product to DOJ, including BOP, for review and comment. DOJ concurred with our recommendation and told us they had no comments on the draft report. DOJ did provide technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Attorney General, the BOP Director, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8777 or goodwing@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Appendix I: BOP Employee and Officer Safety Structured Questions Throughout our audit work, we asked Bureau of Prisons (BOP) officials we interviewed at headquarters and selected prisons about factors that impact the safety of BOP employees, as well as efforts, if any, they had made to mitigate those factors. We specifically targeted three groups of BOP personnel—BOP headquarters, wardens and their executive staff, and union officials—to rate the impact of 15 selected factors on employee safety. We then analyzed their responses, by group and by prison security level, and identified the top five factors that these BOP officials rated as having an impact on employee safety. 
We received responses from four BOP headquarters officials, 18 wardens and their executive staff, and 10 union officials. Officials were provided the structured questions (see below) in advance of the site visit, and we recorded their responses during the interviews. Appendix II: Responses to Structured Questions We held one interview with four Bureau of Prisons (BOP) headquarters officials, nine interviews with 18 wardens and their executive staff, and seven interviews with 10 union officials about 15 selected factors that impact the safety of BOP employees, using a set of structured questions (see app. I). These officials’ responses, which are broken down by group and security level, are presented in the figures below. Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Brett Fallavollita (Assistant Director), Sonja S. Ware (Analyst-in-Charge), Anthony DeFrank, and Emily Martin made key contributions to this report. Willie Commons III, Elizabeth Dretsch, Eric Hauswirth, and Susan Hsu also contributed to this work.
Why GAO Did This Study Within the Department of Justice, BOP is responsible for housing male and female federal inmates at 122 prisons in a safe environment for staff and inmates. Pepper spray is one of the methods BOP employees use to enhance their safety. The Eric Williams Correctional Officer Protection Act of 2015 includes a provision for GAO to examine certain matters related to the issuance of pepper spray to officers and employees in BOP prisons. This report addresses (1) what is known about the effectiveness and cost of issuing pepper spray in BOP's high, medium, low, and administrative security prisons; (2) BOP's position on expanding the issuance of pepper spray to minimum security prisons and the support used to make this decision; and (3) the challenges, if any, BOP officials identified as affecting the safety of BOP employees and the steps, if any, BOP has taken to address them. To address these objectives, GAO reviewed BOP policies, guidance, incident reports, and cost data on pepper spray use and interviewed knowledgeable officials at BOP headquarters and nine prisons at three locations, selected to represent varying security levels and other characteristics. What GAO Found Pepper spray is an effective tool for reducing the time needed to control incidents involving inmates and for reducing any related injury to Bureau of Prisons (BOP) employees, according to a 2012 BOP pilot study and BOP officials interviewed by GAO. BOP first issued pepper spray to employees in high security prisons in August 2012 and to medium, low, and administrative security prisons in subsequent years. Officials estimated that a canister of pepper spray costs $7 to $14. However, the total cost to purchase pepper spray and train employees on its use is not readily available because purchases are tracked at the prison level, and pepper spray training costs are commingled with other training costs. BOP determined that it would not issue pepper spray to minimum security prisons. 
BOP headquarters officials stated that this decision was made because inmates at such prisons are usually nonviolent offenders, among other reasons. However, GAO's analysis of BOP data found 47 reported incidents that included assaults on staff and other inmates across BOP's seven minimum security prisons in 2018. In addition, 56 of 73 officials GAO interviewed said pepper spray should be expanded to minimum security prisons. BOP officials stated they were not aware of an analysis of incident data or other information to support its decision but said that the decision remains appropriate. However, by analyzing available data on incidents that have occurred at minimum security prisons, BOP could better inform its decision on whether to issue pepper spray to employees at minimum security prisons. BOP officials rated the following factors as having the most significant impact on BOP employee safety, as shown in the figure below. BOP officials stated that they are taking steps to mitigate factors impacting safety. What GAO Recommends GAO recommends that BOP conduct an analysis to determine if its decision to not issue pepper spray to minimum security prisons should remain in effect. The Department of Justice concurred with the recommendation.
gao_GAO-20-28
gao_GAO-20-28_0
Background VR&E Eligibility and Process To be entitled to VR&E services and related benefits, veterans generally must (1) have at least a 20 percent service-connected disability rating from VA and (2) be in need of rehabilitation because of an employment handicap. Entitled veterans may generally receive up to 48 months of vocational rehabilitation services and up to an additional 18 months of employment services, which include counseling, and placement and postplacement services. If a veteran is entitled to receive VR&E services and found to be employable, a counselor is to work with the veteran to identify a suitable employment goal, and to incorporate that goal and the needed services and benefits to achieve it into a vocational rehabilitation and employment plan (hereafter “employment plan”). To develop an employment plan, the counselor and veteran review labor market information for jobs within the veteran’s identified abilities, aptitudes, and interests that will not aggravate his or her service-connected disability or disabilities. After assessing obstacles to employment, they agree on a written employment plan that describes the employment goal and the services needed to achieve it. Common services provided by VR&E are funding for higher education, career counseling, and short-term employment services like job search assistance. Counselors have the authority to approve a wide variety of educational programs and may approve employment plans that have an annual cost of up to $25,000. VR&E Organization Within VA’s Veterans Benefits Administration (VBA), the VR&E central office is responsible for overseeing the VR&E program, including training staff and monitoring their work to ensure high performance and consistency. 
Among other elements, VR&E’s quality assurance efforts entail reviewing a subset of case files on a monthly basis to ensure that the entitlement decisions, development of plans, and delivery of services are performed and documented in accordance with VA regulations, VR&E’s operations manual, and other directives. VR&E services are provided by field staff at 56 regional offices and about 300 satellite locations. The satellite locations include college campuses to help veterans successfully complete their training and find employment, as well as military sites to help servicemembers with disabilities as they begin their transition to veteran status and the civilian workplace. VR&E field positions include (1) VR&E officers who manage the program and its staff in each region; (2) vocational rehabilitation counselors who work directly with veterans to assess their entitlement, develop their employment plans, and manage their progress; and (3) staff to support the administration of the program. As of June 2019, 1,394 field staff members were administering the VR&E program, of whom nearly 75 percent (1,026) were counselors. From September 2013 to June 2019, VR&E’s total caseload peaked in fiscal year 2016 with almost 135,000 participants (see fig. 1). Over the same period, the number of counselors changed little until 2019, when VA hired an additional 88 counselors in response to a provision in an appropriations law suggesting that the agency aim to serve 125 veterans or fewer per full-time equivalent counselor. The increase in staffing helped reduce the average caseload, which ranged from 130 to 141 cases per counselor during fiscal years 2013 through 2016, to 113 cases in June 2019 (see fig. 2). 
Counselors Generally Considered Common Factors When Developing Veterans’ Plans but Noted Inconsistent Application of Those Factors Likely Occurs Counselors in Our Review Generally Considered a Set of Common Factors When Developing Plans VR&E counselors consider a set of common factors, including the veteran’s disability or disabilities, interests, and local labor market conditions, when developing and approving veterans’ employment plans. Program regulations require an assessment of some of these factors when the veteran is initially evaluated. VR&E quality review data from fiscal years 2016 through 2018 suggest that counselors generally documented certain plan considerations during the evaluation. For example, in 98 percent of the 1,080 cases VA reviewed for accuracy in fiscal year 2018, counselors documented the veteran’s service needs based on their functional limitations. In 95 percent of cases, counselors documented that they assessed the veteran’s abilities, aptitudes, and interests. Lastly, in nearly 99 percent of cases, counselors documented that the veteran was involved in vocational exploration activities such as career searches and labor market research. During our more focused review of how counselors developed plans for a non-generalizable sample of 34 VR&E case files, we found that counselors generally documented a set of common factors. Consistent with program guidance stipulating that counselors are to consider a veteran’s service needs, abilities, aptitudes, and interests, we identified common consideration factors including the veteran’s functional limitations from disability, prior education, aptitude results, and career interests. Table 1 presents these factors and the number of files in which the factors were documented. Our case file review found that counselors also documented the estimated cost of the VR&E employment plan in 30 of the 34 files. 
According to testimony from a veteran service organization, many VR&E participants are dissuaded by their counselor from pursuing education at a top tier university because of cost. VA’s VR&E operations manual states that if more than one local training or educational facility will meet a veteran’s needs, counselors must justify their decision to select a school that is more expensive than the least costly one. Counselors are not required to document all of the educational facilities that would serve a veteran’s needs; therefore, we could not determine the extent to which counselors chose the lowest cost facility. Counselors we interviewed in each of the three regional offices we visited said that while mindful of cost, they strive to develop employment plans that best meet the needs of the veteran. For example, counselors at one regional office described a situation in which a higher priced school was chosen because the school offered smaller class sizes that better suited the veteran’s particular mental health conditions. Of the 34 files we reviewed, the annual plan cost exceeded $25,000 in 3 cases. Counselors we interviewed said that they considered the veteran’s career interests but weighed these interests against other factors, such as the veteran’s functional limitations and information about the local labor market. All 34 plans we reviewed aligned with the veteran’s stated career goals, though in some cases the veteran’s goals evolved after talking with the counselor about alternative occupations. In a few instances among these cases, the final plan’s career goal was notably different from the initial goal that the veteran had stated on the program intake form. Table 2 presents examples of how a plan can evolve as a result of career exploration activities and conversations between the veteran and their counselor. 
Counselors in Our Review Stated They Strive to Develop Individualized Plans but Acknowledged That Some Unintended Differences in Plans for Similarly-Situated Veterans Likely Occur Counselors we interviewed described how veterans’ employment plans are individually designed to suit a veteran’s needs and, as a result, may differ from one another even when veterans have similar goals, characteristics, and circumstances. In some instances, two veterans may appear to be similar but may actually differ in some critical respect that results in appropriate variation across plans. A common difference among veterans is the geographical location where they are seeking employment. Counselors described how a veteran may be encouraged to explore an occupation with many job opportunities within a specific region, while a veteran with similar characteristics and interests living in a different area may be dissuaded from pursuing the same occupation for a lack of job opportunities in the area for that occupation. Local labor markets may also drive the need for a certain type of educational credential. For example, counselors said that some veterans will be competitive in certain labor markets with a bachelor’s degree, while others living in a different region with a more educated population may need a master’s degree. Likewise, they said that certain occupations, such as certified public accountants or school teachers, may require different forms of credentialing in different states. Other characteristics of individual veterans may also cause counselors to develop different plans for veterans who appear to have similar circumstances. One counselor we interviewed described a scenario in which one veteran who received a high score on an aptitude test for reading comprehension skills might obtain a certain employment plan while another veteran who received a much lower score would be steered toward a different plan. 
If the veterans were to compare their final plans but were unaware of the differences in their aptitude test scores, they could perceive inconsistent treatment. Counselors also described how conversations they have with veterans as they work to develop employment plans can reveal other character traits, such as interpersonal skills, which can lead them to suggest different plans to two otherwise similar veterans. The counselors said that such conversations play an important role in developing successful plans. However, counselors we interviewed in each of the three regional offices we visited acknowledged that unintended variation likely occurs across plans developed for similarly-situated veterans. They explained that the reasons for such potential inconsistency can include (1) the prominent role professional judgment plays in the program and the potential for unintended bias, (2) counselors’ different VR&E experience levels, and (3) variations in regional offices’ policies. Judgment and bias. The counseling role is inherently subjective and requires counselors to use their professional judgment in each case. The VR&E operations manual describes counselors’ responsibilities in broad terms, stating that counselors are to guide and assist the veteran in making an informed decision on an appropriate plan based on the veteran’s abilities, aptitudes, and interests. According to counselors we interviewed, professional judgment enables them to develop a plan that is best suited for the veteran’s unique needs, although it also introduces the potential for personal bias and inconsistent plans for veterans with similar circumstances. For example, a counselor we interviewed cited a case in which he saw the need to develop a plan that selected a school closer to the veteran’s home over other, less costly options because of his sensitivity to the veteran’s childcare responsibilities. 
Another counselor in the same office may not have seen the need for that accommodation. Further, counselors we interviewed said that some of their colleagues may be more comfortable suggesting that a veteran reconsider his or her career goal given circumstances such as the veteran’s disabling conditions or the local labor market. They explained that while some counselors would be hesitant to make the veteran unhappy, and possibly angry, other counselors would be more inclined to work through the conflict. Counselors said that they try to mitigate inconsistency by asking their fellow counselors to weigh in on these sorts of judgments, either informally, or at periodic information-sharing meetings. Counselor experience. Although all counselors have at least a master’s degree in rehabilitation counseling or a related field, differences in counselors’ levels of VR&E experience may affect their approach to plan development. Counselors at two regional offices noted that the focus of the VR&E program has oscillated between education and employment, with employment being the current primary focus. They said, as a result, a counselor’s general approach to plan development could be influenced by the prevailing focus that existed at the time he or she was hired. Counselors also said that, in general, counselors with more experience will tend to approach plan development differently than a less seasoned counselor because they will apply lessons learned from serving many other veterans. For example, one counselor said that, based on years of prior experience and observation, he has developed a better understanding of which local educational programs offer veterans the best chance for success and which do not. He said while he is able to apply his institutional knowledge and experience to do what is best for veterans, a less experienced counselor may not have the same level of knowledge, which could lead to inconsistent plans for veterans with similar circumstances. 
According to counselors we interviewed, because of the recent hiring of new counselors to meet caseload targets, differences in VR&E experience among counselors may be more pronounced at this time. Regional office variation. Differences in administrative policies specific to individual regional offices may also contribute to inconsistent plan development. For example, according to program officials, to ensure the soundness of employment plans in the local labor market, some regional offices require management to approve plans involving a master’s degree, while others do not. Counselors in one region told us that requiring management approval might dissuade a counselor from developing a plan focused on a master’s degree because of the time the extra step would require. They acknowledged that this sort of approval policy could cause inconsistency across counselors’ plans and also cause a discrepancy in the number of master’s degree programs being approved at one regional office versus another. In general, the large number of variables involved in the development of employment plans may complicate the ability to determine the extent to which differences among counselors lead to inconsistent plans among veterans. Counselors we interviewed said that given the subjective nature of the program, such inconsistency is likely. However, counselors cautioned against making the plan development process overly structured and formulaic. In their view, a more restrictive approach would eliminate the flexibility that they need to generate plans that suit each veteran’s unique needs. VA Trains Counselors and Monitors Their Performance but Does Not Monitor the Consistency of Employment Plans VA Trains Counselors to Develop Sound Employment Plans for Veterans VA trains counselors on developing sound and complete employment plans for veterans. 
New counselors receive a series of training courses that are developed and deployed through VR&E’s central office, and then receive additional courses and mentorship that are delivered through the regional offices. Course topics for new counselors include understanding vocational impairments, developing a rehabilitation plan, and documenting a narrative of the plan. The formal training emphasizes that plans should be individualized to accommodate the veteran’s rehabilitation needs, abilities, aptitudes, and interests. Collectively, these trainings take up to 80 hours. As of 2019, experienced counselors—those on the job for at least a year—take up to 20 hours of refresher training each year determined according to how they score on an annual assessment. The assessment evaluates counselors’ technical competencies such as knowledge of relevant regulations, vocational assessment and evaluation, and case management. If a counselor scores low on a particular topic, related courses are identified for the counselor to complete. In designing training for counselors, VA followed principles for strategically developing training that are consistent with a related guide for federal managers. For instance, VA obtained and considered input from multiple sources—including field advisory committees, quality assurance reviewers, and internal site visit auditors—to identify needs for counselor training. For example, questions and input from the field about a policy clarification led to a training about veterans’ entitlement to VR&E services. In addition, VA built flexibility into its training curricula for counselors so they could receive training on emerging topics such as implementing new policies throughout the year as needed. VA also has evaluated its training efforts in multiple ways. For example, it has evaluated training courses by surveying counselors to get immediate feedback and by checking with attendees and their supervisors to gauge improvements in skills and knowledge. 
VA Checks If Plans Are Complete but Does Not Monitor Consistency among Counselors VA monitors employment plans to ensure that they are complete, but does not check for consistency among counselors for veterans with similar circumstances. Quality reviews occur nationally as well as locally at each regional office. The purpose of the national reviews is to monitor the quality of regional offices’ work such as plan development, whereas the purpose of the local reviews is to help evaluate the performance of individual counselors. Nationally, a centralized quality assurance team monitors the completeness of regional offices’ VR&E entitlement decisions, employment plans, and service delivery by reviewing a randomly-selected subset of case files from each regional office on a monthly basis. Among other criteria, reviewers check whether a veteran’s plan identified goals and objectives, included an employment focus, and incorporated the veteran’s need for various services. Locally, VR&E officers or their designees are to review plans using the same criteria. Officers are supposed to review at least three cases per counselor per quality category (e.g., accuracy of evaluation, planning and rehabilitation services) per quarter. Reviewers do not check for consistency among counselors for similarly-situated veterans, at either the national or local levels. VR&E officials we interviewed identified challenges to completing and monitoring local reviews, but VA is addressing these challenges. According to the VR&E officer in each of the three regional offices we visited, it is difficult to complete local reviews given system limitations and their other job responsibilities such as implementing case management initiatives. They said that it is likely that some officers are not completing the required reviews while others are conducting them with varying degrees of thoroughness. Historically, VA has not identified the specific cases VR&E officers are supposed to review locally. 
Consequently, VA could not determine if VR&E officers conducted the requisite number of reviews or whether officers were selecting cases for quality review uniformly and fairly. In June 2019, during the course of our review, VA began a pilot in five regional offices to centrally and systematically identify the cases officers are to review to gauge individual counselors’ performance. The new process and system are intended to help officers conduct and track local reviews as well as to help VA monitor the completion of local reviews. VA plans to expand this process to all regional offices in fiscal year 2020. Although VA trains counselors to develop complete employment plans and reviews the completeness of some plans, it does not monitor the consistency of plans among different counselors. The code of professional ethics for rehabilitation counselors calls for counselors to be fair in the treatment of all clients and to provide appropriate services to all. In addition, one of the objectives of VR&E’s central office is to provide training and guidance to ensure high performance and consistency among field staff. Several veteran service organizations have testified at congressional hearings that VR&E is marked by inconsistent treatment of similarly-situated veterans. For example, one testimony cited veterans who allegedly received different plan approvals, such as access to graduate level education, merely on the basis of their counselor. Unlike for VA staff members who work on disability claim decisions, VA does not compare the output of VR&E counselors by, for example, analyzing responses to identical hypothetical cases for training or monitoring purposes. As a result, in addition to missing a training opportunity for counselors about employment plan development, VA does not know the degree to which inconsistency among counselors occurs. 
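One way such a comparison could be quantified is sketched below: assuming VA collected each counselor's recommended plan type for the same set of hypothetical cases, per-case agreement is simply the share of counselors matching the most common answer. The counselor names, plan labels, and responses here are illustrative assumptions, not VA's method or data.

```python
# Hypothetical consistency check sketch. All names and responses are
# illustrative assumptions; this is not VA's method or data.
from collections import Counter

# counselor -> recommended plan for each of three hypothetical cases
responses = {
    "counselor_a": ["bachelors", "certificate", "masters"],
    "counselor_b": ["bachelors", "certificate", "bachelors"],
    "counselor_c": ["bachelors", "associate", "masters"],
}


def agreement_per_case(responses: dict) -> list:
    """Share of counselors matching the modal answer, case by case."""
    rates = []
    for answers in zip(*responses.values()):
        modal_count = Counter(answers).most_common(1)[0][1]
        rates.append(modal_count / len(answers))
    return rates


rates = agreement_per_case(responses)
print([f"{r:.2f}" for r in rates])  # ['1.00', '0.67', '0.67']
```

Low agreement on a particular case would flag it as a candidate for targeted training or clarified guidance, which is one way results of such an analysis could be acted on.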
For example, the agency does not know the extent to which counselors would agree to a particular veteran’s pursuit of a master’s degree through VR&E. Moreover, VA cannot respond in an informed way—and take mitigating steps if warranted—to criticisms of subjectivity in the program. VR&E officials explained that the agency has not yet conducted such a comparative analysis because of other priorities, but agreed that it could do so particularly through its training efforts. Conclusions VA uses several training and monitoring practices to help ensure that VR&E counselors develop employment plans that help veterans with disabilities obtain and sustain employment. In approving these plans, VR&E counselors use their judgment and discretion fostered in part by their formal education and professional experience in vocational rehabilitation. While our review of a non-generalizable sample of 34 cases found that counselors generally considered common factors in developing employment plans, counselors we interviewed nevertheless acknowledged that counselors may apply the factors differently because of their varying backgrounds and experience levels. The variability of counselors’ experiences and veterans’ circumstances may make it difficult to determine the full extent of any inconsistency. However, taking steps to examine the prevalence and type of any inconsistency among counselors who, for example, consider the same hypothetical case, would better position VA to mitigate any unfair differences in plans for veterans with similar circumstances. An understanding of how effectively and consistently counselors assist veterans will be even more important in the coming years as VA fully integrates the new counselors hired to decrease the average caseload. Recommendation for Executive Action The Secretary of VA should ensure that the Director of VR&E assesses the consistency of VR&E plans among counselors and takes mitigating steps if results warrant. 
For example, as part of its training efforts, VA could have counselors respond to identical hypothetical veteran cases and, if unfair inconsistencies in plans result, the agency could enhance training on plan development. (Recommendation 1) Agency Comments We provided a draft of this report to VA for comment, and its written comments are reproduced as appendix I in this report. VA concurred with our recommendation and said that VBA will develop a consistency study of VR&E plan development. It emphasized that no two veterans are the same. It also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Veterans Affairs, and other interested parties. In addition, the report will be available at no charge on the GAO website at https://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or curdae@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. Appendix I: Comments from the Department of Veterans Affairs Appendix II: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the individual named above, Mark Glickman (Assistant Director), Joel Green (Analyst in Charge), and David Perkins made significant contributions to the report. In addition, Jennifer Cook, Holly Dye, Alex Galuten, Monica Savoy, Mimi Nguyen, Almeta Spencer, Jeff Tessin, Rosemary Torres Lerma, and Sonya Vartivarian made key contributions.
Why GAO Did This Study VA's VR&E program helps veterans with service-connected disabilities obtain and maintain suitable employment. VR&E participants work with vocational counselors to develop career goals and employment plans. However, some veteran service organizations have questioned the consistency with which participants are treated by counselors in developing these plans. GAO was asked to review how VR&E vocational counselors work with participants to select employment plans, and VA's efforts to ensure high quality and consistency. This report examines (1) the factors that vocational counselors considered when developing VR&E participants' plans and how consistently they applied those factors, and (2) the extent to which VA trains and monitors vocational counselors to ensure a consistent, high-quality approach to helping veterans develop plans. GAO analyzed VR&E quality review data from fiscal years 2016 through 2018; reviewed a random, non-generalizable sample of 34 VR&E case files from 2019; reviewed relevant federal laws, regulations, and VA policy; and interviewed VR&E counselors and other program officials. What GAO Found The Department of Veterans Affairs' (VA) Vocational Rehabilitation and Employment (VR&E) counselors in GAO's review generally considered a set of common factors when developing plans to help veterans with disabilities obtain employment, but counselors explained that inconsistent application of those factors likely occurs. These factors included the veteran's disability, his or her interests, and local labor market conditions. The 34 VR&E plans GAO reviewed showed that counselors generally considered and documented these factors (see table). Counselors in each of the three regional offices GAO visited said that plans are individualized to suit the veteran's needs and, as a result, will differ because each veteran's case is unique. 
Nonetheless, these counselors acknowledged that some veterans with similar circumstances likely receive different types of plans given differences in counselor judgment and experience. VA trains and monitors counselors to develop complete VR&E plans but does not assess the consistency of plans across counselors for veterans with similar circumstances. VA's training for VR&E counselors emphasizes that plans should accommodate each veteran's individual needs, abilities, aptitudes, and interests. In designing training for counselors, VA followed principles identified by GAO for strategically developing training. VA monitors the completeness of VR&E plans through national and regional quality reviews that check, among other elements, whether plans have an employment focus and include needed services. However, these quality reviews do not assess the consistency of plans developed by different counselors. VR&E officials explained that the agency has not yet conducted such an analysis because of other priorities, but agreed that it could do so. One of the objectives of VR&E's central office is to provide training and guidance to help ensure consistency among field staff. Assessing consistency across counselors would better position VA to mitigate any unfair differences in plans for similarly-situated veterans. What GAO Recommends GAO recommends that VA assess the consistency of VR&E plans among counselors by, for example, comparing counselors' responses to identical hypothetical cases, and take mitigating steps if warranted. VA concurred with the recommendation and planned to develop a consistency study.
gao_GAO-20-17
Background Purpose of the LUCA Program A complete and accurate address list is the cornerstone of a successful census because it identifies all living quarters that are to receive a census questionnaire and serves as the control mechanism for following up with households that do not respond. If the address list is inaccurate, the Bureau may miss people, count them more than once, or include them in the wrong locations. As figure 1 shows, the Bureau’s approach to building complete and accurate address lists consists of a series of operations conducted throughout the decade. These operations include partnerships with the United States Postal Service (USPS) as well as tribal, state, and local governments. Other federal agencies, local planning organizations, the private sector, and nongovernmental entities may also contribute to these operations by providing the Bureau with updated address information as part of the Bureau’s continuous maintenance of the MAF. Like other information collected for the census, data collected through the LUCA program are subject to protections under Title 13 of the U.S. Code. This means that data collected from the census cannot be used for non-statistical purposes or shared with unauthorized parties. The fundamental structure of LUCA has not changed since the Bureau first implemented it during the 2000 decennial cycle. The Bureau implements LUCA once every 10 years, near the end of the decennial census cycle. The Bureau invites governments to review the MAF for their respective areas. These governments must abide by Title 13 by protecting the address data from disclosure. Participating governments can then submit address updates for inclusion in the address list before enumeration. The Bureau can accept or reject these address updates, which participants then have the opportunity to appeal through an appeals office that OMB administers and that the Bureau funds (see figure 2). 
While the structure of the program is largely the same as in previous enumerations, the Bureau has made some changes to promote participation and reduce perceived participation barriers. For example, in 2010, the Bureau extended review timelines from 90 to 120 calendar days in response to LUCA participants’ feedback that they did not have enough resources to complete a sufficient review within the Bureau’s original time frame. Additionally, in the 2010 and 2020 cycles, the Bureau permitted state governments to participate in LUCA. State participation can provide coverage for local governments that may not have the resources to participate in the operation. Moreover, following the 2010 Census and in response to our prior recommendations, the Bureau assessed LUCA’s contribution to the final census population counts. Doing so improved the Bureau’s ability to determine how helpful LUCA was in gathering address information from participants across the nation. Procedures for Building the Address List and Counting Residents In September 2014, the Bureau decided that it would only need to verify addresses door to door in those areas it could not resolve with the aid of computer imagery and third-party data sources—what the Bureau calls in-office address canvassing. The Bureau used this method of address canvassing to reduce the costs of the labor-intensive “in-field address canvassing”, which cost about $450 million during the 2010 Census. As part of this effort, the Bureau planned to rely on in-office address canvassing as the primary method for validating address updates submitted during LUCA 2020. After the Bureau builds its address list, it must enumerate residents and follow up with them as necessary. 
The Bureau implements Non-response Follow-up, historically one of the most cost-intensive operations of the decennial census, after the self-response period so that it can (1) determine the occupancy status of individual nonresponsive housing units and (2) enumerate them. The Bureau allows up to six enumeration attempts for each nonresponsive housing unit or case. Any addresses added from LUCA submissions become eligible to be enumerated. Additional Sources of Address Data Other sources of address data complement the Bureau’s data-collection efforts. For instance, according to experts, systematic collection of address data is now common at the state and local level, which allows many governments to readily provide address information to the Bureau. Since 2013, the Bureau has also received address updates throughout the decade from the USPS as well as from tribal, state, and local governments through its Geographic Support System (GSS) Program, increasing the frequency of address updates. Outside of the auspices of Title 13-protected census data, states and federal agencies have worked toward making a national address database publicly available. For example, the National Address Database, managed by the U.S. Department of Transportation as part of its work with the Bureau on federal address data issues, is an open-source database that enables governments to view and submit their address information, including geospatial coordinates, for use across governmental agencies. In 2015, we reported on the National Address Database and Title 13, suggesting that Congress consider assessing statutory limitations within Title 13 on address data to foster progress toward such a national address database. However, there had been no legislative action as of the time of this report. 
The Bureau Generally Implemented LUCA in Accordance with Its Plan, but Some Decisions Increased Fieldwork The Bureau Met Nearly All Milestones, Conducted Outreach, and Obtained Participation According to Its Operational Plan We found the Bureau’s implementation of LUCA 2020 largely followed its operational plan, including key milestones, as well as outreach and training objectives. Milestones. Through July 2019, the Bureau had met its milestones laid out in the LUCA 2020 Operational Plan as summarized in table 1, with two minor changes that provided participating governments additional time. First, in starting up the program, the Bureau was able to mail out advance notice packages a month earlier than specified in the 2020 Operational Plan to give potential participants additional time to assess the resources they would need to participate before receiving the formal invitation. Secondly, the Bureau extended the deadline for participating governments to submit address updates because natural disasters affected large regions of the country. Outreach and training. The Bureau performed outreach and training according to its LUCA 2020 Operational Plan. For example, the Bureau provided technical training workshops for government representatives, including training on address privacy laws. The Bureau Implemented Its Planned Participation Options for LUCA, but the Bureau’s Participation Metric Excludes Useful Information The Bureau implemented a streamlined participation process and received address updates from participating governments covering 96 percent of the estimated population of the country. Based on the Bureau’s post-2010 recommendations to improve LUCA for the 2020 Census, the Bureau did not ask participants to provide their full address lists (an option in 2010), but invited governments to review only the Bureau’s address list and offer updates. 
As shown in table 2, the Bureau saw little change from the 2010 Census in the number of governments invited to participate, registering to participate, and responding. The changes in participation options prevent precise analysis of participation beyond counting the number of governments that responded in some fashion. Moreover, in 2000, the Bureau implemented LUCA with two phases of data collection—one for rural addresses and one for urban, with some governments eligible to provide address updates during both phases. This differs from later decennials, which condensed LUCA into a single phase. However, the Bureau’s measure for government participation excludes important information about the degree of that participation. For instance, only 8,389—or 21 percent of the nearly 40,000 tribal, state, and local governments—participated in LUCA 2020. According to Bureau officials and subject matter specialists we interviewed, address data are generally improved when both a state and another level of government participate in LUCA, even if the respective address updates cover some of the same addresses. According to the Bureau, such redundancies can help address the possibility of coverage gaps in any one government’s address updates. Governments at the more local level can apply their targeted, on-the-ground intelligence in cases where a state government may lack the resources and data to cover the entire population as part of its review of the MAF. As figure 3 shows, the degree of local participation in LUCA varied greatly across the country. For example, while state governments in New Mexico and Oklahoma participated, many counties and local governments (e.g., towns and cities) within those states did not. Moreover, states like Texas and South Dakota lacked any form of coverage in LUCA for many of their counties. In contrast, large parts of the west coast and the southeast benefitted from participation in LUCA by governments at multiple levels. 
The Bureau maintains participation data on government type and shows information similar to figure 3 on its external website. However, the percentage of the population covered by at least one form of government submission—identified by the Bureau as a primary performance measure—does not identify participation in this way, nor does it distinguish whether participating governments represent urban or rural geographic areas. Bureau officials told us that state-centric participation was a focus for LUCA 2020 and that they encouraged local governments to coordinate with state governments on their address lists. The purpose of the legislation that prompted LUCA was to help ensure accuracy of the census by permitting various levels of government to review the Bureau’s address data. We have previously reported that a program’s measures should be consistent with the program’s initial (or updated) statutory mission. The Census Address List Improvement Act of 1994 called for the Bureau to solicit input on the address list from tribal and local governments as well as state governments. The Bureau may be able to find opportunities to obtain more complete coverage by tracking metrics related to the types of governments participating in LUCA and the degree to which tribal, state, and local governments are complementing each other’s address updates. In doing so, the Bureau could ensure that the LUCA program is contributing to accurate enumeration. Tracking these metrics would also give the Bureau valuable feedback on the success of its nationwide outreach and could increase the accuracy of the MAF. The Bureau’s Design and Implementation of LUCA Address Validation Led to Additional Fieldwork Fieldwork in other 2020 Census operations increased as a result of (1) LUCA’s original operational design, and (2) subsequent implementation decisions the Bureau made in response to receiving a larger number of address updates than it expected from participants. 
By design, the Bureau had planned not to review suggested changes in geographic areas previously determined to be high growth, since it had already planned to canvass such areas for addresses door-to-door later. The Bureau received 11 million address updates proposed by participating governments, but about 5.1 million of these did not match addresses in the MAF—approximately two million more than expected. Bureau officials had not formalized any specific estimates but initially expected that participants would propose about 5 million address updates to the MAF, of which about 2.8 million would not match and would need to be reviewed. When the Bureau received these additional updates, it decided to review only a sample of updates in areas not automatically slated for in-field review, passing even more work directly on to Non-Response Follow-Up (NRFU) at a potential cost of more than $25 million (in constant 2020 dollars).

As figure 4 shows, 2.5 million of the 5.1 million new address updates that LUCA participants submitted were in high-growth areas and passed directly on to in-field address canvassing. While the Bureau’s reengineered approach to address canvassing for 2020 substantially reduced fieldwork, this pass-through of additional workload represents a missed opportunity for the Bureau to further reduce costs for in-field address canvassing. With a planned cost of $185 million (in fiscal year 2019 dollars), in-field address canvassing is one of the most expensive census operations, according to the Bureau’s July 2019 lifecycle cost estimate.

Another decision also led to increased workload: the Bureau streamlined its address validation process in response to the higher-than-anticipated number of address updates received.
To manage this workload, the Bureau reviewed only a sample of the address updates suggested by governments with 200 or more addresses otherwise eligible for review (861,000 updates out of 2.5 million) that were in areas not already flagged for in-field address canvassing. As a result, the Bureau added more than 1.6 million address updates to the MAF without review, as shown above, even though they were eligible for in-office address canvassing. The Bureau will attempt to enumerate households during the census through self-response methods, such as online or paper questionnaires. If the Bureau does not initially receive responses, these addresses will become part of the NRFU workload.

Had these addresses been canvassed in office, it is likely that many of them would have been rejected, based on the rejection rate for the addresses that were reviewed. Specifically, the Bureau rejected 39 percent (334,000 out of 861,000) of the address updates it reviewed in its sample. If a similar rejection rate applied to the unreviewed updates, roughly 624,000 additional address updates would have been rejected instead of being included in the enumeration universe, with possible unnecessary NRFU follow-up. Assuming the same average cost of NRFU per case as in 2010, these additional cases receiving census questionnaires could result in an unnecessary $25 million in costs (in constant 2020 dollars).

Standards for Internal Control in the Federal Government indicates that agencies should use quality information to achieve their objectives. The Bureau’s decisions to limit the reviews conducted on submitted LUCA updates mean that the Bureau will have some addresses of unknown quality in the MAF for address canvassing and NRFU, resulting in potentially unnecessary fieldwork. Creating the conditions whereby the Bureau can expand the scope of in-office review of tribal, state, and local additions to the MAF will better position the Bureau to reduce its fieldwork and related costs.
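The $25 million projection above follows from simple arithmetic on the figures cited in this report. A minimal sketch (the rounded 39 percent rejection rate and the 2010-based average NRFU cost per case are the report’s stated assumptions; the per-case dollar amount itself is not given in this report):

```python
# Sketch of the report's NRFU cost projection, using figures cited above.
reviewed = 861_000        # sampled address updates reviewed in office
rejected = 334_000        # updates rejected during that review
unreviewed = 1_600_000    # updates added to the MAF without review

rejection_rate = round(rejected / reviewed, 2)        # 0.39, i.e., 39 percent
projected_rejections = unreviewed * rejection_rate    # roughly 624,000 addresses

print(f"rejection rate: {rejection_rate:.0%}")
print(f"projected unnecessary enumeration cases: {projected_rejections:,.0f}")
```

At the report’s assumed 2010-based average NRFU cost per case, these roughly 624,000 cases account for the potential $25 million (in constant 2020 dollars) in unnecessary follow-up costs.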
The Bureau and OMB Expect to Receive Fewer Appealed Addresses, but Opportunities May Exist to Assess Outcomes of the Appeals Process

The Census Address List Improvement Act of 1994 required that OMB establish a process to adjudicate differences between the Bureau and LUCA participants over proposed address updates to the MAF. The Bureau and the LUCA appeals office that OMB established will conduct the feedback and appeals phases of LUCA, respectively, from July 2019 through January 2020: feedback to participants began in July 2019, and the subsequent appeals process is expected to run through January 2020. The Bureau and OMB expect fewer LUCA appeals for 2020 than in 2010, due in part to the Bureau’s decision to review only a portion of submitted address updates and provisionally accept the rest. In 2010, participants could appeal 13.3 million addresses; according to the Bureau, only about 1.7 million addresses will be eligible in 2020.

According to OMB, as of mid-October 2019, the LUCA appeals office had begun processing files containing appealed addresses from 1,122 participants. Officials indicated the appeals office will not determine the total number and dispositions of addresses processed until after the end of the operation. As in 2010, OMB is giving participants 45 calendar days to appeal the Bureau’s individual address reviews.

Since 2000, the LUCA appeals process has resulted in approval of more than 90 percent of the appeals that participating governments have submitted, including more than 1.6 million appealed addresses (91 percent) in 2010. OMB officials noted that the practice for the appeals process is to side with the participant when the weight of evidence on either side of an appealed address is equal, which may account for the high percentage of approved appeals. OMB is replicating this practice for 2020, according to the final regulation establishing the LUCA appeals process, issued in July 2019.
Yet the Bureau’s post-2010 evaluation showed that, among all forms of late additions to the MAF, addresses reinstated to the MAF because of a LUCA appeal were the least likely to be found valid as either residential or commercial addresses. Ultimately, the Bureau enumerated individuals at 55 percent of such addresses for the 2010 Census, compared to 83 percent of addresses added late to the MAF through other operations. The 2010 LUCA appeals process resulted in the Bureau contacting and enumerating over 700,000 households that otherwise would have been less likely to be enumerated; yet the high rate of erroneous addresses added to the MAF through appeals reinstatement will be an additional source of NRFU workload, making that operation more costly than necessary.

Given that LUCA is one of several operations used to build the MAF, it is important for the Bureau to assess how the high rate of LUCA address updates reinstated through the appeals process affects other operations and, thus, LUCA’s cost-effectiveness. Standards for Internal Control in the Federal Government states that management should use quality information to achieve the entity’s objectives. In its post-2010 evaluation, the Bureau acknowledged that it needed to research the reason for this seemingly low enumeration rate and to form a plan to resolve the cause. However, it has yet to do so. Evaluating the enumeration outcomes of appealed addresses and identifying the factors that led to these results could help reduce the cost of unnecessary enumeration attempts, as well as the costs associated with administering the appeals process.

The Bureau Lacks Data on Costs of Related Address List Development Efforts to Compare LUCA’s Cost-Effectiveness

The Bureau provided us with estimates of what LUCA would cost for the 2020 Census, but it was unable to provide sums for other address-building operations. The Bureau estimates that LUCA 2020 operations will cost $29.6 million.
Among other expenses, this includes certain information technology costs, printed materials for outreach, and salaries for Bureau staff and contractors throughout the decade. Beyond the LUCA operation, the Bureau has several other initiatives that provide information for the MAF, such as the USPS’s Delivery Sequence File and the GSS Program. According to Bureau cost documentation, these operations have been funded through the Bureau’s Geographic Support Program at a level of $59 million annually since 2016. However, the Bureau does not isolate the costs of operations within the Geographic Support Program—information that could indicate the relative cost-effectiveness of LUCA and related operations in updating the MAF.

Bureau officials and stakeholders we spoke with cited the GSS initiative—which processes tribal, state, and local modifications to the MAF throughout the decade—as an alternative design for LUCA. Officials told us that costs for GSS are not tracked separately from other initiatives that update the MAF and the Bureau’s geocoding database. Standards for Internal Control in the Federal Government states that agencies should establish and operate monitoring activities, such as tracking program costs. Additionally, GAO’s 21st Century Challenges: Reexamining the Base of the Federal Government indicates that, to meet current and future challenges, it is important to evaluate whether programs are using the most cost-effective or net-beneficial approaches when compared to other tools and operational designs. Because the Bureau does not isolate costs specific to the various design components it uses to build and update its address list, it is not possible to evaluate the relative cost-effectiveness of LUCA’s current design in the context of other address-list building the Bureau has undertaken for the 2020 Census.
Identifying and tracking these costs would help the Bureau determine the cost-effectiveness of its address-building activities and identify improvements.

Opportunities Exist to Reexamine LUCA’s Role in the Decennial

Observations from LUCA 2020 Identify Challenges for Future Implementation to Address

While the Bureau largely implemented its approach for LUCA 2020 as planned, it missed several opportunities to maximize the benefits of LUCA toward improving the quality and reducing the cost of the census. Specifically, increased fieldwork, time for participants to review their address lists, and use of data on hard-to-count populations all emerged as challenges for the Bureau to address in any future implementation of LUCA or a similar program.

Data from LUCA reviews could have helped administrative records modeling. In 2020, the Bureau is planning to use administrative records to reduce the amount of follow-up it does seeking responses from vacant or nonexistent addresses. Bureau officials noted that the Bureau learns information from its review of the quality of LUCA updates that could benefit its modeling with administrative records, perhaps resulting in more cases where administrative records are deemed good enough to reduce NRFU further. Standards for Internal Control in the Federal Government states that agencies should use quality information to achieve their objectives, in part by obtaining relevant data from reliable sources. The Bureau did not, however, plan to use information about addresses gathered during LUCA—such as during its reviews of address updates during LUCA validation—to help with its use of administrative records for the 2020 Census, nor did it determine how best, and when, to transfer data between the respective Bureau teams to make this happen. Yet having information on the likelihood that addresses exist can help the Bureau tailor its strategy for following up with addresses that do not produce census responses.
In addition, incorporating information learned about addresses added through the appeals process may also improve the results of the Bureau’s modeling with administrative records, which could in turn reduce workload during NRFU.

Time constraints continue to limit participation. Officials of multiple participating governments and other subject matter specialists told us that the constrained timing of LUCA continues to be a barrier to full participation. For 2020, as in prior iterations of LUCA, insufficient time was one of the leading factors behind governments’ decisions not to participate. Our prior work on reexamining the base of the federal government highlights the importance of ensuring that a program is meeting its original purpose. Since its inception, LUCA has been intended to ensure that tribal, state, and local governments have the opportunity to review the Bureau’s decennial address list. For the 2010 Census, the Bureau increased the time governments had to review the MAF from 90 days to 120 days, and it kept this length for 2020. Yet if governments lack the resources needed to review address lists, or if they run out of time, they may not participate, or their address updates may not reflect a comprehensive review of the MAF for their jurisdictions. Bureau officials agreed that more time for governments to participate would be better.

Facilitating increased participation, along with expanding the scope of in-office reviews of LUCA submissions, may, however, require the Bureau to realign its schedule for other phases of tribal, state, and local outreach. Figure 5 shows one potential opportunity for the Bureau to do this. The Bureau scheduled a 5-month gap between the end of its in-office address canvassing (and thus LUCA address validation) and the beginning of in-field address canvassing.
Bureau officials said this period is needed to determine the right number of listers to hire and train, as well as to prepare official address materials needed for later operations. However, the 2020 schedule gave participants less time to submit updates than they could have had if the Bureau’s address validation phase had taken place later. Moreover, as previously noted, participants had from July 2017 to February 2018 to register for LUCA; officials noted that it could be possible to provide the review materials on a rolling basis so that participants who registered early could have more time to review their address lists. Finding opportunities like this to give participants more time for their review could improve the Bureau’s coverage.

The Bureau did not use its data on hard-to-count areas to help guide LUCA. During LUCA 2020, the Bureau missed an opportunity to target efforts to improve address listing in areas it considers hard to count. We have previously reported on the importance of targeting a program’s benefits to those with the greatest needs and the least capacity to meet those needs. The Bureau maintains publicly available data at the census tract level on the extent to which a tract (roughly the population size of an urban neighborhood) is considered hard to count. Bureau officials told us, however, that they had not previously considered reviewing these data regularly when monitoring LUCA participation or prioritizing in-office review workloads. When an address is missing, the people at that address are more likely to be missed by the census. Bureau officials managing LUCA told us that using the Bureau’s data on hard-to-count areas could have given them insight into whether they were receiving LUCA participation for the areas most in need of improvements in census coverage and whether they needed to better target their LUCA outreach.
Moreover, Bureau officials told us that they would prefer to have more opportunity to provide feedback to participants regarding their submitted updates and their address lists. Given the time constraints discussed elsewhere in this report, data showing which participants are in hard-to-count areas could help the Bureau prioritize the governments with which to invest time giving feedback. According to Bureau officials, this information could also help the Bureau prioritize its resources in other address list-building efforts, such as determining the areas in which the Bureau should conduct additional rounds of in-office address canvassing to ensure that recent address updates are not missed.

The Bureau Faces Additional Issues When Reexamining the Role of LUCA for the 2030 Census

Conditions surrounding LUCA have changed since it was first implemented in the 2000 Census. For example, the dissemination of publicly available address data has increased, and the Bureau has developed other mechanisms for governments to provide input to its address list. However, LUCA’s designed role in the census has not fundamentally changed or been reexamined since its authorizing legislation. Moreover, the Bureau will soon begin its process for planning geographic programs for 2030. This presents an opportunity to reexamine LUCA’s contributions to building a complete and accurate address list.

In 2005, we identified criteria for reexamining federal programs in order to address fiscal instability while updating federal programs and priorities to meet current and future challenges. These criteria are based on a need to inform Congress of our insights in order to help its budget and programmatic deliberations and oversight activities. They include whether the program is using the most cost-effective approach when compared to other tools and program designs; whether a program is targeted to those with the greatest need; and what the likely consequences of eliminating an operation would be.
Our review of Bureau documents and evaluations—along with interviews of Bureau officials, subject matter specialists, and state-level LUCA participant stakeholders—identified several issues for the Bureau to resolve with stakeholders, Congress, and other federal agencies as part of the planning process for the 2030 Census.

Assessing whether LUCA should continue to have a role in building the address list. The first issue for the Bureau, Congress, and other stakeholders to resolve is whether LUCA should continue to be a vehicle for tribal, state, and local additions to the MAF. The Bureau receives intergovernmental inputs into the MAF through multiple sources, such as GSS and surveys of local governments to determine jurisdictional boundaries. The Bureau’s decisions on the scope of LUCA address validation for 2020 also mean that the effects of LUCA on address list quality are unclear. Yet a committee of state-level stakeholders and subject matter specialists emphasized the value of having a forum for governments to review the Bureau’s address list—a feature that is currently unique to LUCA. By registering for LUCA under the authority of Title 13 nondisclosure requirements, governments can also receive feedback from the Bureau on their individual address updates, which the chair of a nationwide group of state-level population data officials told us was valuable. Moreover, stakeholders told us that having a program like LUCA late in the decennial cycle may help promote awareness of the census at the state and local level.

Determining how frequently to have governments review the MAF. The method and frequency with which governments can review the MAF is another issue for the Bureau to resolve.
A committee of state-level stakeholders and subject matter specialists told us that having more opportunities for tribal, state, and local review of the MAF during the decade would increase participation, and thus the quality of the MAF, by relaxing the time constraints that have historically deterred participation in LUCA. Bureau officials also told us that a continuous program would provide more opportunities for governments to refine their address lists based on feedback from the Bureau. However, increasing the frequency of address updates, reviews, and appeals during the decade would increase program administration costs, and such a program’s design would need to account for the fact that smaller governments and LUCA nonparticipants already cite a lack of human and financial resources as a barrier to participation.

Considering whether to make it easier for governments to access and share address data. Given the prevalence of modern address sources and services, the question of how closely to protect data on census addresses is another issue for the Bureau to resolve in conjunction with Congress and stakeholders. We have previously recommended that Congress consider revising Title 13 nondisclosure protections for address data. Bureau officials and subject matter specialists we interviewed said that if federal agencies and tribal, state, and local governments could more easily share address lists, address list quality could benefit. Bureau officials have also described scenarios in which it may be possible to enact targeted modifications to Title 13 so that only address data are affected. However, subject matter specialists we interviewed also noted that Title 13 protections can reassure local residents and facilitate participation in building local address lists. Allowing widespread disclosure and use of the Bureau’s address list could also raise questions about which address lists are considered authoritative.
Determining the role that a National Address Database should play in contributing to the Bureau’s address list. Deciding whether or how to leverage an existing publicly accessible address list as part of the Bureau’s decennial efforts is another issue to resolve. We have previously recommended that the agencies responsible for interagency address and geospatial policy take actions to facilitate collection of national geospatial address data. First piloted in 2015 and now managed by the U.S. Department of Transportation (DOT), the National Address Database (NAD) provides publicly available addresses and geographic coordinates to government and non-government users. State-level stakeholders and DOT officials said a centralized, open-source form of address data would benefit public services, such as emergency response. Going forward, however, it will be important to address the resource constraints that limit the NAD’s reach. DOT’s lead official for the NAD said that two permanent staff oversee nationwide outreach and data collection, and at the time of this report, the NAD had data from partners in only 23 states.

These issues have been prompted by developments that have taken place this decennial cycle, such as the development of the NAD and the advent of additional inputs into the MAF such as GSS; therefore, the Bureau has not yet had an opportunity to evaluate them in its decennial planning. Standards for Internal Control in the Federal Government underscores the need to identify, analyze, and respond to significant changes, as well as to use quality information and communicate externally with stakeholders. With strategic planning for 2030 geographic programs in mind, the Bureau has an opportunity to engage with stakeholders, other federal agencies as appropriate, and Congress to resolve these issues and evaluate how various alternatives could affect the cost, quality, and public perception of the census.
The above issues do not exist in isolation, however, and need to be resolved jointly. For instance, decisions to make address data more accessible would increase interagency data sharing and thus incentives for governments to participate in open-source address initiatives like the NAD. Decisions on whether to continue LUCA in its current form will affect the tools, such as GSS, available to tribal, state, and local governments for providing updates to the MAF. As the Bureau engages with affected partners on these issues, it will be important to consider the various scenarios that could flow from resolving these issues in concert with each other.

Conclusions

The Bureau’s implementation of LUCA for 2020 is on track in terms of milestones thus far, and the process for governments to appeal rejected LUCA address updates is ongoing and will continue through January 2020. The Bureau also implemented planned changes to participation options for governments and tracked participation by government. However, the Bureau’s primary metric for representing the coverage of the nation by the LUCA operation does not leverage other information the Bureau already has on the degree of useful overlap in coverage across different levels of participating governments. Identifying and reporting metrics on the extent to which governments participating in LUCA overlap in their coverage of residents, as well as on the characteristics of participants such as type of government and the nature of their geographic area, could provide more complete and useful feedback on the success of LUCA and greater assurance of desired coverage while avoiding gaps.

We also found that opportunities exist for the Bureau to further reduce fieldwork and make its address list-building efforts more cost-effective. In the future, the Bureau could more fully use its in-office address validation process for LUCA to reduce costs and improve decennial accuracy.
Further, identifying the factors that lead to the enumeration outcomes of the LUCA appeals process may also produce lessons learned that could help lower the amount of fieldwork and thus costs. Moreover, maintaining more detailed cost data for the Bureau’s other related address list development efforts will help position the Bureau to evaluate the relative cost-effectiveness of LUCA in building the address list. Likewise, the Bureau could leverage the results of its in-office review of LUCA updates, as well as its evaluation of the appeals process, to inform its administrative records modeling and potentially reduce the number of required in-field NRFU visits.

The Bureau can similarly take additional steps through programs like LUCA to promote greater coverage in the census. By realigning the schedule of LUCA where appropriate, the Bureau could give tribal, state, and local governments more time to review the address list in their areas and thus more time to provide quality updates to the Bureau. Moreover, using data on participation in LUCA and related programs, in concert with existing data on hard-to-count areas, would help the Bureau target its resources for building the address list and conducting decennial outreach to the areas most in need.

We have also identified fundamental issues related to the Bureau’s address list activity that will require a forward-looking, stakeholder-inclusive approach for the Bureau to resolve. Reexamining LUCA and the related issues will not be easy and could take time. The Bureau is uniquely positioned to lead the identification and assessment of the alternatives, and particularly of how they might affect the cost and quality of the decennial census. Reporting out on the alternatives and their justifications, and developing legislative proposals as may be appropriate, will help the Bureau, Congress, and the users of census data benefit from cost and quality improvements in decennials to come.
Recommendations for Executive Action

We are making the following eight recommendations to the Department of Commerce and the Census Bureau:

The Secretary of Commerce should ensure that the Director of the Census Bureau identifies metrics on the extent to which governments participating in LUCA overlap in their coverage of residents, as well as the characteristics of participants such as type of government and geographic area, and reports on such metrics. (Recommendation 1)

The Secretary of Commerce should ensure that the Director of the Census Bureau takes steps to conduct in-office reviews of a greater share of addresses submitted by governments before the addresses are added to the Bureau’s address list for potential fieldwork. (Recommendation 2)

The Secretary of Commerce should ensure that the Director of the Census Bureau, as part of the Bureau’s assessment of LUCA for 2020, consults with OMB to report on the factors that led to the enumeration outcomes of addresses reinstated to the Bureau’s master address list by the LUCA appeals process. (Recommendation 3)

The Secretary of Commerce should ensure that the Director of the Census Bureau identifies and tracks specific costs for related address list development efforts. (Recommendation 4)

The Secretary of Commerce should ensure that the Director of the Census Bureau improves the use of LUCA results to inform procedures of other decennial operations, such as sharing information on address update quality to inform NRFU planning or administrative records modeling. (Recommendation 5)

The Secretary of Commerce should ensure that the Director of the Census Bureau realigns the schedule of LUCA-related programs to provide participants with more time to review addresses.
(Recommendation 6)

The Secretary of Commerce should ensure that the Director of the Census Bureau uses the Bureau’s data on hard-to-count areas to inform geographic activities such as: targeting LUCA outreach to tribal, state, and local governments; planning additional rounds of in-office address canvassing; and providing feedback to tribal, state, and local governments on gaps in their respective address data. (Recommendation 7)

The Secretary of Commerce should ensure that the Director of the Census Bureau, as part of the Bureau’s strategic planning process for geographic programs, reexamines LUCA in conjunction with stakeholders, other federal agencies as appropriate, and Congress to address the issues we have identified, including but not limited to:

Identifying and assessing alternatives and describing corresponding effects on the decennial census.

Reporting out on the assessment of alternatives, including justifications.

Developing legislative proposals, as appropriate, for any changes needed to LUCA and address data in order to implement preferred alternatives. (Recommendation 8)

Agency Comments and Our Evaluation

We provided a draft of this report to the Secretary of Commerce, the Acting Director of the Office of Management and Budget, and the Secretary of Transportation. In its written comments, reproduced in appendix I, the Department of Commerce agreed with our findings and recommendations and said it would develop an action plan to address them. The Department’s response also describes several claims of cost savings and efficiency gains attributable to various address list-building activities. While we have previously reported on the Census Bureau’s 2020 address list-building efforts, we have not audited claims made in the Department’s response or elsewhere regarding potential cost savings from innovations for the 2020 Census. The Census Bureau, Office of Management and Budget, and U.S.
Department of Transportation each also provided us with technical comments, which we incorporated as appropriate. We are sending copies of this report to the Secretary of Commerce, the Undersecretary of Economic Affairs, the Director of the U.S. Census Bureau, the Acting Director of the Office of Management and Budget, the Secretary of Transportation, and the appropriate congressional committees. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2757 or goldenkoffr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II.

Appendix I: Comments from the Department of Commerce

Appendix II: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Ty Mitchell (Assistant Director), Devin Braun, Charles Culverwell, Rob Gebhart, Allison Gunn, Lisa Pearson, Kayla Robinson, Robert Robinson, Cynthia Saunders, and Peter Verchinski made significant contributions to this report.
Why GAO Did This Study

A complete address list is a cornerstone of the Bureau's effort to conduct an accurate census. LUCA is one of several operations the Bureau uses to produce its address list. It gives tribal, state, and local governments the opportunity to review the address list for their areas and provide the Bureau with any updates before the census. GAO was asked to review the status of LUCA, including its effect on other operations, as well as LUCA's overall effectiveness and necessity. This report examines (1) LUCA's status and its likely effects on 2020 field operations, and (2) what considerations the Bureau and other stakeholders could use to reexamine LUCA for 2030. GAO reviewed Bureau plans, analyzed data from LUCA participation and the Bureau's review of submissions, and held 9 discussions on a possible reexamination of LUCA with relevant Bureau officials, a council representing participating governments, and census data subject matter specialists.

What GAO Found

The Census Bureau generally followed the operational design for its Local Update of Census Addresses (LUCA) program, which is intended to give tribal, state, and local governments the ability to review and offer modifications to the Bureau's Master Address File (MAF). The Bureau met milestones, apart from extending the participation window for natural disaster-stricken areas, and generally followed plans for outreach, training, and participation options. However, some decisions created additional fieldwork. The Bureau received more updates from participants than it expected, so it reviewed only roughly 860,000 of the 5.1 million updates that did not match the MAF (see figure below). The rest will be added to potential fieldwork. Had more addresses been reviewed in office, many might have been rejected, based on the rejection rate for reviewed addresses. Avoiding this unnecessary fieldwork could have saved the Bureau millions of dollars when following up with nonresponding households.
The Bureau has not reexamined LUCA with respect to the cost, quality, and public perception of the census since the program was authorized in 1994. Yet much has changed since then, from the tools the Bureau uses in building its address list to the provision of publicly accessible address data. As the Bureau turns to its strategic planning process for 2030, it will have several issues to address regarding the future of LUCA, including:

- whether LUCA should continue to have a role in building the address list, given the advent of other address-building initiatives;
- how often to have governments review the MAF for the census, in light of the costs and benefits of administering such a program more frequently; and
- whether statutory nondisclosure protection of census address data is still needed, given that address data sources and services are more prevalent.

What GAO Recommends

GAO is making eight recommendations to the Department of Commerce, including that the Bureau ensure more LUCA submissions are reviewed and reexamine LUCA to address the related issues GAO identified as part of the Bureau's strategic planning process for the 2030 Census. The Department of Commerce agreed with our findings and recommendations and described several cost savings and efficiency gains—which we have not audited—from their related address list-building efforts. The Census Bureau, Office of Management and Budget, and U.S. Department of Transportation each also provided us with technical comments, which we incorporated as appropriate.
Background

Contractors’ Subcontracting Pre- and Post-Award Responsibilities

Federal law and regulations require that contractors receiving a contract with a value greater than the simplified acquisition threshold must ensure that small businesses have the “maximum practicable opportunity” to receive subcontracting work. In addition, a prospective contractor generally must submit a subcontracting plan for each solicitation or contract modification with a value of more than $700,000—or $1.5 million for construction contracts—whenever subcontracting opportunities exist. Contractors with federal contracts typically use one of three types of subcontracting plans:

- Individual subcontracting plan, which applies to a specific contract, covers the entire contract period including option periods, and contains subcontracting goals;
- Commercial subcontracting plan, which covers the company’s fiscal year and the entire production of commercial items sold by either the entire company or a portion of it (such as a division, plant, or product line) and contains subcontracting goals; and
- Comprehensive subcontracting plan, which is similar to a commercial subcontracting plan and applies only to DOD contracts. Each company reports on subcontracting goals and achievements for a specific fiscal year on a plant, division, or corporate-wide basis. A comprehensive plan may cover a large number of individual contracts.

Federal contractors use these plans to document subcontracting goals as a specific dollar amount planned for small business awards and as a percentage of total subcontracting dollars available to small businesses and socioeconomic categories of small businesses. Contractors also may establish, for specific facilities, a master subcontracting plan that contains all the required elements of an individual plan, except the subcontracting goals.
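As a rough illustration of the dollar thresholds described above, the plan-requirement test can be sketched as follows. This is our own sketch: the function and parameter names are illustrative, not from the FAR or any agency system, and the FAR's various exceptions (for example, for small business prime contractors) are not modeled.

```python
def subcontracting_plan_required(contract_value_usd: int,
                                 is_construction: bool,
                                 subcontracting_possibilities_exist: bool) -> bool:
    """Illustrative sketch of the FAR dollar thresholds described above.

    A plan is generally required when the contract or modification value
    exceeds $700,000 ($1.5 million for construction) and subcontracting
    opportunities exist. Exceptions in the FAR are intentionally omitted.
    """
    threshold = 1_500_000 if is_construction else 700_000
    return contract_value_usd > threshold and subcontracting_possibilities_exist
```

Under this sketch, a $7 million construction contract with subcontracting opportunities would generally require a plan, while an $800,000 construction contract would fall below the higher construction threshold.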
Because a master plan does not include specific subcontracting goals, an individual subcontracting plan or an addendum typically provides the goals for specific contracts associated with the master subcontracting plan. After a contract is awarded, the contractor must periodically submit to the government a subcontracting report that describes progress toward meeting these goals. Individual subcontracting plans require reporting on a single contract, while commercial and comprehensive subcontracting plans allow for consolidated reporting of multiple contracts on a division- or company-wide basis. Contractors must report their subcontracting achievements through eSRS, a web-based government-wide system that both contractors and agency contracting officers can access. The FAR requires contractors to submit individual subcontracting reports (ISR) and summary subcontract reports (SSR) (see table 1). These reports show contractors’ progress toward meeting their small business subcontracting goals.

Contracting Officers’ Subcontracting Program Pre- and Post-Award Responsibilities

Several regulations, processes, and procedures dictate contracting officers’ responsibilities for oversight of subcontracting plans during the pre-award and post-award phases of the acquisition process. Before making an award, the FAR requires that contracting officers review the subcontracting plan to help ensure that the required information, goals, and assurances—such as a contractor committing to submit periodic reports to the government to determine the extent of compliance with the subcontracting plan—are included. Additionally, the FAR requires contracting officers to provide the SBA Procurement Center Representative (PCR)—SBA staff whose responsibilities include supporting agency contracting opportunities for small businesses—with an opportunity to review the proposed contract, including the subcontracting plan and supporting documentation.
After a contract or contract modification containing a subcontracting plan is awarded or an existing subcontracting plan is amended, the FAR requires that contracting officers monitor the prime contractor’s compliance with its subcontracting plan. In carrying out their post-award oversight responsibilities, contracting officers are required by the FAR to (1) ensure contractors file their subcontracting reports in eSRS within 30 days of the close of each reporting period (a report is also required for each contract within 30 days of contract completion); (2) review ISRs, and where applicable SSRs, in eSRS within 60 days of the reporting end date; and (3) acknowledge receipt of, accept, or reject the reports in eSRS (see fig. 1). The FAR requires agencies to perform annual evaluations of, and report on, a contractor’s performance when work under the contract has been completed. Small business subcontracting is one evaluation area for which agencies rate a contractor’s performance. Agencies use the Contractor Performance Assessment Reporting System to collect and manage the library of Contractor Performance Assessment Reports. Agency contracting officers are to consider information on a contractor’s past performance from these reports when making future contract award decisions, including a contractor’s actions for previously awarded contracts that had a small business subcontracting plan. The FAR also requires contractors to comply in good faith with the agreed-upon subcontracting plan goals and requirements. When a contractor fails to meet the small business goals in the subcontracting plan, the contractor must provide a rationale for not being able to meet the goals. In determining whether a contractor failed to make a good-faith effort, a contracting officer must look at the totality of the contractor’s actions, consistent with the information and assurances provided in its subcontracting plan, and consider the rationale the contractor provided.
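The 30- and 60-day windows described above reduce to simple date arithmetic. The following is a minimal sketch under our own naming (these functions and constants are illustrative and not part of eSRS or any agency tool):

```python
from datetime import date, timedelta

# Windows described in FAR § 19.705-6(f), as summarized above.
FILING_WINDOW_DAYS = 30   # contractor files the report within 30 days of period close
REVIEW_WINDOW_DAYS = 60   # contracting officer reviews within 60 days of the reporting end date


def report_due_date(period_end: date) -> date:
    """Date by which the contractor's ISR/SSR should be filed in eSRS."""
    return period_end + timedelta(days=FILING_WINDOW_DAYS)


def review_due_date(period_end: date) -> date:
    """Date by which the contracting officer should review the report."""
    return period_end + timedelta(days=REVIEW_WINDOW_DAYS)


def days_late(period_end: date, submitted_on: date) -> int:
    """Days past the filing deadline; 0 if the report was filed on time."""
    return max(0, (submitted_on - report_due_date(period_end)).days)
```

By this reckoning, a report for a period ending March 31, 2016 that was filed on June 1, 2016 would be 32 days late, which is the kind of calculation underlying the lateness figures cited later in this report.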
The contractor’s failure to meet its subcontracting goals does not, in and of itself, constitute failure to make a good-faith effort. Failure to submit required subcontracting reports as required by the FAR also may factor into contracting officers’ determinations. If a contracting officer determined that a contractor failed to make a good-faith effort, the FAR requires the contracting officer to assess liquidated damages (monetary assessments for breaching the contract) against the contractor.

SBA’s Role in Subcontracting

SBA’s Office of Government Contracting administers SBA’s subcontracting assistance program. In this office, headquarters and field staff have responsibilities to assist small businesses in meeting requirements to receive government contracts as subcontractors. SBA staff have related responsibilities in both the pre- and post-award acquisition phases. For example, SBA’s PCRs and Commercial Market Representatives (CMR) play a role in helping to ensure that small businesses gain access to subcontracting opportunities. In particular, a PCR’s key responsibilities include reviewing proposed agency contracts and making recommendations to agency contracting officers. PCRs also review proposed subcontracting plans and provide advice and recommendations on them to contracting officers. Key responsibilities of CMRs include counseling small businesses on obtaining subcontracts and conducting reviews, including compliance reviews, of large prime contractors with subcontracting plans. SBA’s standard operating procedure (SOP) for the subcontracting assistance program provides guidance for how CMRs conduct reviews. Although SBA conducts monitoring activities, the awarding federal agency remains responsible for overseeing and enforcing compliance with a subcontracting plan throughout the life of the contract.
In the case of DOD, in addition to the components within the agency that award and monitor contracts, the Defense Contract Management Agency (DCMA) also provides contract administration services for certain DOD contracts. SBA and DCMA may conduct compliance reviews jointly to evaluate prime contractor subcontracting programs supporting specific DOD contracts administered by DCMA. SBA is also authorized to enter into agreements with other federal agencies to conduct compliance reviews and further the objectives of the subcontracting program. We discuss SBA oversight in more detail later in the report.

Agency Small Business Subcontracting Goals

Annually, SBA negotiates with agencies to establish individual small business subcontracting goals based on recent subcontracting achievement levels by each agency. Agencies awarding contracts with small business subcontracting plans aim to provide opportunities to promote the use of small businesses, veteran-owned small businesses, service-disabled veteran-owned small businesses, Historically Underutilized Business Zone small businesses, small disadvantaged businesses, and women-owned small businesses. These efforts can help agencies achieve their individual small business subcontracting goals.

Selected Agencies Could Not Demonstrate They Consistently Implemented All Required Pre-Award Procedures for Subcontracting Plans

The four agencies we reviewed—DLA, GSA, Navy, and NASA—demonstrated that contracting officers reviewed and approved subcontracting plans in most of the contracts in our sample. However, they could not demonstrate they followed procedures for making a determination of subcontracting possibilities for all of the contracts we reviewed without a subcontracting plan. Agencies also could not demonstrate they followed procedures related to PCR reviews in about half of the contracts reviewed.
Selected Agencies Generally Demonstrated That Contracting Officers Reviewed and Approved Subcontracting Plans

Review and Approval of Subcontracting Plans Mostly Documented

The four agencies provided documentation to show that contracting officers reviewed and approved subcontracting plans in most of the 26 contracts that had subcontracting plans. FAR §§ 19.705-4 and 19.705-5 contain contracting officer responsibilities related to reviewing a proposed subcontracting plan and determining its acceptability. For 25 of the 26 contracts we reviewed with a subcontracting plan, the agencies provided documentation showing the contracting officer reviewed the subcontracting plan. In some instances, we also found specific agency guidance for, and checklists or memorandums documenting the reviews of, subcontracting plans. For example:

GSA has guidance for its contracting officers when reviewing subcontracting plans. Specifically, GSA’s Acquisition Manual includes a checklist for reviewing subcontracting plans and ensuring the plans meet FAR requirements. Contracting officers used the checklist in their reviews for five of the six GSA contracts we reviewed with a subcontracting plan. The checklist also documents whether the total planned subcontracting dollars and percentages, the method for developing these goals, and information about supplies or services that will be subcontracted are acceptable to the contracting officer.

DOD’s guidance on subcontracting program business rules and processes contains a specific DOD checklist for subcontracting plan reviews. Contracting officers used the DOD checklist for three of 14 DLA and Navy contracts with a subcontracting plan that we reviewed. In addition to documenting the extent to which a subcontracting plan meets FAR and Defense Federal Acquisition Regulation Supplement requirements, the checklist also reflects certain requirements related to master and commercial subcontracting plans.
The checklist is optional for contracting officers to use when reviewing subcontracting plans. NASA also has guidance that includes steps contracting officers should take when conducting subcontracting plan reviews. For two of the six NASA contracts with a subcontracting plan that we reviewed, we found a checklist that the contracting officer used or a memorandum the contracting officer prepared that detailed the subcontracting plan review, including proposed subcontracting goals. For almost all the contracts we reviewed that did not have a specific checklist or memorandum to document the contracting officer’s review, we found other evidence, such as a contracting officer’s signature on the subcontracting plan, acknowledging review of the plan. Additionally, for one Navy contract with a contract award value of more than $13 million and with an individual subcontracting plan, we found evidence that, after reviewing the subcontracting plan, the contracting officer requested that the contractor make corrections to it. For one DLA contract we reviewed, based on the limited documentation provided, we were unable to determine the extent to which the subcontracting plan was reviewed. DLA officials stated at the time of our review that they were unable to determine if the subcontracting plan was reviewed. We also obtained documentation that demonstrated the subcontracting plan was approved for most of the contracts—21 of 26—we reviewed with a subcontracting plan. For example, we obtained documentation with the contracting officer’s signature on the subcontracting plan (approving the plan), the contracting officer’s signature approving the contract (which included the subcontracting plan), or a signed memorandum that documented approval of the plan. 
However, we identified five contracts across DLA, Navy, and GSA that had limited documentation (three contracts) for approval of the subcontracting plan, or for which we could not determine whether the subcontracting plan was approved (two contracts). For one DLA contract with an award amount of $15 million and with an individual subcontracting plan, we were unable to determine if the subcontracting plan was approved. Documentation we reviewed, including DLA emails, did not indicate whether the subcontracting plan was approved. In our review of the subcontracting plan, the section of the plan documenting its approval was not completed. Additionally, according to DLA officials, the contract file does not contain any record of the contracting officer’s signature on the subcontracting plan. For two Navy contracts with award amounts of about $17 million and about $32 million and both with individual subcontracting plans, we found limited documentation demonstrating approval of the subcontracting plan for the first contract and, based on the lack of documentation, were unable to determine if the second contract was approved. For the first contract, we found a checklist with signatures demonstrating review of the subcontracting plan by the contracting officer and other officials. However, the subcontracting plan was not signed by the contracting officer as the approval/signature field in the subcontracting plan was empty. For the other contract, Navy officials could not provide any documentation showing approval of the subcontracting plan. The subcontracting plan was not signed by a Navy contracting officer or other Navy staff, and according to Navy officials, they were unable to find a signed subcontracting plan in the pre-award contract file. For two GSA contracts with individual subcontracting plans, we also found limited documentation approving the subcontracting plan. 
Similar to one of the Navy contracts discussed above, we found checklists with signatures demonstrating reviews of the subcontracting plan by the contracting officer and other officials. However, in both of these instances, the contracting officer did not sign the approval section of the subcontracting plan. Additionally, for one DLA contract we reviewed with an individual subcontracting plan and contract award amount of about $18 million, while we found documentation indicating that the contract had been approved, DLA could not provide documentation for a DOD requirement related to a socioeconomic subcontracting goal. Specifically, the subcontracting plan for this contract listed the small disadvantaged business goal at less than 1 percent. According to Defense Federal Acquisition Regulation Supplement § 219.705-4, a small disadvantaged business goal of less than 5 percent must be approved one level above the contracting officer. In our review of this contract, DLA could not provide documentation specifically showing a higher-level approval for the goal of less than 1 percent. As a result, we were unable to determine that this subcontracting goal was approved at the appropriate level.

Subcontracting Possibilities Determination Not Properly Documented

In addition to the 26 contracts with subcontracting plans, we also reviewed another six contracts that initially appeared to require a subcontracting plan (based on data in FPDS-NG) but did not have one. For three of the six contracts, the contracting officer or relevant official did not document why the contract had no subcontracting possibilities, or prepared the required documentation years after the contract award.
For contracts over $700,000, the FAR generally requires contracting officers to award the contract with a subcontracting plan or to make a determination that no subcontracting possibilities exist. If the contracting officer determines that there are no subcontracting possibilities, the determination should include a detailed rationale, be approved at one level above the contracting officer, and be in the contract file. GSA accounted for one of the three contracts and NASA for the remaining two. A subcontracting plan was not included in a GSA construction contract with an award amount of about $7 million (which met requirements for a small business subcontracting plan based on the award amount and type of contract). GSA did not have any documentation and could not tell us why the contract did not require a subcontracting plan or had no subcontracting possibilities, or why a subcontracting plan was not included in the contract. Specifically, GSA provided a response explaining the agency did not have documentation to support why the contracting officer (who is no longer with the specific contracting center that awarded the contract) determined there were no subcontracting possibilities. For two NASA contracts, NASA officials provided documentation signed by one level above the contracting officer, but the documentation was prepared years after the contract award. For the first contract, with an award value of almost $8 million and awarded in March 2016, the determination providing the rationale for no subcontracting possibilities was created and signed in March 2019, about 3 years after the contract was awarded instead of when the award was made. For the second NASA contract, awarded in September 2017 with a contract award amount of about $2 million, NASA officials explained that in 2017, the initial procurement was estimated at a dollar amount below the threshold for a subcontracting plan and therefore no subcontracting plan was required in the solicitation. 
The contract value was later changed to add two option periods, which put the estimate over the subcontracting plan threshold. NASA officials said the contracting officer’s documentation to determine the need for a subcontracting plan was inadvertently omitted from the file. As a result of our document request, the reviewing contracting officer noted that the file did not properly address the issue of the increased estimate relative to subcontracting plan requirements. NASA then conducted a review to determine if the award met the requirements for a subcontracting plan or if it would have been waived in 2017. Based on the recent review, NASA officials determined that a requirement for a subcontracting plan would have been waived in 2017 based on, among other factors, the specific product purchased through the contract and the structure of the contract, and they prepared a memorandum (in July 2019) documenting this review and conclusion. A 2018 DOD OIG report on small business subcontracting at two Army contracting command locations found similar issues. Specifically, the report found that of 50 contracts the DOD OIG reviewed, the two contracting command locations awarded six contracts, valued at $330.7 million, without a subcontracting plan or a contracting officer’s determination that no subcontracting possibilities existed. The three other contracts we reviewed—two at DLA and one at GSA— had appropriate documentation directly explaining or a rationale supporting why no subcontracting plan was in place. For example, for one contract, DLA officials provided a memorandum signed at one level above the contracting officer that documented the specific nature of the contract for a particular type of metal, the work required, and ability of the contractor to perform the work in-house. For the second contract, DLA officials provided information that the contract was awarded through the AbilityOne Program—which does not require a subcontracting plan. 
The GSA contract was an automotive contract in which the vendor initially represented itself as a large business and had submitted a subcontracting plan. However, after the contract award, GSA documented a modification to the contract that reclassified the vendor as a small business, based on size standards for the North American Industry Classification System codes for the specific acquisition. Therefore, the subcontracting plan was no longer required.

Agencies Could Not Demonstrate They Followed Procedures Related to PCR Reviews in Half of the Contracts We Reviewed

For half of the contracts we reviewed with a small business subcontracting plan (individual or commercial), the agencies could not demonstrate that procedures related to PCR reviews were followed. According to FAR § 19.705-5(a)(3), when an agency is making a contract award that includes a subcontracting plan, contracting officers should notify the appropriate PCR of the opportunity to review the proposed contract, including the associated subcontracting plan and supporting documentation. More specifically, for 12 of 24 contracts we reviewed with an individual or commercial subcontracting plan, the agencies could not provide documentation or we were unable to determine from the documentation provided whether the contracting officer gave the SBA PCR a review opportunity and whether the PCR may have conducted a review. Of these 12 contracts, DLA and Navy accounted for 10, while GSA and NASA accounted for one each. Five of the six DLA contracts we reviewed did not have any documentation or lacked sufficient documentation to determine if the contracting officer or other official provided the PCR with an opportunity to review the contract, and whether a PCR review occurred.
More specifically, DLA was unable to provide any documentation related to the PCR review process for three contracts with a subcontracting plan and told us they could not locate such documentation in the contract file. For one of these three contracts, DLA referred us to DCMA for additional documentation, but the documentation DCMA provided did not confirm whether the PCR had an opportunity to review the contract. For the remaining two of five contracts, DLA provided documentation, including a review by DCMA’s Small Business Office for one of the contracts, but this documentation did not demonstrate the contract was provided to an SBA PCR for review. Five of six Navy contracts we reviewed that had individual subcontracting plans also lacked this documentation. Specifically, Navy was unable to provide documentation specific to the PCR review process for three contracts. For two other contracts, Navy provided documentation of various internal reviews. For example, Navy provided a checklist for one contract showing that the contract was reviewed and signed by the contracting officer and a small business specialist. However, the section of the checklist where the PCR would sign indicating review of the contract and subcontracting plan was left blank. For the other contract, Navy provided documentation that an Assistant Deputy Director for the procuring contracting command center had reviewed and signed the subcontracting plan, but the PCR signature field was blank. In both cases, no other documentation indicated whether the contract was sent to the PCR for review. Therefore, we were unable to determine if a PCR reviewed the plan or was provided the opportunity to review the plan. GSA and NASA each had one contract (of the six we reviewed for each) for which they could not provide any documentation related to the PCR review process. Both of these contracts had an individual subcontracting plan. 
For the remaining 12 contracts across the four agencies, the agencies provided documentation demonstrating that the PCR was given the opportunity to and had reviewed the contract and associated subcontracting plan. For these contracts, we obtained documentation such as a memorandum, checklist, or email showing the PCR had reviewed and provided concurrence with the subcontracting plan, or commented on the proposed goals in the plan. According to officials from three of the four agencies we reviewed, contracting officers have a large workload with responsibility for a large number of processes and reviews, which may result in a specific process or task—such as coordinating the PCR review—being missed. Additionally, according to NASA officials, the PCR review process may occur but not be documented for some NASA contracts.

Most of the Contracts We Reviewed Had Limited Post-Award Oversight of Compliance with Subcontracting Plans

The selected agencies provide some training to contracting officers on monitoring subcontracting plans. However, for most of the 26 contracts we reviewed with a subcontracting plan, contracting officers did not ensure contractors met their subcontracting reporting requirements. Contracting officers also accepted subcontracting report submissions with erroneous subcontracting goal information for several contracts. For more than half of the 26 contracts, contractors reported that they met or were meeting their small business subcontracting goal.
Agencies Provide Some Training to Contracting Officers on Subcontracting Plans

Officials from all four agencies told us that they provide periodic training to contracting officers related to monitoring subcontracting plans, as illustrated in the following examples:

- NASA: According to a NASA official, NASA conducted training at the Kennedy Space Center in October 2018 and October 2019 that focused on whether contracting officers should accept or reject an ISR, and how to assign a Contractor Performance Assessment Report rating. The agency also conducted training at the Goddard Space Flight Center in October 2018.
- GSA: GSA’s Office of Small Business Utilization provided a refresher on eSRS reporting, including how to review the report in eSRS, for contracting officers in May 2018. They also provided training to contracting officers in October 2019 on reviewing ISRs and SSRs, including understanding how to review an ISR and ensuring timely submissions of SSRs.
- DLA: According to DLA staff with the DLA Contracting Services Office, when a contract requires a subcontracting plan, the office’s eSRS coordinator recommends that contracting personnel responsible for administering subcontracting plans take the Defense Acquisition University online course about eSRS.
- Navy: According to a Navy official, DOD has conducted extensive training to address eSRS known issues and data collection and guidance on the proper review of ISRs. Additionally, Navy contracting officers can enroll in a 5-day course on subcontracting offered by the Defense Acquisition University. According to Defense Acquisition University staff, in addition to the 5-day classroom course, the university also offers other training online related to subcontracting.
Contracting Officers Did Not Ensure Contractors Met Their Reporting Requirements for Many Contracts We Reviewed

For more than half of the 26 contracts we reviewed with a subcontracting plan, agency contracting officers did not ensure contractors met their reporting requirements. Specifically, 14 of 26 contracts with subcontracting plans did not have all required ISR or SSR submissions. Three of the four agencies—DLA, NASA, and Navy—accounted for the 14 contracts without all the required submissions. For the remaining 12 contracts we reviewed, the agencies provided documentation showing that contractors submitted all required ISR or SSR submissions for these contracts. FAR § 19.705-6(f) requires contracting officers to monitor the prime contractor’s compliance with subcontracting plans to ensure that subcontracting reports (ISRs and, where applicable, SSRs) are submitted in eSRS in the required time frames. The contracting officer is also to review the reports in the required time frames, acknowledge receipt of, and accept or reject the reports.

Limited Monitoring of Contractor Report Submissions

Our review of 26 contracts with subcontracting plans found limited monitoring of contractor report submissions. Specifically, we found the following for each agency (see table 2):

DLA. Five of the six DLA contracts we reviewed did not have all of the required ISR or SSR contractor submissions. For example, for a $6.6 million contract, with a commercial subcontracting plan that was awarded in fiscal year 2016, we could not locate any SSRs in eSRS. Based on limited documentation DLA provided, the contractor submitted only one SSR for the duration of the contract and did so by email to the contracting officer in November 2018.
This document was not an official SSR and it did not include required information such as the vendor’s number, information on who submitted the report from the contractor, a self-certification statement attesting to the accuracy of the report, or acceptance or sign off by a DLA official. Four other DLA contracts with individual subcontracting plans had multiple missing submissions. For two of these contracts, the agency could not explain why the reports were missing, and for the other two contracts, the contractors were not aware of the SSR reporting requirement, according to a DLA official.

NASA. Similar to DLA, five of the six NASA contracts we reviewed did not have all of the required ISR or SSR submissions. For example, for a $4.6 million contract with an individual subcontracting plan awarded in fiscal year 2016, the contractor submitted ISRs for 2016 and 2017 and the SSR for 2016. However, according to information we reviewed in eSRS and a NASA official, the contractor did not submit any ISRs for 2018 and 2019, and did not submit any SSRs for 2017 or 2018. The official stated that there was contracting officer turnover during this contract, and the contracting officer monitoring the contract at the time of our review could not find any documented explanation for the reports not being submitted. The same agency official explained that for another contract, the contractor experienced issues submitting documents in the electronic system initially and that there were personnel changes around the time the missing report was due. Additionally, for another contract awarded in 2017 for $3.8 million, the contractor did not submit any SSRs. We discuss the two remaining NASA contracts in our discussion of contracts with subcontracting report submissions that were submitted well past their due dates.

Navy. Four of the eight Navy contracts we reviewed did not have all the required report submissions.
For example, for one contract awarded for $16.6 million, the contractor submitted the first two required ISRs and an SSR for fiscal year 2016, the year in which the contract was awarded. However, we did not locate any other required submissions in eSRS for subcontracting activity in fiscal year 2017, the year in which the contract ended. A Navy official told us it is not unusual for information related to monitoring and compliance of subcontracting plans to be missing from the contract files. The three remaining contracts with individual subcontracting plans also had missing SSRs, and the agency did not explain why these submissions were missing.

GSA. The six GSA contracts all had the required report submissions.

Additionally, contractors submitted ISRs or SSRs well past their required due dates for at least four contracts. For example, for one Navy contract and one DLA contract, we found that the contractors submitted an ISR more than 125 days late and almost 50 days late, respectively. For two NASA contracts, contractors submitted reports after they were due. For one of these NASA contracts, we found that the March 2016 and September 2016 ISRs were submitted well past their due dates—more than 400 days and more than 150 days late, respectively. For the second NASA contract, the contractor did not submit any of the required reports during the life of the contract and only submitted one final ISR when the contract ended. This contract was awarded in fiscal year 2016 and ended in August 2018. According to a NASA official, the failure to submit the required subcontract reports reflected both contractor error and insufficient contracting officer oversight. Additionally, the contractor did not submit any SSRs for this contract as required by the FAR.

In another four instances, contractors began submitting the required reports (ISRs and SSRs) after we inquired about the specific contracts with the respective agencies.
For example, the contractor for one NASA contract, which also had some missing subcontracting reports, submitted its 2017 SSR more than 600 days after it was due, and only after we inquired with NASA about the SSR. We also found that while contractors for two DLA contracts submitted the required ISRs, they did not submit the required SSRs. In one of these two instances, an agency official told us that the contracting officer was unaware of the need for the contractor to submit both an ISR and an SSR, and did not inform the contractor of this requirement. For this contract, which was awarded in fiscal year 2017, the contractor submitted its first SSR in October 2019, after we inquired with DLA officials about the lack of SSR submissions. For the second of these two contracts, which also was awarded in fiscal year 2017, the contractor informed the agency that it had not submitted SSRs in the past because it was unaware of this requirement, and did not submit an SSR until October 2019.

Finally, for one other DLA contract, the only ISR we found in eSRS was submitted by the contractor in October 2019, after we inquired about the ISR and more than 2 years after the contract was awarded. This contractor submitted reports outside of eSRS for two of the four prior reporting periods; these reports did not have acceptance or sign-off by a DLA official. In addition, while a DCMA staff member told us that the contractor did not submit its September 2017 and March 2018 ISRs, the staff member did not provide an explanation for why these reports were not submitted.

Reviews Selected Agencies Conducted Also Found Limited Monitoring of Contractor Report Submissions

Additionally, officials from all four agencies told us they conduct some type of periodic review related to oversight of subcontracting plans, which can include determining compliance with the subcontracting plan and related reporting requirements.
In some of these reviews, the agencies had findings similar to ours. For example:

NASA: According to an agency official, NASA’s Office of Small Business Programs conducts procurement management reviews of subcontracting plans every 2–3 years. The official told us that these reviews serve to monitor whether (1) prime contractors submitted the required ISRs and (2) contracting officers assessed the subcontracting plans and reviewed the ISRs, among other things. The results of a review conducted in May 2017 identified missing ISRs and reports that were accepted with incomplete information.

Navy: According to a Navy official, the Navy Office of Small Business Programs conducts Procurement Performance Management Assessment Program reviews. The official stated that these reviews are conducted every 3 years at each of Navy’s command centers that conduct buying activities. If a command center receives an unsatisfactory or marginal rating, then the Deputy Assistant Secretary of the Navy for Acquisition and Procurement will perform follow-up reviews every 6–12 months until the issues are addressed. As part of the review process, Navy reviews subcontracting plans and data in eSRS to determine how subcontracting plans are monitored and evaluated. A review conducted in June 2018 concluded that monitoring of prime contractors’ subcontract reporting and compliance was inadequate.

GSA: According to agency officials, GSA’s Office of Small Business Utilization, in conjunction with GSA’s Procurement Management Review team, conducts Small Business Compliance Reviews. Annually, the agency selects 4–6 regions from which to select a sample of contracts to review for both pre-award and post-award compliance. According to agency officials, these reviews are designed to help determine if subcontracting goals were met, among other subcontracting-related requirements.
A review GSA conducted in March 2019 for one contract noted that the subcontracting plan could not be located in the contract file and that there was a lack of post-award subcontracting plan oversight, including of contractor reports on subcontracting activities.

DLA: According to a DLA official, various DLA offices, including the DOD Office of Small Business Programs, monitor eSRS regularly to ensure contracting officers are reviewing and processing contractor submissions through the system. The official stated that these reviews happen at various times throughout the year. For example, the Small Business Director at DLA Distribution—an organization within DLA—checks eSRS on a biweekly basis, and DLA Aviation—another organization within DLA—conducts semi-annual reviews of eSRS.

The DOD OIG had similar findings regarding oversight of contractor compliance with subcontracting plan requirements, including contractor reporting requirements. For example, in 2018 the DOD OIG reported that contracting officers at two Army contracting commands did not monitor prime contractors’ compliance with subcontracting plans. The DOD OIG made three recommendations to address the findings, which, according to the DOD OIG, have been implemented.

As previously mentioned, contracting officers are responsible for a large number of processes and reviews, which may result in a specific process or task being missed. According to officials from Navy and NASA, other factors also contributed to the limited documentation for certain post-award requirements for the contracts we reviewed. For example, the agency officials stated that contracting officers focus more on the award process than on contract administration and fail to properly consider the requirement that subcontracting plans become a material part of the contract on award, resulting in a lack of due diligence after the award.
Officials from NASA and Navy also cited eSRS not providing notifications to contracting officers and contractors when reports are not submitted, among other things, as a contributing factor in missing ISRs. Additionally, according to NASA officials, eSRS does not generate a list of prime contractors that are delinquent in submitting their SSRs.

Contracting Officers Accepted Several Subcontracting Report Submissions with Erroneous Information

For the 26 contracts we reviewed with a subcontracting plan, contracting officers accepted several report submissions containing incorrect information about subcontracting goals. According to FAR § 19.705-6(j), after a contract containing a subcontracting plan is awarded, the contracting officer must reject a contractor’s subcontracting report submission if it is not properly completed—for example, if it has errors, omissions, or incomplete data.

In fulfilling their responsibilities related to FAR § 19.705-6(j), contracting officers can identify omissions that a contractor may need to address. For example, in reviews of ISRs for a $31.8 million Navy contract awarded in fiscal year 2017, the contracting officer noted concerns about the contractor not meeting its socioeconomic goals and asked the contractor to provide an explanation for why the goal was not being met. The contracting officer rejected the September 2018 ISR and later rejected the September 2019 ISR twice because the contractor either did not provide an explanation for not meeting certain socioeconomic goals or failed to describe good-faith efforts to do so. The contractor submitted a revised ISR in December 2019, which included a description of its good-faith efforts to meet the socioeconomic goals. Upon review, the contracting officer accepted the submission, stating that it seemed clear from the information provided that the contractor had put forth a good-faith effort to meet the goals.
However, of the 21 contracts we reviewed that required contractor ISR submissions (which provide information on approved subcontracting goals and achievements toward them), nine had one or more submissions that contracting officers accepted with errors or unexplained conflicting information related to subcontracting plan goals (see table 3). Specifically, all nine contracts lacked explanations of the discrepancies in the ISR or other documentation we reviewed. We discuss the nine contracts in more detail below:

NASA: Contracting officers accepted multiple ISRs with errors or unexplained conflicting information for three NASA contracts. In the first contract, awarded in fiscal year 2017 for $3.8 million, the contractor combined small business subcontracting goals (listed as whole dollars and percent of total subcontracting dollars) from two different subcontracting plans associated with the contract into one ISR. However, the dollar amount reported in the ISR as the subcontracting goal—about $177,000—reflected the small business goal from only one of the subcontracting plans; the total for the two plans would have been about $309,000. As a result, the reported percentages of subcontracting to small businesses—both of total subcontracting and of the total contract value—were incorrect. In the second contract, awarded in 2016 for $4.6 million with a planned small business subcontracting total of about $2 million, the contractor listed an overall small business subcontracting goal different from the approved subcontracting goal in three ISRs, and there was no documentation explaining the difference. For the third contract, awarded in fiscal year 2016 for $45.2 million with a planned small business subcontracting goal of 10 percent of total subcontracting dollars, the contractor listed this goal incorrectly in two ISRs.
According to a NASA official, at the time of our review, the contracting officer was working with the contractor to correct the error.

DLA: For one contract awarded in 2017 for $34.1 million with a planned subcontracting total of about $11 million, a DLA contracting officer accepted a September 2019 ISR that listed the small business goal at 90 percent of the total subcontracting dollars for the contract instead of the 87.4 percent (base) or 87.6 percent (option years) in the contract addendum. The actual cumulative subcontracting percentage reported in the ISR was 88.1 percent, which met the goal in the addendum, but not the 90 percent goal in the accepted September 2019 ISR. We could not identify any information in the ISR explaining the conflicting information. Additionally, when calculating the amount of cumulative dollars awarded to small business concerns, the contractor appeared to have excluded about $54,000 in subcontracting, which was included in a separate line item in the ISR for women-owned small business concerns. As a result, we were unable to determine whether this contractor had been meeting its small business goal. For a second contract, also awarded in 2017, for $74.9 million with a planned subcontracting total of about $23 million, the contractor reported the approved small business goal of 96 percent of total subcontracting dollars in the March 2018 and September 2018 reports. However, in the March 2019 and September 2019 ISR submissions for this contract, the contractor reported small business goals of 98.5 percent and 74.8 percent, respectively. We found no documentation explaining why the contractor reported goals in the 2019 ISRs that differed from the approved 96 percent goal.
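The goal comparisons described above are mechanical checks that a contracting office could script rather than perform by hand. Below is a minimal sketch of such a cross-check; the function and data structures are hypothetical (eSRS does not expose this interface), and the dollar figures approximate the NASA example above, with the second plan's goal inferred from the reported combined total of about $309,000.

```python
# Hypothetical cross-check of an ISR's reported small business goal
# against the approved subcontracting plan(s) on a contract.

def check_isr_goal(reported_goal_dollars, plan_goal_dollars):
    """Compare an ISR's reported goal with the sum of approved plan goals.

    Returns a discrepancy message, or None if the figures match.
    """
    expected = sum(plan_goal_dollars)
    if reported_goal_dollars != expected:
        return (f"reported goal ${reported_goal_dollars:,} does not match "
                f"approved combined goal ${expected:,}")
    return None

# Two approved plans; the second goal (~$132,000) is inferred from the
# reported ~$309,000 combined total and is illustrative only.
plans = [177_000, 132_000]
print(check_isr_goal(177_000, plans))  # flags the discrepancy
print(check_isr_goal(309_000, plans))  # no discrepancy
```

A check of this kind would have flagged the ISR that carried only one plan's $177,000 goal, since the approved plans together totaled about $309,000.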
Navy: For one Navy contract, which was awarded for $13.5 million in fiscal year 2018 with a planned subcontracting total of $2.7 million, the contracting officer notified the contractor in the September 2018 and March 2019 ISRs that the small disadvantaged business goal of 0 percent of total subcontracting dollars in these submissions did not match the 25 percent goal in the approved subcontracting plan. The contractor corrected the error and the contracting officer accepted the revised reports. In the September 2019 submission, the contractor once again reported that particular goal as 0 percent, but the contracting officer did not note the recurring error in this submission. For another contract, awarded for $16.6 million in fiscal year 2016 with a planned subcontracting total of about $5.9 million, the March 2016 ISR listed a small business goal of 693 percent (the goal in the approved subcontracting plan was 69.3 percent) of total subcontracting dollars. The contracting officer did not address the incorrect percentage. Moreover, in the September 2016 submission, the goal was reduced to 61.8 percent, which was less than the goal in the approved subcontracting plan. There was no explanation for the discrepancies in either submission.

GSA: For one GSA Public Building Service contract, which was awarded in fiscal year 2018 for $7.5 million, we found discrepancies between the goals listed in multiple accepted ISRs and the approved subcontracting plan. This contract involved janitorial services performed at two locations. Each location had a different approved small business goal—96 percent and 87 percent of total subcontracting dollars. However, the contractor reported only one small business goal in the three ISRs submitted for September 2018, March 2019, and September 2019, and this reported goal varied from 89 to 97 percent in the three ISRs.
According to a GSA official, the contractor submitted one ISR in each reporting period to convey the combined progress toward meeting its subcontracting goals for both locations, but the small business goal the contractor reported in each ISR did not accurately reflect the combined goals for both locations. The GSA official told us the combined goal the contractor should have reported for this contract was about 91 percent. According to the GSA official, these submissions contained data entry errors by the contractor, perhaps due to the contractor not knowing how to properly report its subcontracting data. For one GSA Federal Acquisition Service contract awarded in fiscal year 2017 for $3.6 million, we found a discrepancy between the small business goal reported in multiple ISR submissions—5 percent of total subcontracting dollars—and the 25 percent goal in the approved subcontracting plan, and we notified the agency of the discrepancy. However, none of these submissions included an explanation for the discrepancy, and the agency’s reviewing official accepted the submissions without addressing the conflicting information.

We also found one instance involving unclear oversight responsibilities among the 26 contracts we reviewed. We were unable to determine which agency actively monitored one DLA contract, which was awarded in fiscal year 2017 for $23.3 million. According to DLA staff, DCMA is responsible for monitoring, evaluating, and documenting performance of the contractor for the associated small business subcontracting plan. However, DCMA officials responded that DLA is the entity that should be conducting oversight of the subcontracting plan. When oversight responsibility for contracts involving two agencies is unclear, it is unlikely that the contractors’ compliance with their subcontracting plans is being properly monitored.
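The GSA two-location case turns on a simple piece of arithmetic: a combined goal across locations is a dollar-weighted average of the location goals, not a figure that can vary from report to report. A sketch, assuming hypothetical planned subcontracting dollars per location (the 96 and 87 percent goals come from the example above; GSA stated only that the correct combined goal was about 91 percent):

```python
# Dollar-weighted combined small business goal across locations.
# The percentage goals come from the GSA example; the planned
# subcontracting dollar amounts per location are hypothetical.

def combined_goal_pct(locations):
    """locations: list of (planned_subcontracting_dollars, sb_goal_pct)."""
    total = sum(dollars for dollars, _ in locations)
    weighted = sum(dollars * pct for dollars, pct in locations)
    return weighted / total

# Location A: 96 percent goal; Location B: 87 percent goal.
locations = [(450_000, 96.0), (550_000, 87.0)]
print(f"combined goal: about {combined_goal_pct(locations):.1f} percent")
```

With fixed location goals and weights, the combined figure is a single constant; a reported goal that drifts between 89 and 97 percent across ISRs, as in the example above, cannot reflect one consistent set of approved goals.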
According to agency officials, several factors contributed to contracting officers accepting subcontracting reports with erroneous information. For example, as previously stated, agency officials told us that contracting officers’ large workloads and focus on the award process (rather than on contract administration) can contribute to not always treating subcontracting plans as material parts of contracts and, thus, not conducting related due diligence after the contract award. GSA officials also noted that contracting officers may not have read or understood FAR requirements for oversight of contracts.

Contractors Reported They Met or Were Meeting Their Small Business Subcontracting Goal

For 16 of the 26 contracts we reviewed with a subcontracting plan, contractors reported that they met their small business subcontracting goal or were meeting the goal in situations where the contract had not yet ended. For the remaining 10 contracts, three ended without the contractor meeting the small business goal, five were not meeting the small business goal but had not yet ended, and two had limited documentation available, so we were unable to determine whether the goal was met.

For the three contracts that ended without the contractor meeting the small business goal, two had documentation that included a rationale for why the goal was not met. For one NASA contract, the contracting officer documented in a memorandum a decision that there were no longer any subcontracting possibilities. The other instance involved a GSA Federal Acquisition Service contract, in which the assessing official documented in the final Compliance Performance Assessment Report that the low goal achievement was due to the nature of the automotive manufacturing industry.
For one Navy contract, we could not identify a rationale for why the small business subcontracting goal was not met, and the agency could not provide documentation explaining why. The FAR requires contracting officers to assess liquidated damages against a contractor if the contracting officer determines the contractor failed to make a good-faith effort to comply with the subcontracting plan. However, a contractor’s failure to meet its subcontracting plan goals does not, in and of itself, constitute a failure to make a good-faith effort. Of the three contracts we reviewed that did not meet their small business subcontracting goal, we found no instances in which a contracting officer pursued liquidated damages or other actions against a contractor. As previously mentioned, two of these three contracts had a documented rationale for not meeting the small business subcontracting goal.

Agency officials told us that contracting officers rely on Compliance Performance Assessment Reports or other performance assessment measures to rate a contractor’s performance relative to its subcontracting goals. Officials from three of the four agencies also told us that a contractor’s past performance could affect its future ability to obtain government contracts, which can incentivize contractors to take steps to meet their subcontracting goals.

SBA Conducts Training and Reviews for Its Subcontracting Program, but Has Very Limited Documentation of Recent Reviews

SBA provides training to federal agencies’ contracting officers and contractors to assist in complying with small business subcontracting plan requirements. As part of its Small Business Subcontracting Program, SBA conducts certain reviews to assess the overall effectiveness of small business subcontracting, including compliance reviews that are designed to assess contractor compliance with small business subcontracting plans.
However, SBA could provide only limited documentation on compliance reviews it conducted from fiscal years 2016 through 2018, and limited information on compliance reviews conducted in fiscal year 2019.

SBA Provides Training to Agencies and Conducts Certain Reviews of Its Small Business Subcontracting Program

Beginning in 2017, SBA made available annual training for contracting officers to assist them in reviewing subcontracting plans, including training related to pre- and post-award subcontracting activities. SBA also provides training to contractors, which provides them with information on meeting subcontracting plan requirements. If a prime contractor receives a less than satisfactory rating on a compliance review, the prime contractor must attend a mandatory training to address the issues found in the initial rating.

According to SBA officials, the agency also has been developing new electronic-based training to coincide with new compliance review processes. According to the officials, the training is intended to educate prime contractors with a subcontracting plan, and federal agencies awarding contracts with a subcontracting plan, on how to comply with post-award subcontract program requirements. SBA plans to make this training available in July 2020 in an electronic format that will provide information and require the participant to answer a series of questions to ensure they comprehend and retain the information.

In addition to providing training, SBA’s CMRs conduct reviews related to SBA’s Small Business Subcontracting Program. In particular, SBA’s Standard Operating Procedure (SOP) 60 03 6, which was effective from December 4, 2006, through July 17, 2018, identified CMR responsibilities and included guidance for conducting reviews related to the Small Business Subcontracting Program.
According to this SOP, CMRs were to conduct different types of reviews:

In Performance Reviews (also referred to as desk reviews), CMRs were to review ISRs and SSRs that contractors submitted to determine which large business contractors in their portfolios they should visit, and what type of compliance review would be most effective.

In Small Business Program Compliance Reviews (compliance reviews), CMRs were to evaluate a contractor’s compliance with subcontracting program procedures and goals in a contractor’s small business subcontracting plan. CMRs also were to conduct follow-up compliance reviews on areas found deficient during a compliance review or previous follow-up review.

SOP 60 03 6 also described some orientation or outreach activities as reviews. In Subcontracting Orientation and Assistance Reviews, CMRs were to visit a large business contractor’s facility or telephone the contractor to introduce them to the Small Business Subcontracting Program and provide an overview of the roles and responsibilities of a prime contractor. According to SBA, the agency conducted 417 of these reviews from fiscal years 2016–2018.

According to SBA, the agency’s CMRs conducted hundreds of various reviews in fiscal years 2016 through 2018, including a total of 118 compliance reviews during that period (see table 4).

SBA staff said SBA also conducts surveillance reviews to evaluate the overall effectiveness of an agency procurement center’s small business program by reviewing contract files and procedures. According to SBA documentation, these reviews allow SBA to recommend changes to improve small business participation at procurement centers. A surveillance review also examines the procurement center’s subcontracting program.
SBA staff examine subcontracting files to determine if procurement center staff routinely perform subcontracting plan reviews, route the subcontracting plans to the PCR for review during the contract award process, incorporate approved subcontracting plans into contracts, and ensure that prime contractors submit the subcontracting plan ISRs into eSRS. For example, in a 2019 surveillance review (for which we obtained a copy), SBA found that the center that conducted the procurements did not have a subcontracting plan in the file for two contracts and that the subcontracting plan was not sent to the appropriate SBA Area Director for four contracts.

In July 2018, SBA issued a new SOP entitled Subcontracting Assistance Program Post Award, which revised SBA’s compliance review process. According to SBA officials and a high-level outline SBA provided, SBA intends to have the following three phases for the new review processes that will implement the new SOP:

1. Subcontract Reporting Compliance – In this phase, CMRs are to review and rate a prime contractor’s compliance with subcontracting reporting requirements (that is, the contractor’s ISR and SSR reporting requirements). According to SBA officials, SBA also intends to inform contract awarding and administering agencies of their findings.

2. Subcontracting Plan Goal Attainment Compliance – In this phase, CMRs are to review whether a prime contractor has met or is on track to meet the goals listed in the subcontracting plan.

3. Subcontract Regulation Compliance – In this phase, CMRs are to review the prime contractor’s actions in adhering to all the elements in the subcontracting plan and meeting subcontracting plan goals, among other related actions.

According to SBA officials, the new compliance review process is intended to standardize compliance reviews based on the new SOP.
SBA developed a broad outline of the three-phase compliance review process and, to implement this process, developed a CMR portfolio tracking document, in the form of a spreadsheet, and a draft compliance review guidance document, both of which SBA is currently using for the first phase of the process. However, SBA officials told us they could not provide detailed procedures for implementing the second and third phases and that they continue to refine the compliance review spreadsheet in conjunction with the compliance review guidance. As of mid-March 2020, they stated that they intend to complete phase 2 guidance by July 30, 2020, and phase 3 guidance by October 30, 2020.

SBA Has Very Limited Documentation of Fiscal Year 2016–2018 Compliance Reviews and Documentation for 2019 Is Not Clear

SBA provided almost no documentation and could not provide requested information on the compliance reviews its CMRs conducted in fiscal years 2016–2018. SBA could not provide basic information such as the list of contractors reviewed, the specific types of compliance reviews (such as reviews conducted individually or jointly with another agency), which agencies may have assisted in the reviews (in the case of any joint reviews), and contractor ratings resulting from the reviews. SBA could provide only one CMR compliance review and two follow-up compliance reviews for this time frame, and all three were conducted in fiscal year 2017. The one CMR compliance review SBA provided included general observations from the review, specific findings, follow-up actions required, best practices for the contractor, and the rating provided to the contractor. The follow-up compliance reviews from fiscal year 2017 identified steps that contractors took to address deficiencies found in the initial compliance review and steps to enhance their subcontracting programs.
According to SBA officials, the agency’s CMRs conducted 680 compliance reviews in fiscal year 2019, and SBA was able to provide some documentation related to these reviews. To conduct these reviews, SBA officials explained that they selected about 4,000 prime contracts from FPDS-NG with individual subcontracting plans that ended in fiscal year 2019 or later. From these approximately 4,000 contracts, SBA officials told us that CMRs randomly selected 680 for review during fiscal year 2019. The CMRs assessed the selected sample of contracts against the first phase of the new compliance review process—the extent to which contractors complied with their reporting requirements.

In our review of the documentation SBA provided, we could not clearly identify how many reviews SBA conducted. For example, the summary information from the reviews was not documented or maintained in a single document but was spread across multiple spreadsheets with some inconsistencies, making it difficult to determine how reviews were counted. Additionally, one spreadsheet contained a summary tab for many contracts, but a count of the unique contracts did not add up to 680. Other spreadsheets did not have a summary tab and contained information on the reviewed contracts in tabs organized by contractor.

According to its latest SOP, SBA conducts compliance reviews to determine whether prime contractors that are not small businesses complied with their post-award subcontracting responsibilities outlined in the subcontracting plan, to ensure small business subcontracts are being properly awarded and reported. However, based on our review of the limited documentation provided, SBA lacks specific guidance in its SOP on how CMRs should maintain information for the compliance reviews they conduct. SBA has draft guidance on the new compliance review process, including some specific information regarding what CMRs are to record as part of a compliance review.
However, SBA does not have clearly documented and maintained records on the first phase of these compliance reviews.

Conclusions

Requirements for small business subcontracting plans in certain contracts enhance opportunities for small businesses to participate in federal contracting. However, weaknesses in selected agencies’ oversight of subcontracting plans—such as not following all procedures and not reviewing contractor submissions for errors or omissions—can reduce those opportunities and limit agencies’ knowledge about the extent to which contractors fulfill obligations to small businesses. The frequency with which issues arose in our sample suggests agencies can do more to improve oversight.

Contracts for which agencies used checklists or memorandums to document the PCR review process generally demonstrated compliance with the requirement to provide the opportunity for a PCR review. Taking steps to ensure that contracting officers provide PCRs the opportunity to review contracts with subcontracting plans would help agencies identify subcontracting opportunities and benefit from suggestions for increasing small business participation. In turn, such efforts could help agencies achieve their small business subcontracting goals.

Similarly, improved monitoring of submitted contractor reports on subcontracting activities would identify errors in the submissions and increase agencies’ ability to assess contractor performance. Without complete and accurate information on a contractor’s subcontracting goals, agencies cannot adequately assess a contractor’s performance in meeting its subcontracting plan responsibilities. Given the many responsibilities of contracting officers, steps to ensure that contractor report submissions on meeting subcontracting goals are accurate would assist agencies’ oversight efforts.

SBA also has opportunities to significantly enhance oversight related to its subcontracting program.
It lacks documentation for almost all compliance reviews conducted in three of the four fiscal years from 2016 through 2019, has not fully implemented revisions to the compliance review process, and has not yet developed procedures for ensuring clear and consistent records of all compliance reviews are documented and maintained. By having clear and consistent documentation for compliance reviews and maintaining those records, SBA would better position itself to track contractor compliance for contracts it reviews and would be able to use this information to inform subsequent reviews. Additionally, contracting agencies would be able to leverage the information from SBA for their own reviews of contractor performance and subcontracting plans. Recommendations for Agency Action We are making a total of 10 recommendations to five agencies (three to DLA, one to GSA, two to NASA, three to Navy, and one to SBA): The Director of DLA should include a step for the opportunity for PCR review of the proposed contract and subcontracting plan in agency procedures and memorandums, and develop a mechanism for documenting whether the opportunity for PCR review was provided. (Recommendation 1) The Secretary of the Navy should include a step for the opportunity for PCR review of the proposed contract and subcontracting plan in agency procedures and memorandums, and develop a mechanism for documenting whether the opportunity for PCR review was provided. (Recommendation 2) The Director of DLA should take steps to fulfill the requirement that contracting officers ensure that subcontracting reports are submitted by contractors in a timely manner. For example, the agency could require contracting officers to verify that prior reports were submitted when reviewing current submissions. (Recommendation 3) The NASA Administrator should take steps to fulfill the requirement that contracting officers ensure that subcontracting reports are submitted by contractors in a timely manner. 
For example, the agency could require contracting officers to verify that prior reports were submitted when reviewing current submissions. (Recommendation 4) The Secretary of the Navy should take steps to fulfill the requirement that contracting officers ensure that subcontracting reports are submitted by contractors in a timely manner. For example, the agency could require contracting officers to verify that prior reports were submitted when reviewing current submissions. (Recommendation 5) The Director of DLA should take steps to ensure contracting officers compare subcontracting goals in contractor report submissions to goals in the approved subcontracting plan and address any discrepancies. (Recommendation 6) The Administrator of GSA should take steps to ensure contracting officers compare subcontracting goals in contractor report submissions to goals in the approved subcontracting plan and address any discrepancies. (Recommendation 7) The NASA Administrator should take steps to ensure contracting officers compare subcontracting goals in contractor report submissions to goals in the approved subcontracting plan and address any discrepancies. (Recommendation 8) The Secretary of the Navy should take steps to ensure contracting officers compare subcontracting goals in contractor report submissions to goals in the approved subcontracting plan and address any discrepancies. (Recommendation 9) The SBA Administrator should ensure Commercial Market Representatives clearly and consistently document compliance reviews and maintain these records. (Recommendation 10) Agency Comments and Our Evaluation We provided a draft of this report to DOD, GSA, NASA, and SBA for review and comment. DOD provided a written response, reproduced in appendix II, in which it concurred with our recommendations.
DOD described steps that DLA and Navy intend to take to address the recommendations, including actions to remind contracting officers or to provide additional guidance related to giving the PCR an opportunity to review the proposed contract and subcontracting plan. DOD also described actions that DLA and Navy intend to take to remind contracting officers of the requirement to ensure that subcontracting reports are submitted in a timely manner and to remind contracting officers to compare subcontracting goals in contractor report submissions to goals in the approved subcontracting plan and address any discrepancies. GSA provided a written response, reproduced in appendix III, in which it concurred with our recommendation. NASA provided a written response, reproduced in appendix IV, in which it concurred with our recommendations. NASA described steps it intends to take, such as requiring procurement offices to monitor contracting officer reviews of contractor report submissions and comparisons of subcontracting goals for consistency with the subcontracting plan. NASA also provided technical comments on the draft report that we incorporated where appropriate. SBA provided a written response, reproduced in appendix V, in which the agency partially concurred with our recommendation. SBA also asked us to consider rewording a few statements that, in its view, appeared for the first time in the draft report. In the draft report we sent to SBA, we provided additional information about how we could not clearly identify how many reviews the CMRs conducted. SBA stated in its written response that it has comprehensive documents and records for fiscal year 2019 compliance reviews and that, while its CMRs maintain a separate workbook of spreadsheets for the reviews they conduct, the agency maintains a summary document that combines the compliance reviews performed collectively by its CMRs.
During our audit and as part of its written response to our draft report, SBA did not provide a summary document that showed all reviews conducted by its CMRs for fiscal year 2019. SBA also acknowledged in its written response that it could not provide requested documentation for compliance reviews conducted during fiscal years 2016 through 2018. SBA stated it has developed detailed procedures for maintaining consistent records for compliance reviews and that while CMRs are using these procedures currently, the agency intends to finalize the procedures on May 29, 2020 to ensure that SBA continues to fully document its compliance reviews. Based on the documentation we reviewed and analyzed during our audit, we maintain that SBA does not have clearly documented and maintained records of compliance reviews and should clearly and consistently document its compliance reviews and maintain these records. We will review any additional documentation of records of compliance reviews when SBA provides it in response to this recommendation. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to appropriate congressional committees and members, the Secretary of DOD, the Administrator of GSA, the Administrator of NASA, the Administrator of SBA, and other interested parties. This report will also be available at no charge on our website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-8678 or shearw@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. 
Appendix I: Objectives, Scope, and Methodology Our objectives in this report were to examine (1) the extent to which select agencies conduct oversight related to small business subcontracting plans in the pre-award phase of the federal contracting process; (2) the extent to which select agencies conduct oversight of such subcontracting plans in the post-award phase; and (3) steps the Small Business Administration (SBA) has taken to encourage agencies to conduct oversight activities related to small business subcontracting plans. To address the first two objectives, we reviewed the Federal Acquisition Regulation (FAR) and agency-specific procedures. We also reviewed requirements for contractor submissions on subcontracting activity related to subcontracting plans, and corresponding agency oversight requirements for the submissions. We reviewed documentation on agency training for contracting officers related to subcontracting plans and requirements. We judgmentally selected two military agencies—the Defense Logistics Agency (DLA) and the Department of the Navy (Navy)—and two civilian agencies—the General Services Administration (GSA) and the National Aeronautics and Space Administration (NASA)—to review based on our analysis of Federal Procurement Data System-Next Generation (FPDS-NG) data and other factors. More specifically, we selected the agencies because they (1) included a mix of military and civilian agencies, (2) had relatively high dollar amounts of federal contracts awarded in fiscal years 2016–2018, and (3) included a range of performance related to subcontracting based on SBA’s annual procurement scorecard. We also reviewed documentation for a nongeneralizable sample of 32 contracts—eight per agency—awarded in fiscal years 2016–2018 across the four agencies. We randomly selected these 32 contracts from a set of contracts that met several criteria.
Specifically, the criteria were contracts with dollar amounts above $1.5 million, that had a mix of subcontracting plans (individual, commercial, and comprehensive) or reasons for not including subcontracting plans in a contract (such as no subcontracting possibilities for the contract or the contract not requiring a subcontracting plan), and a mix of their current status at the time of our selection (completed or active). We selected contracts as follows: We first randomly selected six contracts per agency (total of 24) that had a small business subcontracting plan at the time of award. To do this, we used a random number generator for the universe of contracts meeting the above criteria and selected contracts in the order of the random number generator, but skipped a contract if it was too similar to already-selected contracts (for example, same type of subcontracting plan or similar dollar amount). We then selected another set of contracts—two per agency (total of eight)—that seemed to meet criteria for requiring small business subcontracting plans, such as exceeding the dollar threshold, but were coded in FPDS-NG as not having a plan in place. We also obtained reports on contractor submissions on small business subcontracting activity, where applicable, and agency reviews of the submissions from the Electronic Subcontracting Reporting System (eSRS). Specifically, we searched eSRS for any contractor-submitted individual subcontracting reports (ISR) or summary subcontract reports (SSR), where applicable, for each contract with a subcontracting plan and reviewed the reports along with agency contracting officer comments, approvals, or rejections related to the reports. If we were unable to locate any ISRs or SSRs in eSRS, we asked the procuring agency to provide copies of the reports. 
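The random selection procedure described above (walk the universe in a randomly generated order, skipping any contract too similar to one already selected) can be sketched as follows. The record fields and the similarity test below are illustrative assumptions, not GAO's actual criteria.

```python
import random

def select_distinct_sample(universe, n, too_similar):
    """Select n records by walking the universe in random order,
    skipping any record too similar to one already selected.

    `too_similar(a, b)` encodes the pairwise comparison (for example,
    same type of subcontracting plan or a similar dollar amount).
    """
    order = list(universe)
    random.shuffle(order)  # stands in for the random number generator
    selected = []
    for record in order:
        if len(selected) == n:
            break
        if any(too_similar(record, kept) for kept in selected):
            continue
        selected.append(record)
    return selected
```

In GAO's application, a step like this was run once per agency to pick six contracts with subcontracting plans; the second set of eight contracts (those coded in FPDS-NG as lacking a plan) was drawn separately.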
We also requested agency documentation for any actions contracting officers took, if applicable, for each contract where the contractor had not met the small business subcontracting goal. We also interviewed officials from each agency about their efforts related to oversight of small business subcontracting plans and these contractor submissions. We assessed the reliability of FPDS-NG data by reviewing available documentation and prior GAO data reliability assessments and by electronically testing for missing data, outliers, and inconsistent coding. We found the data to be reliable for the purposes of selecting agencies and contracts to review. We assessed the reliability of eSRS by reviewing available documentation and verifying information with agencies. We found the information in eSRS to be reliable for purposes of assessing the extent to which agencies conduct oversight related to contractor submission reports in the system. To address the third objective, we reviewed documentation on several types of SBA reviews, including compliance reviews, related to contractor compliance with and agencies’ oversight of subcontracting plans. Specifically, we reviewed documentation on reviews SBA conducted related to its subcontracting program during fiscal years 2016–2019. We also reviewed SBA’s standard operating procedures for the subcontracting program, documentation on processes implementing the new procedures, and documentation on SBA training programs for the small business subcontracting program. We interviewed SBA officials regarding steps the agency takes to encourage agency oversight of subcontracting plans. For all the objectives, we reviewed relevant federal laws and regulations and reviewed previous GAO reports and reports from the Department of Defense Office of Inspector General (DOD OIG). 
We also interviewed officials from the DOD OIG to obtain an understanding of their work on DOD’s oversight of subcontracting plans at selected DOD components and command centers. We conducted this performance audit from January 2019 to May 2020 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Comments from the Department of Defense Appendix III: Comments from the General Services Administration Appendix IV: Comments from the National Aeronautics and Space Administration Appendix V: Comments from the Small Business Administration Appendix VI: GAO Contact and Staff Acknowledgments GAO Contact: Staff Acknowledgments: In addition to the contact named above, Andrew Pauline (Assistant Director), Tarek Mahmassani (Analyst in Charge), Suellen Foth, Jonathan Harmatz, Julia Kennon, Jill Lacey, Yola Lewis, John McGrail, Marc Molino, and Barbara Roesmann made key contributions to this report.
Why GAO Did This Study Certain federal contracts must have a small business subcontracting plan if subcontracting opportunities exist. But recent Department of Defense Inspector General reports raised concerns about agency oversight of subcontracting requirements. GAO was asked to review oversight of subcontracting plans. Among its objectives, this report discusses (1) the extent to which selected agencies (DLA, GSA, NASA, and Navy) oversee small business subcontracting plans, and (2) how SBA encourages agency compliance with subcontracting plan requirements. GAO reviewed data and documentation for a non-generalizable sample of 32 federal contracts (including 26 contracts with a subcontracting plan) at four agencies, selected to include contracts over $1.5 million at both civilian and military agencies awarded in fiscal years 2016–2018. GAO also reviewed the Federal Acquisition Regulation, SBA and selected agency documentation, and interviewed agency officials. What GAO Found GAO found selected agencies did not consistently follow all required procedures for oversight of small business subcontracting plans, both before and after contracts were awarded. GAO reviewed 26 contracts with a subcontracting plan at four agencies—Defense Logistics Agency (DLA), General Services Administration (GSA), National Aeronautics and Space Administration (NASA), and the Department of the Navy (Navy). For about half of the 26 contracts, agencies could not demonstrate that procedures for Procurement Center Representative (PCR) reviews were followed. These representatives may review small business subcontracting plans and provide recommendations for improving small business participation. When an agency is awarding a contract that includes a subcontracting plan, contracting officers are required to notify these representatives of the opportunity to review the proposed contract. 
Without taking steps to ensure these opportunities are provided, agencies may not receive and benefit from suggestions for increasing small business participation. For 14 of the 26 contracts, contracting officers did not ensure contractors submitted required subcontracting reports. After a contract is awarded, contracting officers must review reports contractors submit that describe their progress towards meeting approved small business subcontracting goals. In some cases, contracting officers accepted reports with subcontracting goals different from those in the approved subcontracting plans, with no documentation explaining the difference. Without complete and accurate information about a contractor's subcontracting goals, an agency cannot adequately assess a contractor's performance in meeting its subcontracting plan responsibilities. The Small Business Administration (SBA) encourages agency compliance with small business subcontracting plan requirements by providing training to contracting officers and contractors, and by conducting reviews. For instance, SBA Commercial Market Representatives conduct compliance reviews to evaluate a large prime contractor's compliance with subcontracting program procedures and goal achievement. However, SBA could not provide documentation or information on almost all compliance reviews conducted in fiscal years 2016–2018. SBA has developed new procedures for conducting compliance reviews, but as of mid-March 2020, had yet to fully implement them. SBA has conducted fiscal year 2019 compliance reviews that reflect a first phase of their new procedures. SBA has draft guidance on the new compliance review process, including some specific information regarding what Commercial Market Representatives are to record as part of the compliance review. SBA has begun to conduct compliance reviews in accordance with the guidance, but does not have clearly documented and maintained records for the first phase of these reviews. 
Without consistent, clear documentation and records that will be maintained going forward, SBA's ability to track contractor compliance and agency oversight efforts will be limited. What GAO Recommends GAO is making 10 recommendations for ensuring procedures for PCR reviews are followed, contractor subcontracting reports are monitored and reviewed for accuracy, and SBA compliance reviews are clearly documented and maintained. DLA, GSA, NASA, and Navy concurred with our recommendations. SBA partially concurred with our recommendation. GAO maintains that its recommendation is warranted.
gao_GAO-20-173
Background Democracy Assistance Program Areas The U.S. government supports various types of democracy assistance, which State and USAID categorize under their democracy, human rights, and governance portfolios. State and USAID use the Updated Foreign Assistance Standardized Program Structure and Definitions to categorize democracy assistance activities in six program areas: rule of law, good governance, political competition and consensus building, civil society, independent media and free flow of information, and human rights. Table 1 shows these six program areas and their elements. State and USAID Entities Providing U.S. Democracy Assistance State bureaus and offices—in particular, DRL and INL—and USAID provide funding for democracy assistance. State. State’s democracy assistance is provided by DRL, INL, and other State bureaus and offices. DRL. As the U.S. government’s primary foreign policy entity advocating for democracy globally, DRL funds programs in every region of the world to promote human rights, democracy, and transparent and accountable governance. INL. INL provides funding for programs that combat crime and narcotics trafficking, including democracy assistance to promote the rule of law, combat corruption, and promote good governance. Other bureaus and offices. Other State bureaus and offices, such as the regional bureaus and the Bureau of International Organization Affairs, provide democracy assistance related to their geographic or functional areas. USAID. As the lead U.S. government agency for international development, USAID considers democracy, human rights, and governance to be central to its core mission. USAID missions overseas play a primary role in providing democracy assistance, and the regional bureaus in Washington, D.C., provide oversight of this assistance. 
USAID’s Bureau for Democracy, Conflict, and Humanitarian Assistance, headquartered in Washington, D.C., consists of several offices, including two that support the bureau’s mission to promote democratic and resilient societies: the Center of Excellence on Democracy, Human Rights, and Governance and the Office of Transition Initiatives. State and USAID Allocated Over $8.8 Billion for Democracy Assistance in Many of the Same Countries in Fiscal Years 2015- 2018 State and USAID Allocated Over $8.8 Billion in Democracy Assistance, with USAID Providing 67 Percent State and USAID allocated a total of more than $8.8 billion for democracy assistance in fiscal years 2015 through 2018. State allocated 33 percent of this amount—a total of $2.9 billion, averaging approximately $727 million annually—to DRL, INL, and other bureaus to provide democracy assistance. USAID allocated the remaining 67 percent—$5.9 billion, averaging approximately $1.5 billion annually. Figure 1 shows the total amounts that State and USAID allocated for democracy assistance in fiscal years 2015 through 2018. DRL, INL, and USAID Directed Democracy Assistance Allocations to Many of the Same Countries, although Program Areas Varied In fiscal years 2015 through 2018, DRL, INL, and USAID directed allocations for democracy assistance to many of the same countries, although the program areas they supported varied. DRL, INL, and USAID directed democracy assistance allocations to a combined total of 100 countries, including 33 countries where all three entities provided such assistance (see fig. 2). DRL directed democracy assistance allocations to 67 countries; INL, to 45 countries; and USAID, to 84 countries. State officials said that, because the countries have serious democracy-related challenges, the agencies providing this assistance may address these challenges from different perspectives and with different objectives. 
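As a quick consistency check, the agency shares and annual averages reported above follow directly from the four-year totals. The sketch below uses the rounded figures from this report, so the tolerances are loose by design.

```python
# Rounded figures from this report, fiscal years 2015 through 2018.
state = 2.9e9   # State allocation for democracy assistance
usaid = 5.9e9   # USAID allocation for democracy assistance
total = state + usaid

assert total >= 8.8e9                    # "more than $8.8 billion" in total
assert round(100 * state / total) == 33  # State's share of the total
assert round(100 * usaid / total) == 67  # USAID's share of the total
assert abs(state / 4 - 727e6) < 5e6      # roughly $727 million per year
assert abs(usaid / 4 - 1.5e9) < 30e6     # roughly $1.5 billion per year
```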
Although DRL and USAID directed democracy assistance allocations to many of the same countries, DRL focused a greater percentage of its funding in countries where citizens enjoy fewer democratic freedoms. DRL directed 70 percent of its allocations for democracy assistance in fiscal years 2015 through 2018 to less democratic countries—those rated as “not free” by Freedom House’s 2018 “Freedom in the World” survey. In contrast, USAID directed about half of its allocations for democracy assistance during this period to “not free” countries. Similarly, although DRL, INL, and USAID directed their allocations for democracy assistance to many of the same countries, the entities concentrated funding in different program areas. In fiscal years 2017 and 2018, DRL and INL directed the largest percentages of democracy assistance allocations to encouraging human rights and promoting the rule of law, respectively, while USAID directed about half of its democracy assistance allocations to promoting good governance (see fig. 3). As figure 3 shows: DRL directed 36 percent (about $203 million) of democracy assistance allocations to projects supporting human rights, 19 percent (about $107 million) to projects supporting civil society, and 14 percent (about $76.4 million) to projects supporting independent media and free flow of information. DRL directed the smallest amounts to projects supporting rule of law, political competition and consensus building, and good governance. INL directed more than 98 percent (about $580 million) of democracy assistance allocations to promote the rule of law. USAID directed 49 percent (about $1.5 billion) of its democracy assistance allocations to projects promoting good governance and 19 percent (about $600 million) to projects supporting civil society. 
USAID distributed the remainder across the other four democracy assistance program areas, allocating the smallest amounts to projects supporting human rights and independent media and free flow of information. State’s DRL and INL and USAID Have Defined Roles for Democracy Assistance and Funded Projects in Selected Countries Accordingly State’s DRL and INL and USAID have strategies that define their roles in democracy assistance, and their funding obligations in the selected countries in fiscal years 2015 through 2018 generally aligned with these roles. DRL and INL strategies identify various program areas as aspects of the bureaus’ respective roles in providing democracy assistance. For example, DRL supports a range of democracy program areas and emphasizes human rights, while INL focuses on the rule of law. In fiscal years 2015 through 2018, DRL’s and INL’s funding obligations for democracy assistance in the countries we selected for our review—the DRC, Nigeria, Tunisia, and Ukraine—generally aligned with the roles defined in bureau strategies and described by bureau officials. USAID plays the leading role in U.S. development assistance overseas, including democracy assistance, according to its 2013 strategy on democracy, human rights, and governance. We found that USAID’s democracy assistance in the four selected countries generally aligned with its strategic goal of supporting democratic change to achieve broader development goals. 
DRL and INL Have Defined Roles for Democracy Assistance and Obligated Funding Accordingly DRL’s Role Includes Human Rights and Other Democracy Assistance Program Areas, While INL Focuses on Rule of Law DRL’s 2018 bureau strategy states that the bureau’s mission is to “champion American ideals as a means of combating the spread of authoritarianism, terrorism, and subversion of sovereign democracies.” According to the strategy, DRL works through diplomatic channels to support democracy-related areas; support human rights, labor, and democracy defenders; and publish reports on human rights in all countries, among other activities. In a 2015 report to Congress, State noted that 90 percent of DRL’s programs operate in restrictive or challenging environments. Although the report did not define restrictive or challenging environments, DRL officials said that the bureau’s assistance focuses on building civil society and supporting diplomatic initiatives to improve governance, particularly in repressive and closed societies. According to the officials, the bureau supports democracy and human rights globally, including in areas where such programs face threats from host governments, and is not constrained to working in countries with a U.S. presence. DRL designs and manages all of its democracy assistance projects from Washington, D.C. DRL officials noted that DRL projects typically receive total allocations of at least $500,000, have a duration of 1 to 5 years, and are implemented by U.S.-based or other large organizations. INL’s most recent bureau strategy states that INL is at the forefront of responding to international security challenges and that INL promotes U.S. leadership by advancing rule-of-law principles. INL officials said that the bureau conducts democracy assistance work to support its provision of security assistance and that INL programming helps governments provide accountability to their citizens.
According to agency officials, INL’s funding for democracy assistance generally supports host-country governments through bilateral agreements and is not always project based. INL programs can be managed by INL staff at State’s headquarters in Washington, D.C., and at embassies overseas. INL’s democracy assistance is implemented by its own staff, other U.S. agencies, and U.S.-based or international organizations. DRL and INL officials told us that they ensure consistency between their democracy-related strategic goals and the goals in overarching strategies, such as the government-wide National Security Strategy and State and USAID’s Joint Strategic Plan. The most recent Joint Strategic Plan notes that State and USAID will work to “counter instability, transnational crime, and violence that threaten U.S. interests by strengthening citizen-responsive governance, security, democracy, human rights, and the rule of law.” The Joint Strategic Plan also notes that State and USAID will focus on places that pose the greatest threat to U.S. interests. DRL’s and INL’s Obligations in Selected Countries Reflected Their Defined Roles DRL’s and INL’s total obligations of funding for democracy assistance in the four selected countries for fiscal years 2015 through 2018 generally reflected their defined roles. DRL’s obligations for projects in the selected countries generally reflected the bureau’s focus on supporting democracy and human rights, as defined in DRL’s bureau strategy and described by officials. Overall, the majority of DRL obligations in the four selected countries focused on projects supporting civil society, human rights, and independent media and the free flow of information. In fiscal years 2015 through 2018, 60 to 100 percent of project-level funding was dedicated to these program areas. DRL obligations for democracy assistance projects in the selected countries averaged more than $800,000 for 2 years. 
Consistent with its stated role of protecting human rights globally, DRL obligated at least a quarter of this funding in three of the four countries to projects that supported human rights (see fig. 4). Similarly, INL’s democracy assistance obligations in the selected countries during the same period generally reflected the bureau’s focus on supporting the rule of law, as defined in its bureau strategy and described by officials. Data for the four countries show that INL obligated $3.2 million in the DRC, $12.5 million in Nigeria, $3.9 million in Tunisia, and $5 million in Ukraine for democracy assistance for fiscal years 2015 through 2018. In Nigeria, Tunisia, and Ukraine, 100 percent of INL’s democracy-related obligations supported the rule of law. In the DRC, 92 percent of INL’s democracy-related obligations supported the rule of law and the remaining 8 percent supported good governance. (See apps. III through VI for more information on State’s democracy assistance in the DRC, Nigeria, Tunisia, and Ukraine, respectively.) USAID’s Democracy Assistance Strategies and Projects in Selected Countries Generally Reflected the Agency’s Development Focus USAID Provides Democracy Assistance Primarily through Overseas Missions to Support Country Development The 2013 USAID Strategy on Democracy, Human Rights and Governance states that USAID plays the leading role in U.S. development assistance overseas, including democracy assistance. The strategy explains that support for democracy, human rights, and governance is essential to achieving the agency’s broader social and economic development goals, which, USAID has noted, contribute to self-reliance. USAID officials told us that, to support democracy from a development perspective, USAID generally funds multiyear, multimillion-dollar democracy assistance projects that are implemented by U.S.-based or international organizations.
USAID’s democracy strategy also identifies the roles of various USAID units involved in implementing U.S. democracy assistance. For example, according to the strategy, USAID missions are to play the primary role in implementing it by both designing and managing democracy-focused programs, while USAID’s Center of Excellence on Democracy, Human Rights, and Governance is to provide technical and other assistance to the missions and manage some mechanisms to support programs, among other things. Further, the strategy clarifies relationships in terms of leading and supporting units in areas of democracy assistance and identifies roles of various other agencies, including State. USAID’s Democracy Assistance in Selected Countries Generally Aligned with Its Defined Role In all four selected countries, USAID’s democracy assistance, as reflected in country-level strategies and projects, generally aligned with the Joint Strategic Plan and with the agency’s democracy strategy to support democratic change in order to achieve broader development goals. We found that the USAID country development cooperation strategy for each of the selected countries articulated democracy assistance objectives to support the country’s overall development. According to USAID officials, these strategies guide the type of democracy assistance provided in a particular country on the basis of the country’s needs and generally focus on supporting sectoral change, such as through policy reform or institution building. For example, the 2016 USAID strategy for Tunisia included a development objective to promote social cohesion through democratic consolidation. Objectives for selected USAID projects in the four countries also reflected the agency’s goal of effecting long-term, development-based change through democracy assistance. 
For instance, consistent with its country strategy for Tunisia, USAID obligated nearly $22 million in fiscal years 2017 and 2018 for a project designed to improve the relationship between Tunisians and their civic and government institutions, in part by enhancing the responsiveness of government institutions (see fig. 5). Other characteristics of USAID’s democracy assistance projects in the selected countries also reflected the agency’s defined role. In each of the four countries, a democracy office in USAID’s mission in the country managed democracy assistance, consistent with USAID’s democracy strategy. Overall, USAID’s democracy assistance projects in the selected countries demonstrated that the agency implemented multiyear, multimillion-dollar projects, consistent with what USAID officials told us was needed to support long-term development. Data for the four countries showed that USAID’s total obligations for democracy assistance ranged from $49.5 million to $126 million for fiscal years 2015 through 2018 (see fig. 6). Per project, USAID’s obligations in the four countries averaged about $7.2 million, with each project’s implementation period averaging just over 4 years. USAID’s implementing partners were, for the most part, U.S.-based or international organizations. Although USAID democracy assistance obligations in the selected countries covered a variety of program areas, they concentrated on political competition and consensus building, good governance, and civil society. As figure 6 shows, USAID’s obligations for rule-of-law and human rights projects made up less than a quarter of total project-level funding obligated in each country in fiscal years 2015 through 2018. See appendixes III through VI for more information about USAID’s democracy assistance projects in the DRC, Nigeria, Tunisia, and Ukraine, respectively. 
State and USAID Coordinate on Democracy Assistance in Various Ways, but Embassy Officials Reported Gaps in Information about DRL Projects State and USAID use various mechanisms to coordinate democracy assistance at the headquarters level, such as interagency roundtable discussions of budget allocations. Officials at embassies in the selected countries described interagency coordination efforts at the country level, such as working groups, and provided examples of how coordination helped avoid duplication and improved the effectiveness of democracy assistance efforts. Despite the use of these mechanisms and other steps that DRL takes to coordinate with embassies, embassy officials in all four selected countries reported having incomplete information about DRL’s projects in those countries. State and USAID Coordinate Democracy Assistance through Various Mechanisms at Headquarters and Overseas State and USAID use various mechanisms, including budget roundtables and proposal review panels, to coordinate democracy assistance between the agencies at headquarters. For instance, State’s Office of U.S. Foreign Assistance Resources manages the annual allocations budget process, which facilitates interagency coordination through structured conversations about democracy assistance and various bureaus’ priorities, according to State and USAID officials. These annual democracy discussions also enable the participants to identify policy changes and share lessons learned. USAID officials added that USAID’s Center of Excellence on Democracy, Human Rights, and Governance serves as the technical lead on democracy assistance issues during these interagency budget discussions. INL officials told us that they take the lead in democracy assistance discussions concerning security sector assistance. In addition, some of State’s regional bureaus, including the Bureaus of Near Eastern Affairs and of European and Eurasian Affairs, maintain assistance coordination offices to coordinate U.S. 
foreign assistance to countries in those regions, including through strategic planning and budget formulation processes. These offices, based in Washington, D.C., coordinate with embassies, other State bureaus, and USAID at various stages of strategic planning and budget formulation. For example, country coordinators from the Bureau of Near Eastern Affairs’ assistance coordination office are to lead roundtable discussions at least annually to share information among U.S. government agencies and contribute to improved planning and implementation. Some U.S. embassies in these regions, including those in Tunisia and Ukraine, have an assistance coordination unit to coordinate all U.S. foreign assistance in the country, and these units work with State regional bureaus’ Washington, D.C.–based offices. Further, when considering potential democracy assistance projects, DRL coordinates with State and USAID counterparts both in Washington, D.C., and overseas through its proposal review process. DRL proposal review panels include representatives from USAID, State regional bureaus, and other agencies that may have relevant expertise. State and USAID also use various interagency mechanisms to coordinate democracy assistance at the country level within embassies overseas. Examples of coordination mechanisms include the following. Working groups. According to State and USAID officials in the four selected countries, interagency working groups facilitate formal discussions about democracy assistance projects and provide opportunities to identify areas where agencies’ projects might complement or duplicate one another. Working groups at each embassy vary in number, theme, and meeting frequency, depending on the country context and U.S. government priorities. For example, the U.S. embassy in Ukraine has about 10 democracy-related working groups, focused on themes including elections, anticorruption, human rights, and the justice sector. At the U.S.
embassies in the DRC and Nigeria, agency officials told us they convened working groups on elections, given the U.S. government’s interest in the countries’ recent and upcoming elections. In Tunisia, where USAID reestablished a presence in 2012 and a mission in June 2019, an interagency development assistance working group that addresses democracy issues, among other things, began meeting in September 2018, according to agency officials. The officials also said that a security assistance working group coordinated assistance related to rule-of-law issues. These working groups meet bimonthly, monthly, or weekly, according to officials. State and USAID officials generally said that they found the working groups were effective in helping to coordinate democracy assistance. Assistance coordination units. U.S. embassies in Tunisia and Ukraine have assistance coordination units designed to coordinate U.S. foreign assistance, including democracy assistance. Unlike the assistance coordination unit in Ukraine, State’s Foreign Assistance Unit in Tunisia managed democracy assistance projects in fiscal years 2015 through 2018 while also coordinating other State and USAID assistance in the country (see app. V for more information about democracy assistance in Tunisia during this period). According to a State document, the assistance coordinator at an embassy in Europe or Eurasia can be a “touch point” for agencies at the embassy to work together on assistance issues and communicate effectively with Washington. The assistance coordination units in both Tunisia and Ukraine have established mechanisms to coordinate U.S. foreign assistance within the embassies, according to officials. For instance, the foreign assistance unit in Tunisia formalized a process by which the ambassador’s office approves all State and USAID assistance projects in the country.
Additionally, in both countries, the assistance coordinator participates in working groups and is involved in the design or review of all assistance projects, according to officials. USAID and State officials in these countries expressed varying opinions about the units’ usefulness for coordination. State and USAID officials in the selected countries provided the following additional examples of coordination that, according to the officials, helped avoid duplication and improved the effectiveness of democracy assistance efforts. According to State and USAID, informal coordination and information sharing among agency officials at the embassies occur during regularly scheduled meetings, such as weekly meetings of USAID staff, State’s political unit staff, or embassy senior staff, and through daily interaction. State has developed a tool kit to help embassies with strategic planning, including the development of action plans to document units’ roles. For example, agencies at the U.S. embassy in Nigeria created an action plan that identified the various units supporting assistance for elections to help prevent duplication of efforts. (Fig. 7 shows citizens participating in Nigeria’s elections.) State and USAID officials at embassies described other coordination of the agencies’ democracy assistance. For example, in Nigeria, USAID does not fund any rule-of-law projects because, according to USAID officials, they and INL officials have decided on a clear division of labor: INL manages all rule-of-law projects, including judicial strengthening, judicial reforms, and anticorruption, while USAID manages all other aspects of democracy assistance. In Ukraine, USAID and INL developed a concept paper to guide their collaboration to help the government establish the country’s High Anti-Corruption Court. The concept paper outlined the key roles of USAID and INL and designed complementary projects based on each agency’s strengths.
For example, USAID was responsible for developing training programs for judges and INL was responsible for vetting potential judges. Officials told us that this concept paper helped agencies maximize the potential impact of their limited resources. State Officials at Embassies Reported Gaps in Information about DRL’s Democracy Assistance in Selected Countries Although DRL takes steps to coordinate with embassies in countries where it funds democracy assistance projects, embassy officials in all four selected countries reported having incomplete information about DRL’s projects in those countries. DRL has various practices and processes to coordinate with embassies. For example, DRL established a standard operating procedure to clarify methods for coordination between itself and State’s regional bureaus, which includes defined steps on engaging with embassies. The procedure outlines steps in DRL’s annual planning process, during which priorities and program strategies are set; in the process for submitting proposed projects and awards; and in the process for proposal review panels. DRL officials in Washington, D.C., also pointed to various methods that they use to coordinate with embassies. Such methods include distributing a description of DRL’s projects by country on an annual basis, training new Foreign Service officers in DRL’s funding mechanisms and awards process, and providing contact information for DRL staff at headquarters to embassy personnel. Additionally, DRL officials said that embassy officials have at least four opportunities to provide official input during the approximately 18-month process of designing and awarding a project. According to DRL officials, embassy personnel designated as human rights officers serve as DRL’s overseas points of contact. 
However, at the embassies in all four countries, human rights officers or other officials from the political units told us that they were not actively engaged in DRL’s projects and generally lacked updated information about DRL projects in their countries, including descriptions and funding amounts. Embassy officials also said that, although DRL sought their input during the process of selecting proposed democracy assistance projects, DRL did not subsequently communicate its final selection of projects. DRL officials said that sharing complete information can be difficult because of the sensitivity of some DRL projects and the need to safeguard the identities of some local partners. In addition, DRL officials said that managing projects from Washington, D.C., instead of overseas may affect their ability to collaborate with embassy officials. DRL officials commented that embassy personnel’s colocation facilitates their collaborating with one another and that the political and other State officers who may function as in-country DRL points of contact have numerous other duties, with limited capacity to focus on DRL projects. DRL officials also said that frequent turnover among State personnel makes it challenging to maintain embassy officials’ awareness of DRL’s in-country projects. In addition, they said that DRL is sometimes unaware of democracy assistance projects that embassies may be funding. Moreover, we found that existing information-sharing mechanisms, including data systems and strategies, do not consistently address embassy personnel’s information gaps. DRL and other State officials said that embassy personnel may not be able to use State’s data systems to retrieve information on projects, partly because some personnel lack sufficient training or the permissions to access project data in certain systems. Furthermore, the Office of Management and Budget has found the quality of State’s publicly reported data to be low in terms of completeness and accuracy. 
State’s Office of Inspector General found that, while State has standardized and centralized its foreign assistance budget planning and request processes, State’s inability to provide authoritative foreign assistance financial information is a program management challenge. In addition, the integrated country strategies for the four selected countries for fiscal years 2015 through 2018 do not mention DRL’s projects or general goals when discussing U.S. government democracy-related objectives for each country. Overseas officials’ lack of complete information about DRL’s projects could lead to potential duplication in U.S. democracy assistance and may inhibit State’s efforts to coordinate with other agencies, implementing partners, and other donors. We have previously found that it is helpful when participants in a collaborative effort have full knowledge about the relevant resources available and have the appropriate knowledge, skills, and abilities to contribute. Conclusions Since 2015, Congress has made available to agencies at least $2 billion annually for democracy assistance programs abroad. State’s DRL and INL, as well as USAID, have articulated their roles in democracy assistance through strategies that include specific democracy-related goals. Although State and USAID use various mechanisms to coordinate democracy assistance at headquarters and in the field, we found that relevant embassy officials in each of the four selected countries did not have ready access to information about DRL projects. As a result, embassy officials lacked an understanding of the full scope of U.S. democracy assistance in their countries. Ensuring access to information about DRL projects could improve State’s overseas coordination, both internally and with other U.S. agencies, implementing partners, and donors, as well as State’s ability to achieve important democracy assistance goals. 
Recommendation for Executive Action The Secretary of State should direct the Assistant Secretary of State for Democracy, Human Rights, and Labor to develop a mechanism to facilitate the active sharing of information about democracy assistance projects between DRL and relevant staff at embassies. Agency Comments We provided a draft of this report to State, USAID, and NED for their review and comment. In its written comments, reproduced in appendix VII, State agreed with our recommendation and noted steps that it plans to take to implement it. USAID also provided written comments, which are reproduced in appendix VIII, as well as technical comments that we incorporated as appropriate. NED officials reviewed our draft but did not provide any comments. We are sending copies of this report to the appropriate congressional committees and to the Secretary of State, the Administrator of USAID, the President of NED, and other interested parties. In addition, the report is available at no charge on the GAO website at https://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3149 or gootnickd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IX. Appendix I: National Endowment for Democracy’s Democracy Assistance The National Endowment for Democracy (NED) is a private, nonprofit, nongovernmental organization based in Washington, D.C., whose stated purpose is to encourage democracy throughout the world by supporting nongovernmental organizations and actors that are working for democratic goals. NED is funded through a grant from the Department of State (State) pursuant to an annual congressional appropriation and receives additional funding from State to support congressionally directed or discretionary programs. 
In addition to providing grants to local organizations in other countries, NED provides grants to its four affiliated organizations known as the “core institutes”: the Center for International Private Enterprise, the International Republican Institute, the National Democratic Institute, and the Solidarity Center. NED Allocated More Than $500 Million for Democracy Assistance Projects in 100 Countries in Fiscal Years 2015-2018 In fiscal years 2015 through 2018, NED allocated a total of about $541 million for democracy assistance projects in 100 countries— approximately $114 million in fiscal year 2015, $141 million in fiscal year 2016, $144 million in fiscal year 2017, and $142 million in fiscal year 2018. During this period, NED directed 55 percent of its funding for local organizations to groups in countries rated “not free” by Freedom House’s 2018 “Freedom in the World” survey. Figure 8 shows the countries where NED allocated funding for democracy assistance in fiscal years 2015 through 2018. As figure 9 shows, in fiscal years 2017 and 2018, NED directed funding to projects in six democracy assistance program areas. NED allocated the largest amount during that period—about $100 million (36 percent)—to promote good governance and allocated the next largest amount—about $72.5 million (26 percent)—to promote political competition and consensus building. NED allocated the smallest amount—about $8.5 million (3 percent)—to support the rule of law. 
NED’s Strategy Identifies NED’s Role as Providing Democracy Assistance to Local Organizations According to NED’s 2012 strategy, the organization focuses on providing grants to grassroots activists in response to local needs and “seeks out newly-emerging groups in both democratizing and authoritarian countries around the world, helping to empower the most effective grassroots activists.” The strategy notes that NED is guided by its founding legislation, which established NED as an independent institution whose mission is to promote democracy through grants to nongovernmental organizations. These include the core institutes, whose key roles NED’s strategy also defines. NED officials said that the organization focuses on building the institutional capacity of local civil society organizations, which contributes to building democratic societies. Such capacity building can include institutional support, including funding for basic functions such as operational costs, and management assistance such as budget training, which other donors tend not to provide. NED officials commented that the organization is “demand driven” and responds to funding requests for projects proposed by nongovernmental organizations. According to NED documents, it supports approximately 1,500 organizations in 90 countries with grants averaging $50,000. NED officials noted other elements that distinguish NED’s support from that of U.S. agencies, including continuity in its staff composition; the significant linguistic ability of its staff, enabling close ties with local organizations in other countries; and the relative stability of its mission and priorities, which facilitates long-term engagement on countries’ democratic issues. In addition, NED’s nongovernmental status allows it to provide democracy assistance in difficult environments, where, according to NED officials, staff of local grantees face risks as a result of their work in challenging the government and status quo. 
The officials said that such risks range from detention and harassment to being killed or “disappeared.” NED’s Democracy Assistance Projects in Selected Countries Generally Aligned with Its Defined Role NED’s democracy assistance projects in the countries we selected for our review—the Democratic Republic of the Congo (DRC), Nigeria, Tunisia, and Ukraine—generally aligned with the organization’s strategy of supporting democracy by providing funds for indigenous civil society organizations. (Fig. 10 shows examples of NED’s democracy assistance projects in the DRC and Ukraine.) Consistent with NED’s strategy of providing grants to grassroots activists, data for projects in the four selected countries show that NED provided grants primarily to local civil society organizations in addition to its core institutes. NED grants to civil society organizations in the selected countries averaged approximately $46,000 for year-long projects, and NED renewed support for nearly all organizations on an annual basis, reflecting the long-term support that officials said was necessary to strengthen civil society. Grantees in the DRC told us that NED worked closely with local partners to identify needs and design programs and that this helped to build the partners’ organizational capacity. Consistent with NED’s mission to support democracy in general, grantees in the selected countries worked on projects that included all democracy assistance program areas. NED primarily supported projects to promote political competition and consensus building and good governance, obligating an average of 40 percent and 36 percent of its funding for these two program areas, respectively, across the four countries (see fig. 11). NED’s country priorities are articulated in country summaries that it updates each year on the basis of each country’s political context and democratic challenges. 
For example, NED’s 2018 Tunisia summary included a priority of supporting civil society to promote effective, democratic governance and advocate for the transparency and accountability of public institutions. The NED project that we reviewed in Tunisia aimed to “enhance the capacity of civil society to advocate for transparency, good governance, and promote social accountability in the six southern governorates of Tunisia.” See appendixes III through VI for more information about NED’s democracy assistance projects in the DRC, Nigeria, Tunisia, and Ukraine. NED Documents and Officials Described Coordination and Collaboration Practices NED’s annual planning documents, which generally outline objectives for each country where NED provides funding, include some statements about coordination and collaboration with other donors. NED officials said that NED senior leaders typically have standing relationships with senior leaders at State’s Bureau of Democracy, Human Rights, and Labor (DRL) because NED receives funding from DRL for particular countries. NED officials also told us that the U.S. Agency for International Development (USAID) has reached out to them to strategically coordinate, although NED does not receive funds from USAID. NED officials added that coordination and collaboration on specific countries largely occur between officials at the regional and country levels. For example, officials said that NED consults with counterparts at State and USAID in the regional bureaus and DRL and shares its list of grantees with DRL. Furthermore, officials said that NED is aware of funding that its grantees receive from State or USAID, because NED obtains information from potential grantees about other funding sources during the grant proposal process. According to NED, State, and USAID officials, additional collaboration occurs between headquarters and overseas officials. 
NED, which does not have staff overseas, manages its grants in Washington, D.C., but collaborates with overseas counterparts. NED, State, and USAID officials told us that when NED officials conduct site visits, which occur at least annually, they often meet with State and USAID officials at embassies to share information. Appendix II: Objectives, Scope, and Methodology This report examines (1) the Department of State’s (State) and the U.S. Agency for International Development’s (USAID) allocations of funding for democracy assistance in fiscal years 2015 through 2018, (2) State’s and USAID’s roles in providing democracy assistance and the extent to which their projects in selected countries during this period were consistent with defined roles, and (3) the extent to which State and USAID coordinate in providing democracy assistance. In addition, appendix I provides information about the National Endowment for Democracy’s (NED) democracy assistance allocations, role, and coordination. To examine aspects of State’s, USAID’s, and NED’s democracy assistance roles and coordination efforts, we selected a nongeneralizable sample of four countries—the Democratic Republic of the Congo (DRC), Nigeria, Tunisia, and Ukraine—where the three entities provided democracy assistance in fiscal years 2015 through 2018. In selecting these countries as illustrative examples, we considered the following factors, among others: (1) countries to which all three entities allocated or obligated democracy assistance funding in fiscal years 2015 through 2017, the most recent period for which data were available; (2) democracy assistance allocation amounts that were in the top quartile for each entity for the same period for USAID and State, according to data from State’s Office of U.S. 
Foreign Assistance, and for NED; (3) democracy assistance obligation amounts that were in the top half of such obligations for the same period for State’s Bureau of Democracy, Human Rights, and Labor (DRL), according to data from USAID’s Foreign Aid Explorer; (4) democracy assistance obligations data that confirmed the presence of the Bureau of International Narcotics and Law Enforcement Affairs (INL) in those countries for the same period; (5) geographical dispersion of the countries; (6) ratings that countries received from Freedom House’s 2018 “Freedom in the World” survey; and (7) suggestions from State, USAID, and NED officials as well as others with relevant expertise. We excluded countries where we had recently reviewed U.S. democracy assistance for other reports. We traveled to the DRC in May 2019, where we met with officials from State, USAID, nongovernmental organizations that had implemented U.S.-funded democracy assistance projects, and the United Kingdom’s Department for International Development regarding its coordination with U.S. agencies. We conducted interviews with State and USAID officials who were knowledgeable about democracy assistance, interviewing officials at the embassies in Nigeria, Tunisia, and Ukraine by phone and interviewing officials in Washington, D.C., in person. To examine allocations for democracy assistance, we analyzed State, USAID, and NED global democracy assistance data for fiscal years 2015 through 2018, including the total allocations, the allocations for specific program areas, and the countries for which funding was allocated. We used the six democracy assistance program areas included in USAID’s and State’s Updated Foreign Assistance Standardized Program Structure and Definitions—rule of law, good governance, political competition and consensus building, civil society, independent media and free flow of information, and human rights.
Because NED categorizes its democracy assistance using its own program definitions, we cross-referenced NED’s democracy assistance awards with the U.S. government’s six program areas, using information that NED provided. We assessed the reliability of State’s, USAID’s, and NED’s data and determined the data to be sufficiently reliable for reporting the total amount of democracy assistance allocated by each entity as well as the program areas and countries for which the funding was allocated. We also compared funding allocations with the country’s ratings in Freedom House’s 2018 “Freedom in the World” survey to determine the amount of funding that the entities allocated to countries rated as free, partly free, or not free. To identify State’s, USAID’s, and NED’s roles in providing democracy assistance and the extent to which their projects in the selected countries were consistent with their defined roles, we reviewed documents, assessed information on democracy assistance projects, and interviewed officials. While State’s regional bureaus provide some democracy assistance, we focused on State’s democracy assistance roles and projects for DRL and INL, both of which State has identified as leading the provision of its democracy assistance. See appendixes III through VI for regional bureaus’ obligations data for the four selected countries. We reviewed State’s and USAID’s Joint Strategic Plan, FY2018-2022; functional bureau strategies for DRL and INL; the 2013 USAID Strategy on Democracy, Human Rights, and Governance; integrated country strategies and country development cooperation strategies for the four selected countries; and NED’s 2012 Strategy Document. We also reviewed other documents that described aspects of State’s and USAID’s roles, including agencies’ democracy-related reports to Congress and standard operating procedures. 
We assessed these documents for clarity of roles and responsibilities, based on leading collaboration practices that we have previously identified, and we reviewed agencies’ overarching goals related to democracy and governance. We reviewed information about State, USAID, and NED democracy assistance projects in the DRC, Nigeria, Tunisia, and Ukraine. We reviewed project documents, including award agreements, for selected State, USAID, and NED projects that supported a variety of democracy program areas, among other factors. We assessed State, USAID, and NED obligations data for projects that they funded in the selected countries in fiscal years 2015 through 2018. We determined that these data were sufficiently reliable for reporting the total obligations, by entity and country, for fiscal years 2015 through 2018 and for reporting types of democracy assistance. We also determined these data to be sufficiently reliable for reporting the number of active projects during this time period; the average award amount or average annualized award amount; and the average duration of projects for DRL, USAID, and NED. Because INL’s democracy assistance generally supports the host government through bilateral agreements and is not always project based, we were unable to report these project characteristics for INL. In prior work, we have recommended that State identify and address factors that affect the reliability of INL’s democracy assistance data. State reported that as of July 2019, INL was continuing efforts to improve data reliability; however, because of missing data, we determined that data for INL democracy assistance in the selected countries were unreliable for reporting project characteristics. 
We also determined that because of missing data, such as project end dates, the data from State’s Bureau of African Affairs were unreliable for reporting some project information for Nigeria; however, the bureau’s project data for the DRC were sufficiently reliable for reporting on democracy assistance and obligations in that country. In addition, we determined the data from the Bureaus of European and Eurasian Affairs and Near Eastern Affairs were sufficiently reliable for reporting on State’s democracy assistance obligations and projects in Ukraine and Tunisia. We interviewed officials in Washington, D.C., and in the four selected countries regarding State’s, USAID’s, and NED’s roles defined in strategies and other documents and regarding democracy assistance projects. In addition, we interviewed agency officials regarding democracy assistance program areas; implementation methods (such as managing programs from headquarters or overseas as well as types of implementing partners); and other features, including typical scale of project funding. To examine the extent to which the agencies coordinated their democracy assistance, we reviewed relevant documents, such as State’s and USAID’s standard operating procedures, to identify the agencies’ mechanisms and practices for coordinating democracy assistance. We drew on our prior work identifying key practices that can enhance and sustain collaboration at federal agencies. We interviewed officials in Washington, D.C., and in the four selected countries to describe any mechanisms that agencies use to coordinate democracy assistance. We conducted this performance audit from September 2018 to January 2020, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix III: U.S.-Funded Democracy Assistance in the Democratic Republic of the Congo, Fiscal Years 2015-2018 The Democratic Republic of the Congo (DRC) has experienced more than 2 decades of violence and war, exacerbated by the failure of President Joseph Kabila to hold elections when his term ended in 2016. In this context, the U.S. government’s key policy priority was to encourage the DRC’s government to support credible and peaceful elections in December 2018, according to the Department of State (State). U.S. government democracy assistance projects aimed to build the capacity of the DRC government, political parties, civil society, armed forces, civilian law enforcement, and justice systems to support credible elections and improve governance. (Fig. 12 shows examples of U.S.-funded government assistance to support the DRC’s 2018 elections.) Other U.S. government democracy-related priorities included promoting the rule of law and fighting corruption. The National Endowment for Democracy’s (NED) 2018 country summary for the DRC noted that NED should support DRC civil society’s ability to retain its independence and to continue advocating for a peaceful and democratic transition of power. The summary states that NED’s 2018 priorities for the DRC included supporting civil society’s engagement in elections and ability to promote freedom of information before, during, and after the elections. In fiscal years 2015 through 2018, State, the U.S. Agency for International Development (USAID), and NED obligated over $73 million for democracy assistance in the DRC. State’s Bureau of Democracy, Human Rights, and Labor obligated $5.5 million (8 percent) of this assistance, while State’s Bureau of International Narcotics and Law Enforcement Affairs obligated $3.2 million (4 percent).
State’s Bureau of African Affairs also obligated about $500,000, for one project, through the Africa Women Peace Security Initiative. USAID obligated the majority of U.S. democracy assistance—$54.7 million (74 percent). In addition, NED obligated $9.6 million (13 percent). Figure 13 shows State’s, USAID’s, and NED’s total obligations, by program area, in the DRC during this period. Table 2 shows characteristics of projects funded by State’s Bureau of African Affairs, DRL, USAID, and NED. Three of DRL’s five projects were implemented by organizations that also implemented USAID projects, and the Bureau of African Affairs’ project was implemented by an organization that also implemented USAID and DRL projects. Table 3 shows examples of democracy assistance projects funded by State, USAID, and NED in the DRC in fiscal years 2015 through 2018. Appendix IV: U.S.-Funded Democracy Assistance in Nigeria, Fiscal Years 2015-2018 While Nigeria has made important gains in democracy and institution building, those gains are fragile, according to the U.S. Department of State (State). The U.S. government’s recent priorities with regard to Nigeria have included helping to strengthen the country’s democratic governance. Challenges to democratic governance in Nigeria include widespread intercommunal violence, terrorism, poverty, and corruption. At the same time, Nigeria has a free press and a political environment that is largely committed to civilian leadership, and the 2015 elections resulted in the first peaceful transfer of power to an opposition party. In this context, the U.S. government’s goals include strengthening Nigerian democratic institutions, governance, and respect for human rights, such as by assisting Nigerians to conduct credible national elections in 2019. To achieve this goal, the U.S.
government’s objectives are to (1) strengthen good governance; (2) strengthen democratic institutions, including rule of law, respect for human rights, and transparency and accountability in government; and (3) reduce corruption at all levels of government. Similarly, the National Endowment for Democracy’s (NED) 2018 country summary for Nigeria notes the success of the country’s 2015 elections while also acknowledging challenges including corruption, economic stagnation, insecurity, and the political marginalization of minority groups. NED’s 2018 priorities in Nigeria were to expand political inclusion and strengthen rule of law by supporting NED’s core institutes and local organizations. In fiscal years 2015 through 2018, State, the U.S. Agency for International Development (USAID), and NED obligated nearly $95 million for democracy assistance projects in Nigeria. State’s Bureau of International Narcotics and Law Enforcement Affairs obligated $12.5 million (13 percent), while State’s Bureau of Democracy, Human Rights, and Labor obligated $5.4 million (6 percent). State’s Bureau of African Affairs also obligated $1.8 million for six projects. According to officials, the Bureau of African Affairs funded these projects through the Africa Regional Democracy Fund and the Trans-Sahara Counterterrorism Partnership program. USAID obligated the majority of U.S. democracy assistance— $66.6 million (70 percent). In addition, NED obligated $8.2 million (9 percent). Figure 14 shows State’s, USAID’s, and NED’s total obligations for democracy assistance, by program area, in Nigeria during this period. Table 4 shows characteristics of projects funded by the Bureau of African Affairs, DRL, USAID, and NED in Nigeria during fiscal years 2015 through 2018. Table 5 shows examples of democracy assistance projects funded by State, USAID, and NED in Nigeria during fiscal years 2015 through 2018. 
Appendix V: U.S.-Funded Democracy Assistance in Tunisia, Fiscal Years 2015-2018 Since its 2011 revolution, Tunisia has been on a steady path toward consolidating its democratic transition, but it still needs to establish critical institutions, advance human rights, counter corruption, and improve government transparency, according to the U.S. Department of State (State). In this context, the U.S. government’s goals include helping Tunisia consolidate and advance its democracy. To achieve this goal, the U.S. government’s objectives are to (1) assist Tunisian government institutions to become more transparent, accountable, and responsive to citizens; (2) help Tunisian citizens understand and exercise their rights and responsibilities in a democratic system; and (3) promote social cohesion through democratic consolidation. The National Endowment for Democracy’s (NED) 2018 country summary for Tunisia similarly notes the country’s democratic progress since the 2011 revolution and adds that Tunisian civil society has been developing quickly and freely and seeks to engage with elected officials as they continue to consolidate democracy. NED’s 2018 priorities in Tunisia were to (1) support civil society to promote effective, democratic governance and advocate for transparency and accountability; (2) encourage citizens to influence policymaking; (3) foster political inclusion of marginalized groups; and (4) enhance the role of independent media. In fiscal years 2015 through 2018, State, the U.S. Agency for International Development (USAID), and NED obligated over $90 million for democracy assistance projects in Tunisia. State’s Bureau of Near Eastern Affairs obligated $20.7 million (23 percent) of these funds; the Bureau of Democracy, Human Rights, and Labor obligated $9.1 million (10 percent); and the Bureau of International Narcotics and Law Enforcement Affairs obligated $3.9 million (4 percent). USAID obligated the majority of U.S.
democracy assistance—$49.5 million (54 percent). In addition, NED obligated $8.7 million (9 percent). Figure 15 shows State’s, USAID’s, and NED’s total obligations for democracy assistance, by program area, in Tunisia in fiscal years 2015 through 2018. State’s Bureau of Near Eastern Affairs provided the majority of its democracy assistance through the U.S.–Middle East Partnership Initiative, which generally aims to improve governance and economic opportunity. Many of the 11 projects funded by the bureau supported objectives that were similar to those typically supported by DRL, INL, and USAID projects, including promoting human rights, supporting anticorruption institutions, and strengthening political parties. The Bureau of Near Eastern Affairs’ Foreign Assistance Unit at the embassy managed these projects. Table 6 shows information on the characteristics of the projects funded by DRL, the Bureau of Near Eastern Affairs, USAID, and NED. Table 7 shows examples of democracy assistance projects funded by State, USAID, and NED in Tunisia during fiscal years 2015 through 2018. Appendix VI: U.S.-Funded Democracy Assistance in Ukraine, Fiscal Years 2015-2018 Ukraine’s various democratic challenges include overcoming the legacy of Soviet authoritarian rule, addressing mismanagement, and responding to Russian aggression, according to the Department of State (State). In this context, the U.S. government aims to support Ukraine’s democracy by helping the country combat corruption, advance justice reforms, bolster civil society, create responsive government, and encourage independent media. Overall, the U.S. government seeks to help Ukraine advance its political reforms with more transparent, responsive, and accountable governance, becoming less corrupt and more democratic. U.S. objectives to accomplish this goal include enhancing anticorruption and rule-of-law processes and improving governance processes and outcomes.
The National Endowment for Democracy’s (NED) 2018 country summary for Ukraine noted similar challenges to the country’s democracy—Russian aggression, corruption, and a government that is not responsive to its citizens. NED’s 2018 priorities in Ukraine included strengthening the capacity of civil society groups, promoting reconciliation, and fostering the development of new media. In fiscal years 2015 through 2018, State, the U.S. Agency for International Development (USAID), and NED obligated more than $170 million for democracy assistance projects in Ukraine. State’s Bureau of European and Eurasian Affairs obligated $16.7 million (10 percent) of this assistance; the Bureau of Democracy, Human Rights, and Labor obligated $9.6 million (6 percent); and the Bureau of International Narcotics and Law Enforcement Affairs obligated $5.0 million (3 percent). USAID obligated the majority of U.S. democracy assistance—$126 million (73 percent). In addition, NED obligated $16.3 million (9 percent). Figure 16 shows State’s, USAID’s, and NED’s total obligations for democracy assistance, by program area, in Ukraine during fiscal years 2015 through 2018. State’s public affairs unit at the embassy in Ukraine obligated funding for, and managed, all but one of the 613 democracy assistance projects supported by funds from the Bureau of European and Eurasian Affairs. State’s public affairs unit awarded the projects through funding mechanisms that were intended to support civil society and independent media and were specifically designed for locally based implementing organizations. Table 8 shows characteristics of democracy assistance projects funded by DRL, State’s Bureau of European and Eurasian Affairs, USAID, and NED. Table 9 shows examples of democracy assistance projects funded by State, USAID, and NED in Ukraine during fiscal years 2015 through 2018. Appendix VII: Comments from the Department of State Appendix VIII: Comments from the U.S.
Agency for International Development Appendix IX: GAO Contact and Staff Acknowledgements GAO Contact Staff Acknowledgements In addition to the contact named above, Mona Sehgal (Assistant Director), Farhanaz Kermalli (Analyst-in-Charge), Daniela Rudstein, Tom Zingale, Neil Doherty, Reid Lowe, and Alex Welsh made key contributions to this report. Justin Fisher and Sarah Veale provided technical assistance.
Why GAO Did This Study Congress made at least $2 billion available to agencies annually for democracy assistance abroad in fiscal years 2015 through 2018. State and USAID are the primary U.S. agencies funding democracy assistance. This assistance supports activities related to enhancing rule of law, good governance, political competition and consensus building, civil society, independent media, and human rights. Congress included a provision in the Joint Explanatory Statement accompanying the fiscal year 2015 Continuing Appropriations Act for GAO to review agencies' roles and responsibilities in promoting democracy abroad. This report examines (1) State's and USAID's democracy assistance allocations, (2) State's and USAID's roles in providing democracy assistance and the extent to which their projects in selected countries are consistent with their defined roles, and (3) how State and USAID coordinate on democracy assistance. GAO reviewed State and USAID data and documents for fiscal years 2015 through 2018 and interviewed officials in Washington, D.C., and in the DRC, Nigeria, Tunisia, and Ukraine. GAO selected these countries because they received relatively high amounts of democracy assistance funding from State and USAID, among other factors. What GAO Found The Department of State (State) and U.S. Agency for International Development (USAID) allocated more than $8.8 billion for democracy assistance in fiscal years 2015 through 2018. According to agency officials, language in the 2015 appropriations act permitted State and USAID to allocate less than the full amount directed to democracy programs by the act. State and USAID have defined roles for democracy assistance and have obligated funding for projects in selected countries accordingly. State has identified its Bureau of Democracy, Human Rights, and Labor (DRL) as the U.S.
lead for promoting democracy and protecting human rights abroad and has identified its Bureau of International Narcotics and Law Enforcement Affairs (INL) as the lead for promoting the rule of law. In fiscal years 2015 through 2018, DRL's and INL's obligated funding for democracy assistance in the countries GAO reviewed—the Democratic Republic of the Congo (DRC), Nigeria, Tunisia, and Ukraine—generally reflected their defined roles. For example, 24 to 77 percent of DRL's obligated funding in these countries supported human rights, and at least 90 percent of INL's obligated funding for democracy assistance in the countries supported the rule of law. USAID's democracy assistance strategy states that USAID has the leading role in U.S. development assistance. USAID's obligations for democracy assistance in the four countries supported multiyear, multimillion-dollar projects, consistent with what USAID officials told GAO was needed for long-term development. State and USAID coordinate on democracy assistance in various ways, but embassy officials reported gaps in information about DRL assistance. Examples of coordination mechanisms include budget allocation discussions at headquarters and working groups at embassies to help avoid project duplication. However, State officials in all four selected countries said they generally lacked information about DRL democracy assistance projects, including project descriptions and funding amounts. State's existing information-sharing mechanisms, including data systems and strategies, do not consistently address these gaps. Overseas officials' lack of complete information about DRL's projects may inhibit State's efforts to coordinate with other agencies, implementing partners, and other donors. What GAO Recommends GAO recommends that the Secretary of State direct DRL to develop a mechanism for the sharing of democracy assistance project information between DRL and relevant embassy staff. State concurred with GAO's recommendation.
Background Payment for Hospital Services under Medicare Under traditional Medicare, hospitals are paid for the inpatient and outpatient services they provide under two distinct payment systems. Inpatient stays, including services incurred after being admitted to the hospital, are paid under the IPPS. Under this system, Medicare pays hospitals a flat fee per beneficiary stay, set in advance, with different amounts generally based on the beneficiary’s condition. Payment rates are also influenced by hospital-specific factors such as the relative hourly wage in the area where the hospital is located, and whether the hospital qualifies for other case- or hospital-specific additional payments. Outpatient services, including services obtained through the emergency department or other services incurred without being admitted to the hospital, are paid under the outpatient prospective payment system. Under this system, Medicare pays hospitals a flat fee per service, set in advance, with different amounts for each type of service. As with the IPPS, payment rates are adjusted for geographic factors. Congress has established payment adjustments for certain hospitals under the IPPS by changing the qualifying criteria for IPPS payment categories, creating and extending exceptions to IPPS rules, or exempting certain types of hospitals from the IPPS. These adjustments may help ensure beneficiary access to care or help hospitals recruit and retain physicians and other medical professionals. MDH Designation Eligibility Criteria Created through the Omnibus Budget Reconciliation Act of 1989, the MDH designation is an example of how Congress can enhance payments to certain hospitals.
To qualify as an MDH, a hospital must demonstrate that it is: Medicare-dependent, defined as having at least 60 percent of its inpatient days or discharges attributable to Medicare beneficiaries; small, defined as having 100 or fewer beds; and rural, defined as being located in a rural area, though hospitals can also be eligible if they are located in a state without any rural areas. CMS regulations provide that hospitals can meet the requirement of demonstrating a 60 percent Medicare share of days or discharges using two of the three most recently settled cost reports, or using cost reports from 1987 or 1988. We refer to hospitals that meet this criterion using 1987 or 1988 cost report data as “legacy MDHs.” MDH Designation Payment Criteria and Payment Methodology Some, but not all, MDHs are eligible to receive additional payment each year if they meet the payment criterion. Specifically, MDHs are assigned a payment rate—known as the hospital-specific rate (HSR)—based on their historic reported inpatient operating costs, trended forward to adjust for inflation and other factors, from one of three years (1982, 1987, or 2002). If the payment based on the HSR is higher than what the MDH would have otherwise received under IPPS, the MDH receives an additional payment. In this case, the MDH additional payment is calculated as 75 percent of the difference between the HSR and the IPPS amount. If the IPPS amount is higher than the HSR, the MDH receives no additional payment. (See fig. 1.) Hospitals with an MDH designation are also eligible to receive other benefits. For example, MDHs are eligible for a separate additional payment if the hospital experiences at least a 5 percent decline in inpatient volume due to circumstances beyond its control. The MDH program does not provide for additional payments for outpatient services.
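The payment methodology described above reduces to a simple rule. A minimal sketch follows; the function name and dollar amounts are illustrative, not drawn from the report:

```python
def mdh_additional_payment(hsr_payment: float, ipps_payment: float) -> float:
    """Additional payment under the MDH program as described above:
    75 percent of the amount by which the hospital-specific rate (HSR)
    payment exceeds the IPPS payment; zero if the IPPS payment is higher."""
    return max(0.0, 0.75 * (hsr_payment - ipps_payment))

# Hypothetical MDH whose HSR-based payment exceeds its IPPS payment:
print(mdh_additional_payment(1_000_000, 800_000))  # 150000.0
# Hypothetical MDH whose IPPS payment is higher receives no additional payment:
print(mdh_additional_payment(700_000, 800_000))    # 0.0
```

By the same logic, the 100 percent differential that the report describes for sole community hospitals would replace the 0.75 factor with 1.0.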
The MDH Program Differs from Other Medicare Rural Hospital Payment Designations in Terms of Eligibility Criteria, Financial Benefit, Legislative Permanence, and Relative Size In addition to the MDH designation, four other rural hospital designations exist: (1) critical access hospitals (CAH), (2) sole community hospitals (SCH), (3) low-volume adjustment hospitals (LVA), and (4) rural referral centers (RRC). Our review of CMS documentation shows that the MDH payment designation differs from the other rural payment designations in terms of eligibility criteria, financial benefit, extent of legislative permanence, and size—that is, the number of hospitals receiving the designation. (For detailed information on the five rural payment designations, see app. II.) Eligibility Criteria. The MDH designation differs from the other designations in terms of eligibility criteria. As noted earlier, MDHs must have at least 60 percent of their inpatient days or discharges attributed to Medicare patients, must be small and, with few exceptions, rural. In contrast, both the SCH and CAH designations require hospitals to be remote rural hospitals (i.e., located a specified distance from the nearest hospital). Similarly, LVAs are generally required to be more than 15 miles from the nearest hospital. Rural hospital designations also differ in terms of eligibility criteria related to bed size. CAH-designated hospitals are required to have 25 inpatient beds or fewer, while MDHs must have 100 beds or fewer. RRCs must have at least 275 beds or meet other criteria, such as serving a high proportion of remote patients, among other things. Financial Benefit. The MDH designation has a relatively small financial benefit compared to most of the other rural hospital designations, and the benefit only applies to costs associated with inpatient services. 
MDHs generally can only receive 75 percent of the difference between payment based on their HSR and the payment they would have otherwise received based on the IPPS rate as an additional payment added to their IPPS rate payment. In contrast, the SCH and CAH designations have both inpatient and outpatient payment benefits. Hospitals with an SCH designation can receive an additional payment added to their IPPS rate payment equal to 100 percent of the difference between payment based on the HSR and what the hospital would otherwise receive as payment based on the IPPS rate, as well as a 7.1 percent addition to their outpatient payments. The CAH designation results in the highest financial benefit by generally providing 101 percent of the hospital’s reported costs in the current year for both inpatient and outpatient Medicare services. LVAs generally can receive up to 25 percent in additional payments, and while RRCs receive no direct financial benefit, they are exempt from certain requirements related to geographic reclassification (as are SCHs). Legislative Permanence. Unlike all but one other rural payment designation, the MDH program is a temporary program and must be extended periodically by Congress in order to continue. Historically, extensions by Congress have sometimes occurred after the program had expired, resulting in temporary lapses in payments to MDH-designated hospitals. The Bipartisan Budget Act of 2018 included a provision to extend the MDH program through fiscal year 2022. The only other designation that must be extended is the LVA designation. In 2010, the Patient Protection and Affordable Care Act temporarily expanded the LVA designation eligibility criteria to include hospitals with a higher volume of discharges and located closer to other hospitals than in previous years. These expanded eligibility criteria have been amended and extended through fiscal year 2022.
If Congress does not extend the expanded eligibility criteria beyond fiscal year 2022, the LVA designation will return to the narrower eligibility criteria that were in place prior to the Patient Protection and Affordable Care Act. Relative Size and Overlap. Of the 2,204 rural hospitals in fiscal year 2017, a relatively small share were MDHs. (See fig. 2.) In total, 138 hospitals, or 6.3 percent of those rural hospitals with at least one designation, were MDHs. In contrast, CAHs comprised the largest proportion of rural hospitals with a designation. In fiscal year 2017, 1,246 rural hospitals—or 56.5 percent of those rural hospitals with at least one designation—were CAHs. Of the five designations, three—CAHs, MDHs, and SCHs—are exclusive to each other, meaning a hospital can only have one of the three designations at any time. Hospitals designated as MDHs and SCHs may also be designated as LVAs, RRCs, or both. Approximately 75 percent of MDHs and 81 percent of SCHs had at least one concurrent designation in fiscal year 2017; in contrast, none of the CAHs received a secondary designation because CAHs are not eligible to receive other designations. Those MDHs with a concurrent designation consisted of 88 that had an LVA designation, 14 that had an RRC designation, and 2 that had both an LVA and RRC designation. (For detailed information on the five rural payment designations, including LVA and RRC eligibility and financial benefit, see app. II.) The Number of MDHs Declined over Time, As Did the Inpatient Share of Medicare Revenue and Profit Margins From fiscal years 2011 through 2017, the number of MDHs declined, as well as the number of MDHs that received an additional payment under the program. In addition, during this period MDHs varied on other operational and financial metrics, including the share of Medicare revenue coming from inpatient care, various measures of Medicare dependence, and profit margins.
From Fiscal Years 2011 through 2017, the Number of MDHs Declined by 28 Percent, and the Number of MDHs Receiving Additional Payments Decreased by 15 Percent Our analysis of CMS data shows that the number of MDHs declined from 193 to 138—a 28 percent decrease over the 7-year period from fiscal year 2011 through fiscal year 2017. (See fig. 3.) This decline can be due to a number of factors, including hospital closures, mergers, or changes in designation. For example, we previously reported that 16 MDHs closed between 2013 and 2017. Moreover, our review of Medicare Administrative Contractor documentation found that some MDHs became ineligible for the program due to no longer meeting eligibility criteria. In addition, the number of MDHs that received an additional annual payment also declined, from 92 MDHs in fiscal year 2011 to 78 MDHs in fiscal year 2017—a 15 percent decrease. Among MDHs that received an additional payment, the amount received and the share of the hospital’s total revenue this payment represented varied widely across the years, though the average amount generally increased over time. (See table 1.) For example, in fiscal year 2017, one hospital received around $1,000 in additional payment while another received almost $10.5 million. While the trend was not uniform among all MDHs, the median additional payment increased from about $695,000 in fiscal year 2011 to about $812,000 in fiscal year 2017. Our analysis of CMS data also shows that the average additional payment MDHs received ranged from less than 0.1 percent up to 8.7 percent of total facility revenue, with a fairly consistent average of 1.2 to 1.6 percent. (See table 2.) This underscores that the additional payment under the MDH program can be small relative to the overall revenue that the hospital receives. 
MDHs Varied over Time on Select Operational and Financial Metrics Our analysis of CMS data also shows that from fiscal years 2011 through 2017, MDHs varied on selected operational and financial metrics: the mix of Medicare revenue that came from inpatient versus outpatient care, various measures of Medicare dependence, and profit margins. Inpatient/Outpatient Mix On average, MDHs experienced a decline in the share of Medicare revenue that came from inpatient services. (See fig. 4.) In fiscal year 2011, around 66 percent of MDH Medicare revenue came from inpatient services compared to 58 percent in fiscal year 2017—a 13 percent decrease. This trend was slightly greater than that for all rural hospitals (an 11 percent decrease) and all hospitals (a 10 percent decrease). Measures of Medicare Dependence The trends across three measures of Medicare dependence varied for MDHs over time. Looking at the Medicare share of total revenue for MDHs, we found this share decreased when comparing fiscal years 2011 and 2017, from 25 to 22 percent. (See fig. 5.) In contrast, in terms of the number of inpatient days and discharges attributable to Medicare beneficiaries, we found these measures both increased slightly over time. Specifically, the median share of MDH inpatient days attributable to Medicare beneficiaries increased, although by less than a percentage point, and the median Medicare share of inpatient discharges increased by about 2 percentage points, when comparing fiscal years 2011 and 2017. (See figures 6 and 7.) To obtain additional context on the relationship between MDH eligibility criteria and the various measures of Medicare dependence, we ran regression models to identify the extent to which hospitals’ bed size and rural status were associated with the Medicare share of days, discharges, and total care revenue for all hospitals from fiscal years 2011 through 2017. 
We found that rural hospitals with fewer beds were associated with higher Medicare shares of inpatient days and discharges, holding all other factors constant. This indicates that by targeting smaller, rural hospitals in its eligibility criteria, the MDH program is targeting hospitals that are Medicare-dependent defined in terms of inpatient volume. At the same time, rural hospitals with fewer beds generally received a smaller share of their total care revenue from Medicare compared with other hospitals. This suggests that hospitals associated with high Medicare inpatient volume may not have relatively high shares of total care revenue coming from Medicare. For more technical detail on our regression analyses and findings, see appendix III. Profit Margins Our analysis of self-reported data from hospitals shows that Medicare profit margins and total facility profit margins declined for MDHs from fiscal year 2011 through 2017. (See table 3.) The degree to which Medicare margins declined for MDHs during this time period (6 percentage points) was greater than the degree to which they declined for rural hospitals (4 percentage points) and all hospitals (3 percentage points). The self-reported data show that unlike rural and all hospitals, MDHs were not profitable in 2017—meaning that the revenue they received from Medicare and other payers was less than their reported costs for providing services. Specifically, the total facility profit margin turned from positive to negative and dropped almost two percentage points between fiscal years 2011 and 2017. We also ran regression models to examine the relationship between all hospitals’ total profit margins and the various measures of Medicare dependence. 
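The type of regression described above can be illustrated with a small ordinary-least-squares sketch in pure Python: regress a hospital's Medicare share on bed size and a rural indicator. All data, variable names, and coefficients below are fabricated for illustration only; the report's actual model specifications are described in appendix III.

```python
def ols(X, y):
    """Solve the normal equations (X'X)b = X'y by Gauss-Jordan elimination."""
    k = len(X[0])
    A = [[sum(row[i] * row[j] for row in X) for j in range(k)] for i in range(k)]
    b = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(k)]
    for i in range(k):
        piv = A[i][i]  # no pivoting: adequate for this tiny, well-posed example
        A[i] = [v / piv for v in A[i]]
        b[i] /= piv
        for j in range(k):
            if j != i:
                f = A[j][i]
                A[j] = [vj - f * vi for vj, vi in zip(A[j], A[i])]
                b[j] -= f * b[i]
    return b  # [intercept, coefficient on beds, coefficient on rural]

# Noise-free synthetic data built from known coefficients, so OLS recovers
# them exactly: share = 0.40 - 0.001 * beds + 0.10 * rural
rows = [(beds, rural) for beds in (25, 50, 100, 200, 300) for rural in (0, 1)]
X = [[1.0, float(beds), float(rural)] for beds, rural in rows]
y = [0.40 - 0.001 * beds + 0.10 * rural for beds, rural in rows]
intercept, b_beds, b_rural = ols(X, y)
```

A negative coefficient on beds and a positive coefficient on rural would correspond to the direction of association the report describes for inpatient-volume measures; here those signs are built into the synthetic data rather than estimated from hospital records.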
We found that hospitals with a higher Medicare share of total-care revenue had lower total facility margins on average, holding all other factors constant; in contrast, there was no significant relationship between total facility margins and the inpatient volume-based measures of Medicare dependence. This indicates that a higher volume of inpatient services was not associated with lower profitability. Agency Comments We provided a draft of this report to the Department of Health and Human Services for comment. The Department of Health and Human Services provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Secretary of the Department of Health and Human Services. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or farbj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. Appendix I: Scope and Methodology This appendix explains the quantitative scope and methodology used to examine how the Medicare-dependent hospital (MDH) designation differs from the other Medicare rural hospital designations. This appendix also explains the scope and methodology used to describe changes in the number and selected metrics of MDHs and other hospital types, including those used for a regression analysis to provide information on the relationship between MDH program criteria and Medicare dependence. 
Differences between MDH and Other Designations To describe how the MDH designation differs from other rural hospital designations, we used CMS data—specifically, the Provider Specific File (PSF)—to identify the number of MDHs, critical access hospitals (CAH), sole community hospitals (SCH), rural low-volume adjustment hospitals (LVA), and rural referral centers (RRC) in fiscal year 2017. We then identified all rural hospitals without a designation in 2017 using the 2018 CMS Inpatient Prospective Payment System (IPPS) Impact File because those data are prepared in the middle of the year preceding the fiscal year. We define rural hospitals using the CMS MDH programmatic definition; that is, those hospitals that are not located in metropolitan statistical areas, as well as those hospitals that reclassified as rural for CMS payment purposes. We next identified the number of hospitals with each designation and the value of additional payments received under the rural designations that each hospital had in that year using data provided by each hospital through their Medicare Cost Report (MCR). The MCR is submitted to CMS by hospitals each fiscal year and contains information such as facility characteristics, utilization data, and costs to provide services to Medicare beneficiaries and all patients. Because CAHs are paid based on cost under a different payment system than the other hospitals, we did not have complete data to estimate what those hospitals would have been paid under the inpatient prospective payment system and thus could not identify the additional payments received by CAHs. In addition, RRCs only receive indirect payment benefits, and thus we could not calculate a comparable additional payment for that group of hospitals. For all analyses, we excluded hospitals within the Indian Health Service, as well as hospitals in Maryland and those outside of the remaining 49 states and the District of Columbia. 
We also excluded hospitals with reporting periods greater than 14 months or less than 10 months and those that reported zero or negative Medicare revenue.

Number of MDHs and Selected Metrics

To describe changes in the number and selected metrics of MDHs and other hospital types, we examined MCR data for fiscal years 2011 through 2017. To first identify the universe of MDHs, rural hospitals, and all acute care inpatient prospective payment system (IPPS) hospitals, we used the PSF and MCR for fiscal years 2011 through 2017, as well as CMS Impact Files for fiscal years 2012 through 2018. Then, we used the MCR to calculate the number of MDHs that received the MDH payment adjustment and the distribution of additional payments among MDHs in each year. Using those same data sources, we then calculated several metrics and examined trends for MDHs as compared to all rural hospitals and all hospitals overall. The first metric is the median proportion of total Medicare payments (referred to as revenue) each hospital group received from providing inpatient and outpatient care to Medicare beneficiaries. The second metric is hospitals' median profit margins, a profitability measure calculated as the amount of revenue the hospital received minus reported costs, divided by the amount of revenue received. We calculated profit margins specific to Medicare revenue and costs (Medicare profit margins) but also for revenue and costs beyond Medicare (total facility profit margins), including payments for treating non-Medicare (including privately insured) patients. We calculated Medicare and total facility profit margins at the hospital level using hospital-reported costs and revenues from the MCRs, and reported the median margins for each hospital group.
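The margin calculation described above (revenue minus reported costs, divided by revenue, reported as a group median) can be sketched in a few lines of Python. The hospital figures below are hypothetical stand-ins, not actual MCR data.

```python
def profit_margin(revenue, costs):
    """Profit margin: (revenue - costs) / revenue."""
    return (revenue - costs) / revenue

def median(values):
    """Median of a list, matching the report's use of median margins."""
    s = sorted(values)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

# Hypothetical hospital-level revenue and cost figures (illustrative only).
hospitals = [
    {"medicare_rev": 40.0, "medicare_cost": 42.0, "total_rev": 100.0, "total_cost": 98.0},
    {"medicare_rev": 25.0, "medicare_cost": 26.0, "total_rev": 80.0, "total_cost": 81.0},
    {"medicare_rev": 55.0, "medicare_cost": 56.0, "total_rev": 120.0, "total_cost": 118.0},
]

# Medicare margins use Medicare revenue and costs only; total facility
# margins include revenue and costs for non-Medicare patients as well.
medicare_margins = [profit_margin(h["medicare_rev"], h["medicare_cost"]) for h in hospitals]
total_margins = [profit_margin(h["total_rev"], h["total_cost"]) for h in hospitals]
print(round(median(medicare_margins), 4))  # median Medicare margin
print(round(median(total_margins), 4))     # median total facility margin
```

A negative median Medicare margin with a positive total facility margin, as in this fabricated example, mirrors the pattern the report describes for some hospital groups.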
The Medicare margin reflects only payments and costs received for inpatient and outpatient services (about 90 percent of total Medicare revenue, according to CMS officials) and excludes payments and costs for other hospital-based services, such as those for skilled nursing and home health care. Third, we calculated hospitals' degree of Medicare dependence using three separate definitions, or measures, of dependence: (1) the amount of revenue the hospital received from Medicare as a share of all the revenue the hospital received for inpatient and outpatient services (total care revenue), (2) the share of inpatient days of care the hospital provides that are attributed to Medicare beneficiaries, and (3) the share of inpatient discharges that are attributed to Medicare beneficiaries. We also calculated these metrics separately for those MDHs that were eligible for the program based on data from the 1980s, known as legacy MDHs. To do so, we used data provided by Medicare Administrative Contractors, third-party entities that administer Medicare program payments and determine MDH eligibility.

Regression Analysis

To provide additional context on the relationship between MDH eligibility criteria and the various definitions of Medicare dependence, we developed an econometric model to analyze the association between bed size, rural status, and the three measures of Medicare dependence. We conducted the regression analysis using data from the CMS IPPS Impact Files and MCRs from fiscal years 2011 through 2017. We used the following measures as dependent variables: (1) the amount of revenue the hospital received from Medicare as a share of all the revenue the hospital received for inpatient and outpatient services (total care revenue), (2) the share of inpatient days of care the hospital provides that are attributed to Medicare beneficiaries, and (3) the share of inpatient discharges that are attributed to Medicare beneficiaries.

Dependent Variables

Y_it = log(R_it)
Where R_it represents the Medicare share of revenue, inpatient days, or discharges, and the i and t subscripts represent the hospital and year, respectively. This formulation has the advantage of restricting the models' predicted values to be positive and also allows for a relatively straightforward interpretation of the parameter estimates.

Explanatory Variables

We included hospital capacity or size as measured by the number of hospital beds. The number of beds is itself one of the criteria for MDH eligibility, and we were interested in whether hospitals of smaller sizes have more or less Medicare dependency. We included an indicator variable flagging whether the hospital is in a rural location. Rural location is one of the criteria for MDH program eligibility, and so this was a key variable in our model. We included the ownership category of the hospital, such as whether a hospital is for-profit or not-for-profit, or whether it is a public or private institution. This organizational category may determine institutional characteristics, which affect the likelihood that the hospital serves either more or fewer Medicare beneficiaries. We included the degree of proximity to other hospitals of substantive size; specifically, the distance from the closest hospital with at least 100 beds. In addition to our rural indicator variable, this controlled for whether more remote hospitals are more likely to be more dependent on Medicare. We included whether the state in which the hospital is located has expanded Medicaid to provide coverage to low-income, non-elderly adults, because it is possible that an increased number of Medicaid-eligible patients may affect the number of Medicare patients using hospital services. This variable may be associated with less Medicare dependence if Medicaid becomes a relatively larger payer source, or it may be associated with more Medicare dependence if Medicaid eligibility brings Medicare-eligible people into the health care system.
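The three dependence measures and the log transformation Y_it = log(R_it) can be sketched as follows; the record fields are hypothetical stand-ins for MCR-derived values.

```python
import math

def dependence_measures(h):
    """The three Medicare-dependence measures used as dependent variables."""
    return {
        "revenue_share": h["medicare_rev"] / h["total_care_rev"],
        "day_share": h["medicare_days"] / h["total_days"],
        "discharge_share": h["medicare_discharges"] / h["total_discharges"],
    }

# Hypothetical hospital-year record (field names are illustrative).
record = {
    "medicare_rev": 45.0, "total_care_rev": 100.0,
    "medicare_days": 6300, "total_days": 10000,
    "medicare_discharges": 620, "total_discharges": 1000,
}

shares = dependence_measures(record)
# Y_it = log(R_it): taking logs keeps predicted shares positive and gives
# parameter estimates an approximate percent-change interpretation.
log_dep = {k: math.log(v) for k, v in shares.items()}
print(shares["day_share"])
```

Note that a share of 0.63 for inpatient days would exceed the program's 60 percent eligibility threshold on that measure.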
We included the percent of population in the hospital's county over age 65, because areas with larger numbers of people over age 65 may be more likely to have a higher proportion of Medicare beneficiaries using health care services. We included the percent growth in county population, which allowed us to control for areas with declining populations that may be more likely to contain Medicare-dependent hospitals. Our model included time fixed effects (a dummy variable for each year in the analysis). The time fixed effects controlled for factors affecting hospitals nationally in a given year, in particular those factors for which data were unavailable. We included a set of state fixed effects (a dummy variable for each of the states in the analysis) to control for effects that are common to a specific area, but for which data may have been unavailable. We estimated specifications that included interactions between our bed size categories and rural location. This allowed us to determine whether bed size had the same impact on Medicare dependence for hospitals in rural locations compared with those in urban locations.

Model Specification

ln(R_it) = Σ_t f_t F_t + Σ_s s_s S_s + X_it β + C_ct γ + ε_it,  t = 1, …, T;  i = 1, …, H.

The dependent variable is the logarithm of our measure of Medicare dependence, R_it, where i denotes the ith hospital and t denotes the year. X_it is a 1 x k vector of hospital characteristics and possible interactions of these characteristics; it contains key explanatory variables such as ownership type, the number of beds, rural or urban location, whether a hospital receives MDH program monies, and other characteristics. β is a k x 1 vector of parameters associated with the hospital characteristics, X_it. F_t (t = 2, …, T) represents the set of time (year) dummy variables (upper case) and their associated (lower case) parameters, f_t. S_s (s = 2, …, S) represents the set of state dummy variables (upper case) and their associated (lower case) parameters, s_s. C_ct is a 1 x m vector of time-varying county-level characteristics, such as the percent of the population over 65 and the county population growth rate. γ is an m x 1 vector of parameters associated with the county-level characteristics, C_ct. Our model includes an interaction effect between the rural dummy variable and each of the characteristics except the geographic fixed effects. ε_it is a well-behaved Gaussian random error term that may have a heteroskedastic and/or clustered structure. We used Stata® to estimate the regression model, using fixed effects at the state level to account for unobserved heterogeneity and clustering at the county level.

Specification of the Bed Size Categories and Geographic Fixed Effects

Our focus was on the main criteria for MDH eligibility, namely hospital size as measured by number of beds and rural versus non-rural hospital location. We divided the hospitals into five bed number categories:

50 beds or fewer
Over 50 beds to 100 beds
Over 100 beds to 300 beds
Over 300 beds to 400 beds
Over 400 beds

This categorization strikes a balance between having too many categories, which would reduce the statistical power of our analysis, and having too few categories, which would fail to identify any non-linear pattern in the statistical relationship. These categories also contain the 100-bed criterion as one of the cut-off points. Our analysis controls for location and possible heterogeneity by using geographic fixed effects, but we also want to identify the impact of rural location. Selecting too detailed a level of geographic fixed effect, such as county or zip code, would limit our ability to identify the rural effect, so we used states. We recognized that state fixed effects may not identify more localized effects; this is a limitation of our model.
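One way to make the fixed-effects specification described above concrete is a small simulation: a sketch that builds the dummy-variable design matrix and estimates the model by ordinary least squares with NumPy on fabricated data. The report's authors used Stata with clustered standard errors, which this sketch omits; all variable names and coefficient values here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fabricated hospital-year records (illustrative only).
n = 200
beds = rng.integers(10, 450, n)
rural = rng.integers(0, 2, n)
year = rng.integers(2011, 2018, n)  # fiscal years 2011-2017

def bed_category(b):
    """Five bed-size categories using the report's cut-off points."""
    if b <= 50:
        return 0
    if b <= 100:
        return 1
    if b <= 300:
        return 2
    if b <= 400:
        return 3
    return 4

cat = np.array([bed_category(b) for b in beds])

# Design matrix: intercept, four bed-category dummies (base: over 400 beds),
# rural indicator, rural x bed-category interactions, and year dummies
# (base: 2011) standing in for the time fixed effects.
cols = [np.ones(n)]
cols += [(cat == c).astype(float) for c in range(4)]
cols.append(rural.astype(float))
cols += [((cat == c) & (rural == 1)).astype(float) for c in range(4)]
cols += [(year == y).astype(float) for y in range(2012, 2018)]
X = np.column_stack(cols)

# Simulated log Medicare share: smaller and rural hospitals more dependent.
log_share = -1.0 + 0.3 * (cat == 0) + 0.2 * rural + rng.normal(0, 0.1, n)

beta, _, _, _ = np.linalg.lstsq(X, log_share, rcond=None)
print(X.shape, beta.round(2)[:6])
```

The interaction columns let the bed-size effect differ between rural and urban hospitals, which is what the report's urban-versus-rural parameter comparisons test.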
Total Facility Profit Margins and Measures of Medicare Dependence

We also modeled the effects of hospital characteristics on total facility profit margins; that is, the difference between revenue and costs as a percent of revenue. For MDHs in our analysis, we excluded any MDH additional payment from the margin calculation in order to isolate and remove the program impact on financial status. We used the same explanatory factors in our econometric model of hospital margins as in our models of Medicare dependence, but we supplemented these factors with our three measures of Medicare dependence, estimating a separate model for each measure. This allowed us to assess how our different measures of Medicare dependence are associated with financial well-being. We assessed the reliability of the relevant fields in each of the data sets we used for these analyses by interviewing CMS officials, reviewing related documentation, and performing data checks. On the basis of these steps, we concluded that the data were sufficiently reliable for the purposes of our reporting objective.

Appendix II: Medicare Rural Hospital Payment Designation Eligibility and Payment

We identified five Medicare rural hospital payment designations and categorized them into two categories: (1) primary payment designations and (2) secondary payment designations. Primary designations include critical access hospitals (CAH), sole community hospitals (SCH), and Medicare-dependent hospitals (MDH). Each designation has distinct eligibility requirements and payment methodologies.

Appendix III: Full Regression Results

This appendix describes the full results for our modeling of Medicare dollars as a percentage of total revenue, the percent of inpatient days, the percent of inpatient discharges, and total hospital profit margins.

Results for Modeling Medicare Revenue as a Share of Total Revenue

We tested the hypothesis that key groups of parameters were significantly different between urban and rural locations.
We performed a k-parameter post-estimation Wald linear restriction, where β_k^u and β_k^r are matrices of the estimated urban and rural parameters, respectively, for each of the k categories (bed size, ownership type, etc.). We rejected the null hypothesis of parameter equality for bed size, ownership types, Medicaid expansion, and year dummies at the 5 percent level. For the miles-distance parameters, the hypothesis was rejected at marginally above the 5 percent level. Rural hospitals generally were associated with larger Medicare shares of revenue than urban hospitals. In every bed-size category, the parameters for rural hospitals were significantly greater than for urban hospitals. In addition, controlling for urban-rural location, and with the exception of the largest hospital category (over 400 beds), hospitals with fewer beds had a smaller Medicare share of revenue, as shown in figure 8. Hospitals in counties with higher percentages of people over age 65 were significantly associated with greater Medicare dependence.

Results for Modeling Medicare as a Share of Total Inpatient Days

Our Wald tests rejected the null hypothesis that the rural and urban parameters were equal in the bed-number categories and in the ownership categories. As with the Medicare share of total revenue, our model for the Medicare share of inpatient days showed, controlling for bed numbers, that rural hospitals generally had significantly greater Medicare dependence than urban hospitals. In most bed-size categories, the parameters for rural hospitals were greater than for urban hospitals. The pattern for bed size differed from that for Medicare dependence measured in revenue: for rural hospitals, dependence fell as bed numbers rose, but for urban hospitals we observed a hump-shaped distribution, with the middle bed-number categories having higher dependence than the smallest and largest categories, as shown in figure 9. Hospitals located in counties with higher percentages of people over age 65 had higher dependence.
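The Wald restriction test described above has a standard closed form, W = (Rβ - r)'[R V R']⁻¹(Rβ - r), which can be sketched directly. The parameter estimates and covariance below are hypothetical, and the chi-square critical value is hard-coded for 2 degrees of freedom.

```python
import numpy as np

def wald_stat(beta, cov, R, r):
    """Wald statistic for H0: R @ beta = r, chi-square with rank(R) df."""
    diff = R @ beta - r
    return float(diff @ np.linalg.solve(R @ cov @ R.T, diff))

# Hypothetical estimates for two bed-size categories, stacked as
# [urban_1, urban_2, rural_1, rural_2], with a diagonal covariance.
beta = np.array([0.10, 0.05, 0.30, 0.22])
cov = np.diag([0.002, 0.002, 0.003, 0.003])

# H0: urban and rural parameters are equal, category by category.
R = np.array([[1.0, 0.0, -1.0, 0.0],
              [0.0, 1.0, 0.0, -1.0]])
r = np.zeros(2)

W = wald_stat(beta, cov, R, r)
CHI2_5PCT_2DF = 5.991  # 5 percent critical value, 2 degrees of freedom
print(round(W, 2), W > CHI2_5PCT_2DF)  # reject parameter equality?
```

With these fabricated values the statistic comfortably exceeds the critical value, so equality of the urban and rural parameters would be rejected at the 5 percent level, paralleling the report's findings.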
Results for Modeling Medicare as a Share of Total Inpatient Discharges

Our model for the Medicare share of inpatient discharges showed that, controlling for bed numbers, rural hospitals generally had greater Medicare dependence than urban hospitals. In most bed-size categories, the parameters for rural hospitals were significantly greater than for urban hospitals. Our Wald tests rejected the null hypothesis that the rural and urban parameters were equal in the bed-size categories, Medicaid expansion variables, and ownership categories. The pattern for bed numbers was also different from that for Medicare dependence measured in revenue. Urban hospitals had a hump-shaped distribution, with the middle bed-number categories having higher dependence than the smallest and largest categories, whereas rural hospitals showed the largest effects at the smallest and larger intermediate categories, as shown in figure 10. Hospitals located in counties with higher percentages of people over age 65 had higher dependence.

Results for Modeling Hospital Profit Margins

The Medicare share of total revenue was significantly associated with smaller total facility profit margins and was the only statistically significant measure of Medicare dependence in the margin models. In general, hospitals with small numbers of beds (fewer than 100) were associated with smaller hospital margins relative to our reference category of large urban hospitals. However, there was no significant difference in any of the bed-number categories between urban and rural hospitals.

Appendix IV: GAO Contact and Staff Acknowledgments

GAO Contact

Jessica Farb, (202) 512-7114 or farbj@gao.gov

Staff Acknowledgments

In addition to the contact named above, Gregory Giusto (Assistant Director), Kate Nast Jones (Analyst-in-Charge), Britt Carlson, Rachel Gringlas, Michael Kendix, Vikki Porter, Caitlin Scoville, Jennifer Rudisill, and Jeffrey Tamburello made key contributions to this report.
Why GAO Did This Study

The MDH program was enacted in 1989, providing a financial benefit to some small, rural hospitals with high shares of Medicare patients. The original MDH program was established through statute for 3 years, and Congress has extended it on several occasions. The Bipartisan Budget Act of 2018 included a provision to extend the MDH program through 2022, as well as a provision for GAO to review the MDH program. This report describes, among other things, the changes that occurred in the number of MDHs and selected metrics over time. GAO analyzed data submitted to CMS by hospitals from fiscal years 2011 through 2017 (the most recent year for which consistent data were available at the time of GAO's analysis), among other CMS data. GAO also reviewed CMS regulations and other agency documents. The Department of Health and Human Services provided technical comments on a draft of this report, which GAO incorporated as appropriate.

What GAO Found

The Centers for Medicare & Medicaid Services (CMS) operates the Medicare-dependent Hospital (MDH) program, which assists hospitals that have 60 percent or more of inpatient days or discharges from Medicare patients, have 100 or fewer beds, and are generally located in a rural area. MDHs receive an additional payment if their historic costs in one of three base years, adjusted for inflation, among other things, are higher than what the hospital would have otherwise received under the inpatient prospective payment system (IPPS). In contrast, if the IPPS amount is higher than historic costs, the MDH receives no additional payment. In fiscal year 2018, CMS paid approximately $119 million in additional payments to MDHs. From fiscal years 2011 through 2017, the number of MDHs declined by around 28 percent. (See figure.) In addition, the number of MDHs that received an additional payment declined by around 15 percent.
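The payment determination described above can be sketched as a small function. The 75 percent share of the cost difference is an assumption based on the program's statutory payment formula, not a figure stated in this report, and the per-discharge rates below are hypothetical.

```python
def mdh_additional_payment(hospital_specific_rate, ipps_rate, share=0.75):
    """Additional MDH payment per discharge: a share of the amount by which
    the inflation-adjusted, historic-cost-based (hospital-specific) rate
    exceeds the IPPS rate; zero when the IPPS rate is higher.
    The 0.75 share is an assumption, not taken from the report."""
    return max(0.0, hospital_specific_rate - ipps_rate) * share

print(mdh_additional_payment(1200.0, 1000.0))  # hospital-specific rate higher
print(mdh_additional_payment(900.0, 1000.0))   # IPPS rate higher: no add-on
```

The second call illustrates why an MDH can hold the designation yet receive no additional payment in a given year, consistent with the decline in paid MDHs the report describes.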
Over this period of time, MDHs also experienced a 13 percent decrease in the share of their Medicare revenue that came from inpatient services. In addition, there was a decline in the share of total MDH revenue that was attributed to Medicare patients, and a decline in Medicare profit margins by about 6 percentage points.
Background Executive Branch Ethics Program The Ethics in Government Act of 1978 was enacted to preserve and promote the accountability and integrity of public officials, and the institutions of the federal government. The act requires political appointees and high-ranking government officials to complete a public financial disclosure report to help prevent and mitigate conflicts of interest for the purpose of increasing public confidence in the integrity of government. The act also established restrictions on postemployment activities of certain employees, and created OGE. The primary mission of the executive branch ethics program is to prevent conflicts of interest on the part of executive branch employees. The executive branch ethics program is a shared responsibility across government (see figure 1). OGE is the supervising ethics office for the executive branch and sets policy for the entire executive branch ethics program. Executive branch agency heads are responsible for leading their agency’s ethics program. Agency leaders are ultimately responsible for their organizations’ ethical culture. Their actions can demonstrate the level of commitment to ethics and set a powerful example for their employees. Designated Agency Ethics Officials (DAEO) and other agency ethics staff carry out ethics program responsibilities and coordinate with OGE. Inspectors General and the Department of Justice are authorized to investigate potential violations of criminal statutes pertaining to ethics. Executive branch employees are individually responsible for understanding and complying with the requirements of ethics laws and regulations, and are collectively responsible for making ethical conduct a standard of government service. Ethics Laws for Executive Branch Employees Executive branch employees are ultimately responsible for understanding and abiding by the various ethics laws. 
Generally, executive branch employees are prohibited from working on government matters that will affect their personal financial interest or the financial interests of a spouse or minor child; general partner; any organization in which they serve as an officer, director, or trustee; and any person or organization with whom they are negotiating or have an arrangement for future employment. Executive branch employees are also subject to criminal statutes prohibiting bribery and illegal gratuities; civil statutes requiring public financial disclosure; and employee standards of conduct, such as acting at all times in the public’s interest, serving as good stewards of public resources, and refraining from misusing their office for private gain. Agency Offices of Inspectors General (OIG) have a responsibility to investigate potential ethics violations. Among our three case study agencies, since January 2017, the HHS and Interior OIG have investigated potential travel and ethics issues involving political appointees while the SBA OIG did not initiate any similar investigations. The HHS OIG investigated the former Secretary of HHS’s use of chartered and commercial aircraft and found that it did not always comply with applicable federal travel regulations and HHS policies and procedures. In response to its OIG’s findings, HHS implemented additional steps for political appointees’ travel approval. Since January 2017, the Interior OIG has initiated five investigations into potential ethics violations involving the former Secretary of the Interior. As of March 1, 2019, three investigations related to the former Secretary were completed. As a result of the first completed investigation, the Interior OIG found that “incomplete information” about the former Secretary’s travel and use of chartered flights during 2017 was provided to the DAEO for review. The other two completed investigations found no evidence that the former Secretary violated ethics laws. 
Two investigations remained open as of March 2019. Interior’s DAEO described multiple strategies that were implemented to address issues observed within the ethics program after he was hired in April 2018, such as establishing weekly meetings with the former Secretary of the Interior to discuss ethics matters. Executive Branch Political Appointees Executive Branch political appointees are subject to more ethics restrictions than other executive branch employees. Appointees make or advocate policy for a presidential administration or support those positions. Appointees generally serve at the pleasure of the appointing authority and do not have the civil service protections afforded to other federal employees. There are four major categories of political appointees: Presidential Appointees with Senate confirmation (PAS); presidential appointees; noncareer employees in the Senior Executive Service (SES); and Schedule C employees. The most recent Plum Book, which was published on December 1, 2016, identified about 4,000 political appointee positions from these four major categories across the entire executive branch as of June 30, 2016 (see figure 2). The Plum Book identifies presidentially appointed positions within the federal government using data from the Office of Personnel Management. It is published every 4 years just after the presidential election, alternately, by the Senate Committee on Homeland Security and Governmental Affairs and the House Committee on Oversight and Government Reform. In addition to the ethics laws for executive branch employees, several recent presidential administrations have issued an order requiring political appointees in executive branch agencies to sign an ethics pledge. Some of the restrictions in the ethics pledge relate to areas already covered under existing ethics provisions, such as restrictions on accepting gifts and postemployment restrictions. 
Political appointees may receive an ethics pledge waiver from the President or his designee of certain or all ethics restrictions and authorizations enabling them to participate in otherwise prohibited activities. Political appointees that sign the pledge are contractually bound to adhere to its restrictions. If violated, the restrictions in the pledge could only be enforced through civil actions. Transparency and Ethics To foster transparency, federal law permits members of the public to access various government records. OGE provides online access to certified copies of public financial disclosure reports for PAS and certain other executive branch employees, as well as any applicable ethics agreements, certification of compliance for the ethics agreement, and certificates of divestiture for PAS. OGE also provides online access to copies of ethics pledge waivers for appointees at agencies. Members of the public can use this information to assist in holding government officials accountable for carrying out their duties free from conflicts of interest. No Single Source of Data on Political Appointees Exists That Is Comprehensive, Timely, and Publicly Available OPM, PPO, and two nongovernmental organizations provide some data on political appointees serving in the executive branch, but the data have limitations that impede their usefulness. The Senate Homeland Security and Governmental Affairs Committee and the House Oversight and Government Reform Committee publish OPM data on political appointees after each presidential election in the Plum Book. Data include name, title, type of appointment, salary, and location of employment. The data reflect the positions and the individuals who are filling the positions at a single point in time, about 5 months prior to the report’s publication. While the data are comprehensive and publicly available, they are not timely. 
Because the Plum Book is a snapshot in time, it does not reflect changes that occur in between publications, such as changes to who is holding a certain position, the position title, and vacancies. OPM also maintains more timely data on federal personnel; however, these data are not comprehensive or publicly accessible for identifying individuals serving in political appointee positions. OPM maintains data in the Executive and Schedule C System and the Enterprise Human Resources Integration (EHRI) system—the latter serves as OPM’s primary repository for human capital data. We found both systems have limitations, several of which were also identified by OPM officials. The Executive and Schedule C System is not comprehensive. It includes data on Schedule C and noncareer SES political appointees, but generally does not include data on presidential appointees or PAS. Publicly available EHRI data do not identify political appointees, either at the individual or group level. In addition, the EHRI source data is not publicly available. Political appointees can be identified from a combination of multiple variables, but these combinations are not consistent within or across appointee types. OPM provided some data on political appointees serving in the executive branch as of June 2018 from the Executive and Schedule C System. We reviewed the data and found errors and omissions. For example, we found instances in which individuals appeared to be holding political appointee positions that they departed several months prior and individuals known to currently hold political appointee positions were not identified. We also found that the data are incomplete. For example, the data did not include information on political appointee positions within the EOP. The EOP provides data to OPM only every 4 years for inclusion in the Plum Book. 
In addition to OPM, the White House maintains timely data on political appointees that are likely more comprehensive than OPM’s data but are not publicly available. Historically, PPO maintained data on political appointees as part of its responsibilities to recruit, vet, and place political appointees in positions across the government. PPO data on political appointees have not been made publicly available by the Trump, Obama, or Bush administrations. According to former officials from the Bush and Obama administrations, PPO maintained and used data on political appointees to carry out its responsibilities. For example, during the Obama administration, PPO established a database to help with filling political appointee positions and managing the overall appointee process. The database included preliminary information on candidates, such as names, application status, and where the applicant was in the vetting process. After a position was filled, the database tracked information such as the name of the appointee, position, federal department or agency, and start and departure dates. The primary limitation of the data was that departure dates of political appointees were unreliable. The former Obama administration official attributed this limitation to the lack of a process for agencies to formally notify PPO when an appointee left a position. To address this gap, PPO met regularly with staff in federal agencies to review data for accuracy. There are requests by members of the public to obtain data on political appointees serving in the executive branch. For example, between January 2017 and November 2018, OPM received approximately 32 requests through the Freedom of Information Act (FOIA) for data on political appointments across federal agencies. According to OPM officials, requests for data on political appointees are common and tend to increase at the start of a new administration. 
Former PPO officials also stated that when they served at PPO they received requests for data on political appointees serving in the executive branch. In the absence of comprehensive and timely data on political appointees serving in the executive branch, two nongovernmental organizations—the Partnership for Public Service and ProPublica—stated that they collect and report some data themselves. The Partnership for Public Service primarily tracks and reports data on PAS appointments, which are compiled from publicly available sources such as Congress.gov and agency websites. According to the Partnership for Public Service, accurately tracking departure dates is the most significant limitation. Some PAS departures, such as cabinet level officials, are typically reported in the media; however, lower-level PAS departures may not be reported. ProPublica collects and reports data on all types of political appointees serving in the executive branch. To obtain and compile its data, ProPublica makes FOIA requests to OPM and departments and agencies across the executive branch for political appointee staffing lists. ProPublica also makes requests for other data, such as financial disclosure forms through an administrative process required by the Ethics in Government Act of 1978. ProPublica said it has had more than 166,000 unique visitors to its database since it launched in March 2018. According to officials at ProPublica, one limitation is that they rely on agency responses to FOIA requests and therefore the data may not be comprehensive or timely. The public has an interest in knowing who is serving in the government and making policy decisions. The Office of Management and Budget (OMB) stated that transparency promotes accountability by providing the public with information about what the government is doing. 
In a 2009 memorandum, OMB directed agencies to make information available online and to use modern technology to disseminate useful information, rather than waiting for specific requests under FOIA. Although some data on political appointees are publicly available and FOIA requests can be used to obtain additional data, neither option results in comprehensive, timely, and publicly available data. Until the names of political appointees and their positions, position types, agency or department names, and start and end dates are publicly available at least quarterly, it will be difficult for the public to access comprehensive and reliable information. Making such information available would promote transparency. The public, including independent researchers, the media, and nongovernmental organizations, can use these data to perform independent analyses to identify gaps and challenges in filling political appointee positions or to identify potential conflicts of interest. Such analyses would also facilitate congressional oversight of executive branch appointees by providing a comprehensive and timely source of information on political appointees. As of March 2019, no agency in the federal government was required to publicly report comprehensive and timely data on political appointees serving in the executive branch. As the leader of federal human resources and personnel policy, OPM is positioned to collect, maintain, and make political appointee data publicly available on a frequent and recurring basis. However, OPM is limited in its ability to provide comprehensive data, in part because it does not regularly receive data from each agency that has political appointees, such as the EOP, which has approximately 225 political appointee positions based on the 2016 Plum Book. PPO is positioned to make more comprehensive data on political appointees publicly available. 
However, PPO is reestablished with each new presidential administration, which could be a barrier to establishing a consistent process for maintaining and publishing data on a recurring basis. Ultimately, it is a policy decision as to which agency is best positioned to report comprehensive and timely data on political appointees.

SBA and Interior Ethics Programs Did Not Meet All Documentation Requirements and Interior and HHS Had Workforce Continuity Challenges

All three agencies we reviewed—HHS, Interior, and SBA—generally used appropriate internal controls to ensure they met basic ethics program requirements, such as financial disclosure, though two of the agencies—Interior and SBA—could do more to strengthen their ethics programs. SBA and Interior had not fully documented some of their procedures for ethics training and the ethics pledge, respectively. In implementing their ethics programs, each agency addressed human capital issues and workforce continuity challenges; however, we found that vacancies and staff turnover had negative effects on Interior's ethics program. For the full results of our assessment of agencies' internal controls, see appendix II.

Reviewed Agencies Generally Met Basic Requirements for Financial Disclosure and Ethics Training, but Interior and SBA Did Not Document Some Procedures

Financial Disclosure

All three agencies we reviewed met the minimum statutory and regulatory requirement to have written procedures for financial disclosure. Federal law requires agencies to develop written procedures to collect, review, and evaluate financial disclosure reports (see sidebar). Each agency established financial disclosure processes in addition to what is required to reduce the risk of political appointees performing agency work while they may have conflicts of interest. 
For example, prior to an HHS political appointee's first day, the HHS process requires the appointee's financial disclosure report to be submitted and reviewed, any potential conflicts either resolved or identified, and an ethics agreement put in place with a timeline for conflict of interest resolution. This process aims to ensure that appointees are in compliance with ethics laws and regulations when they begin government service, rather than 30 days or more into their appointment.

Public financial disclosure filing requirements (sidebar):
- File a new entrant public financial disclosure report within 30 days of assuming a public filing position.
- If appointed to a position requiring Senate confirmation, file a nominee report within 5 days of transmittal of the President's nomination to the Senate for confirmation.
- File a termination report within 30 days of leaving office.

HHS and SBA have additional processes that include written procedures which reflect OGE's guidance for reviewing reports, such as following up with appointees when a financial disclosure report appears incomplete. OGE officials told us that engaging with an appointee during the review process allows agencies to confirm that the appointee understands and completes each required item. These interactions are also an opportunity to provide ethics counseling and establish a relationship with appointees who may be new to government service. Interior instituted a process in June 2018 that requires ethics officials to interview new appointees, review their financial disclosure report, and complete a financial disclosure checklist prior to certification. In reviewing a nongeneralizable sample of political appointees at each of the three agencies, we found that nearly all political appointees filed financial disclosure reports on time, with four exceptions, all non-PAS appointees from our Interior and SBA samples (see table 1). In one case, an Interior appointee who was required to file both a new entrant and termination report did not do so. 
According to Interior ethics officials, the office mistakenly determined that the appointee was excluded from public filing requirements. An individual who does not serve more than 60 days in a calendar year is not required to file a new entrant or a termination financial disclosure report; however, this political appointee served for 63 days. Three appointees—two from SBA and one from Interior—filed new entrant reports past their due dates. Late filing heightens the risk of appointees performing agency work while having conflicts of interest; however, none of the three appointees filed more than 30 days after the due date or the last day of an extension, and therefore were not subject to a late filing fee. For example, one Interior appointee received a 30-day extension to file a new entrant report, but filed it 4 days late. One SBA appointee received an extension exceeding the maximum time—90 days—that an agency may grant to any filer and consequently filed 2 days late. According to SBA ethics officials, the appointee was given a 92-day extension because the due date was miscalculated. A second SBA appointee filed a report 1 day past the due date. We did not find timeliness issues with any reports filed by appointees at HHS or filed by PAS appointees at Interior or SBA. Agency ethics officials generally reviewed appointees' financial disclosure reports in a timely manner. However, agencies followed up with non-PAS political appointees to varying degrees when their financial disclosure reports were potentially missing information. For example, SBA followed up with an appointee to confirm that the appointee had not inadvertently omitted information, such as a retirement plan, from the financial disclosure report because the appointee reported having previous long-term employment. HHS asked for and received clarifying information from an appointee who reported compensation for legal work but did not report individual clients. 
However, Interior ethics officials told us they did not follow up with two appointees in our sample who reported having no previous outside employment. Interior officials acknowledged that the reports were neither reviewed nor certified properly. According to Interior's new Designated Agency Ethics Official (DAEO), the June 2018 update to Interior's review process was implemented in response to deficiencies within its financial disclosure program.

Ethics Training

HHS and Interior had written procedures for initial ethics training as required, but SBA did not until February 2019. Federal regulation requires agencies, beginning in January 2017, to establish written procedures for providing initial ethics training (see sidebar).

Ethics training requirements (sidebar):
- Carry out an ethics education program to teach employees how to identify government ethics issues and obtain assistance in complying with ethics laws and regulations.
- Establish written procedures, which the DAEO must review each year, for providing initial ethics training.

HHS's and Interior's written procedures reflect the requirements of initial ethics training. For example, both agencies' procedures describe time frames for providing initial ethics training to political appointees no later than 3 months after their appointment date, as well as the method for doing so. Prior to February 2019, SBA did not have adequate written procedures in place to address the requirement that became effective in January 2017. SBA's written procedures now reflect the requirements of initial ethics training. Now that SBA officials have formally documented procedures, they can have reasonable assurance that the procedures are implemented as intended and that all required appointees are provided initial ethics training. Interior's and HHS's ethics programs track and maintain documentation of dates that political appointees received initial ethics training. During the time of our review, SBA did not adequately document political appointees' training dates. 
For example, ethics officials at Interior manually record training dates in a spreadsheet shared among Interior's ethics office, Office of Human Resources, and the White House Liaison. HHS requires appointees to confirm in writing that they completed initial ethics training. According to SBA ethics officials, the previous Alternate DAEO informally documented the dates that political appointees received training in her personal notes. Standards for internal control state that management should document significant events, and that documentation and records should be properly managed, maintained, and readily available for examination. Allowing one individual to control all key aspects of documenting an event puts the program at risk of errors. As of February 2019, SBA officials had developed a tracking sheet and a certificate for appointees to sign that indicates they completed initial ethics training. We plan to assess the implementation of the tracking sheet to confirm that SBA is using the tracker to hold appointees accountable by documenting their completion of initial ethics training requirements. By developing and implementing a mechanism, such as a tracking sheet, SBA can have reasonable assurance that political appointees meet the requirement to take initial ethics training. Our review of agency documentation, including SBA's informal documentation, found that political appointees completed required initial ethics training on time. Also, all three agencies provided the required additional live ethics briefing for PAS appointees together with initial ethics training. In addition to required training, all three agencies provided examples of other ways they have reminded appointees about their personal ethical responsibilities. For example: In advance of the holiday season, Interior provided supplementary training to political appointees on restrictions on accepting gifts. 
SBA used its agency-wide newsletter during the March Madness college basketball tournament to remind employees that they are prohibited from gambling in the workplace. HHS updated its ethics website to highlight Hatch Act rules in preparation for upcoming elections.

Ethics Pledge

Political appointees we reviewed at each agency had signed the required ethics pledge prescribed in Executive Order 13770, "Ethics Commitments by Executive Branch Appointees." However, nine Interior appointees and one HHS appointee did not sign their pledges on time. For example, the former Secretary of the Interior signed the pledge 19 days after his appointment. According to an Interior ethics official, the political appointees were directed to sign the pledge at the start of their appointments, but did not do so. Interior's new DAEO told us in October 2018 that Interior now requires all appointees to sign the pledge on their first day as a condition of continuing their employment; however, this procedure has not been formally documented. The non-PAS HHS appointee signed the pledge 9 days after his permanent appointment date. While the restrictions under the pledge are enforceable by civil action, there are no legal consequences, such as fines or penalties, for failing to sign the pledge on time.

Restrictions under the pledge include:
- for all appointees, a 2-year ban on involvement in "particular matters" involving former employers and clients;
- for former lobbyists, a 2-year ban on involvement in particular matters on which he or she lobbied; and
- for appointees who leave government service, a 5-year ban on lobbying agencies in which they served.

The President or his designee may grant a waiver of any of the restrictions contained in the executive order. As of March 2019, 32 executive branch appointees—not including White House appointees—received limited waivers of the pledge. 
Interior's then-acting solicitor and principal deputy solicitor signed a limited waiver of certain restrictions on lobbying activities for one appointee in our sample upon the appointee's departure from the agency in July 2017. However, according to Interior ethics officials, the official from the Solicitor's Office did not have authority to grant a waiver. Furthermore, Interior's ethics office was not included in the decision to grant the waiver, although Interior ethics officials ultimately notified the appointee when they became aware that the waiver was legally invalid. According to the DAEO, Interior is updating and documenting its ethics program processes and procedures, including new processes to sign ethics pledges and grant waivers; the DAEO did not provide a time frame for completion. We discuss Interior's efforts to document overall ethics program processes and procedures later in this report.

Reviewed Agencies' Ethics Programs Face Human Capital and Workforce Continuity Challenges

We found that all of the agencies we reviewed are addressing human capital issues and workforce continuity challenges to varying extents to achieve the goals and objectives of the ethics program. Standards for internal control state that management can help ensure operational success by having the right personnel for the job on board and maintaining a continuity of needed skills and abilities. Standards for internal control also state that management has a responsibility to obtain the workforce necessary to achieve organizational goals. HHS and Interior reported challenges to recruiting and retaining ethics staff with the necessary knowledge, skills, and abilities. All of the reviewed agencies reported varying levels of effort to address vacancies, skills gaps, and succession planning. HHS reported vacancies in its ethics program as well as challenges in recruiting and hiring; however, ethics program officials took actions to mitigate negative effects of the vacancies. 
As of October 1, 2018, HHS's Ethics Division had six vacancies out of 32 full-time positions (a vacancy rate of approximately 19 percent), including the Alternate DAEO position. HHS officials told us that a senior attorney was assigned to assume the duties of the Alternate DAEO position for six months in 2018. HHS ethics officials told us that the 2017 government-wide hiring freeze and workforce reduction plan affected their efforts to fill vacancies. However, ethics officials also told us that, as of October 1, 2018, four people had tentatively accepted offers to fill vacancies. HHS ethics officials told us that applicants for ethics attorney and specialist positions generally do not have a background in federal government ethics laws. As a result, Ethics Division officials said that it must invest time and resources to train new hires, who attend and review OGE trainings, participate in monthly interagency ethics meetings, and take HHS-specific ethics training. HHS ethics officials told us that new ethics program hires are assigned work from across the spectrum of ethics subject matter and trained one-on-one by senior staff. To address staffing shortages and prepare for potential attrition, the HHS ethics officials said they cross-train staff members and assign back-up team members to support HHS's operating and staff divisions. In addition, to track potential staff attrition or retirement, the ethics officials told us that the Ethics Division uses OPM's Federal Employee Viewpoint Survey data collected from HHS employees. However, the data only give the Ethics Division a general sense of the number of personnel who are planning to leave or retire. HHS Ethics Division officials said they use survey data because asking employees directly about their retirement plans is a sensitive topic, and because delays in planned retirements could affect recruiting and hiring replacements. 
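The vacancy rates cited in this section are simple ratios; a minimal sketch of the arithmetic, using the HHS Ethics Division figures reported above (the function name is our own illustration):

```python
# Vacancy rate = vacant positions / total full-time positions, as a percentage.
# Figures are those reported for HHS's Ethics Division as of October 1, 2018.
def vacancy_rate(vacant: int, total: int) -> float:
    """Return the share of full-time positions that are vacant, as a percentage."""
    return 100 * vacant / total

hhs_rate = vacancy_rate(vacant=6, total=32)
print(f"HHS Ethics Division vacancy rate: {hhs_rate:.2f}%")  # 18.75%, i.e., approximately 19 percent
```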
Interior's ethics office also reported vacancies and challenges in recruiting and hiring that contributed to the issues in the ethics program. As of November 2018, the Interior ethics office reported that out of 14 full-time positions, four were vacant (a 29 percent vacancy rate). All vacancies were ethics attorney positions. Interior reported an ongoing transformation of the department's ethics program, and officials said that the vacancies resulted from prioritizing the staffing at individual bureaus—such as the National Park Service and Fish and Wildlife Service—instead of the department-level ethics office, which is responsible for overseeing the bureaus' ethics programs and providing ethics services to employees at the Office of the Secretary, the Office of the Solicitor, and to all of Interior's political appointees. Interior's ethics officials said that the high vacancy rate in their ethics office affected its ability to properly collect and review financial disclosure forms—one of the main responsibilities of the federal ethics program. According to Interior's new DAEO, the office received an influx of financial disclosure reports during the presidential transition, but was unprepared to handle them. Furthermore, during 2017 one official was responsible for reviewing and certifying more than 300 public financial disclosure forms. The official was unable to balance proper and timely review of forms with other responsibilities that also included reviewing and certifying more than 800 confidential disclosure forms. In the Interior Inspector General's 2018 report on Interior's Major Management Challenges, ethics staffing was identified as a limitation, as staffing shortages could lead to delays in reviewing appointees' financial disclosure documentation. 
While the single Interior official was experienced in reviewing financial disclosure forms, Interior officials stated that there was not enough management support, training, or resources provided to properly review financial disclosure forms in 2017. According to the DAEO, a new supervisory ethics official for financial disclosure forms was hired in September 2018 as part of a proposed and ongoing organizational restructuring of Interior’s ethics office. In addition, Interior posted a job announcement for a second ethics attorney and now has two ethics specialists for financial disclosures. The DAEO stated that the ethics program also plans to increase the number of ethics officials that review and certify financial disclosures, and has established new program goals, such as improving ethics staff competencies for technical review of financial disclosure reports. Interior ethics officials also reported that the government-wide hiring freeze affected their ability to hire staff and address ethics program staff continuity. To build capacity within the ethics program and create a strong ethical culture at the agency and bureau levels, the Acting Deputy Secretary recommended in May 2017 that Interior develop a structure and staffing plan to have a full-time ethics official for every 500 employees by fiscal year 2020. On October 26, 2018, Interior officials stated that the ethics program was implementing the Acting Deputy Secretary’s staffing plan. However, OGE benchmarking guidance states that there is no “right” ratio for the number of ethics staff per employee, and that agencies should determine their ratio based on certain aspects of individual ethics programs, such as the scope of potential conflicts and the complexity of financial disclosure reports. Interior officials could not explain how the ratio was determined nor provide a strategy for achieving the goal or evaluating whether the ratio is meeting the needs of the department in the future. 
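The Acting Deputy Secretary's recommended ratio implies a straightforward calculation of required staff. A minimal sketch, assuming a hypothetical department headcount (the headcount below is not a figure from this report):

```python
import math

# One full-time ethics official per 500 employees, per the staffing rule
# Interior's Acting Deputy Secretary recommended for fiscal year 2020.
# The 70,000-employee headcount is a hypothetical illustration only.
def ethics_staff_needed(employees: int, employees_per_official: int = 500) -> int:
    """Full-time ethics officials needed under a 1-per-N staffing rule, rounded up."""
    return math.ceil(employees / employees_per_official)

print(ethics_staff_needed(70_000))  # 140 officials under the assumed headcount
```

As OGE's benchmarking guidance notes, there is no "right" ratio; the sketch only shows what a fixed ratio mechanically implies, not whether the ratio fits the program's workload.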
We have previously identified leading practices for human capital management; these practices include that agencies should determine the workforce skills and competencies needed to achieve current and future goals and objectives as well as identify and develop strategies to address gaps. In addition, agencies should continually assess and improve human capital planning and investment, and assess the impact on accomplishing the mission. Without having a better understanding of resource needs and documenting how to properly allocate and determine needed resources, Interior may not accurately estimate its needs and may not be best positioned to assess and strengthen its ethics workforce to achieve program goals and objectives. Moreover, staff turnover at the Interior ethics office also reduced institutional knowledge. For example, Interior’s ethics office could not produce the documentation of the policies and procedures that support its ethics program—an internal control requirement—such as documenting and providing written responses to ethics queries and the tools used to ensure short and long-term continuity of operations. However, the ethics office previously provided documented evidence of some of these policies and procedures in its response to OGE’s 2016 program review. Interior ethics officials stated that the OGE response was produced prior to the DAEO retiring and drafted by staff who no longer work at Interior. Standards for Internal Control also require agencies to document key processes and procedures to retain organizational knowledge and mitigate the risk of having that knowledge limited to a few personnel, as well as a means to communicate that knowledge as needed to external parties, such as external auditors. Both HHS and SBA provided documentation of ethics program policies and procedures while Interior did not provide documentation. 
Since there was no formal documentation of the ethics program's policies and procedures, Interior ethics officials stated that the ethics office will document them as part of its organizational restructuring plans. As of March 2019, Interior officials had not provided this documentation. For example, the ethics program is to ensure that all ethics-related advice, legal analyses, and conclusions are documented. However, without Interior completing the documentation of its policies and procedures and making them accessible to staff, institutional knowledge may be lost, and there is greater risk of not achieving the goals and objectives of the ethics program. SBA did not report challenges to recruiting or staff continuity in part because of the small size of the ethics program. SBA's ethics program is administered by three full-time officials, and during our review, the DAEO position was vacant for more than 3 months due to the retirement of the previous DAEO. However, the Alternate DAEO assumed the responsibility for managing the ethics program until a new DAEO was hired in August 2018. Ethics officials reported that the program could draw upon a pool of field attorneys previously designated to perform collateral ethics duties to temporarily address disruptions in staffing. To address continuity and succession, SBA ethics officials reported that a headquarters staff attorney was detailed to the ethics program to prepare for the possible retirement of its current Alternate DAEO.

Conclusions

Strong ethics programs are critical to ensuring public trust in government and the integrity of actions taken on the public's behalf. The executive branch ethics program is a shared responsibility across government. Political appointees, in particular agency heads, have a personal responsibility to exercise leadership in ethics. Some data are available on political appointees serving in the executive branch, but the data have limitations that impede their usefulness. 
To facilitate independent review and analysis related to political appointees, members of the public need access to information on who is serving in political appointee positions. Otherwise, they are limited in their ability to discern whether appointees are performing their duties free of conflict. Information on the political appointees serving in the executive branch at any point in time would also facilitate congressional oversight. Both OPM and PPO are positioned to report these data, but the benefits and drawbacks of each agency's current capacity will need to be considered. Ultimately, it is a policy decision as to which agency is best positioned to report comprehensive and timely data on political appointees. Further, a robust internal control system is critical for agency ethics programs to achieve their mission of preventing conflicts of interest on the part of their employees. Without effective internal controls, agency ethics programs cannot reasonably assure that they are mitigating the risk—or appearance—of public servants making biased decisions when carrying out the governmental responsibilities entrusted to them. During the course of our review, SBA took steps to establish written procedures for initial ethics training, but still needs to complete the implementation of procedures to track and verify that all political appointees meet ethics training requirements. As Interior continues to reorganize its ethics program, improved strategic workforce planning can help to accurately assess its needs, maintain continuity, and achieve program goals and objectives. Finally, ensuring that Interior's ethics processes and procedures are fully documented and easily accessible to staff can help mitigate the risk of reduced institutional knowledge, and can improve the ability to communicate with external parties. 
Matter for Congressional Consideration

Congress should consider legislation requiring comprehensive and timely information on political appointees serving in the executive branch to be collected and made publicly accessible. (Matter for Consideration 1)

Recommendations for Executive Action

We are making a total of three recommendations: one to SBA and two to Interior. The Administrator of the Small Business Administration should implement procedures to track and verify that required employees complete initial ethics training and that completion of this training is documented. (Recommendation 1) The Secretary of the Interior should direct the Departmental Ethics Office, in conjunction with the Chief Human Capital Officer, to develop, document, and implement a strategic workforce planning process that aligns with its ongoing departmental reorganization and that is tailored to the specific needs of the ethics program. As part of this process, Interior should monitor and assess the critical skills and competencies that its ethics program needs presently and is projected to need in the future. (Recommendation 2) The Secretary of the Interior should ensure that the department's ethics program policies and procedures are documented and easily accessible to program staff. (Recommendation 3)

Agency Comments and Our Evaluation

We provided a draft of this report for comment to the Department of Justice (DOJ), the White House Counsel's Office at the Executive Office of the President (EOP), the Department of Health and Human Services (HHS), the Department of the Interior (Interior), the Inspector General of the Department of the Interior (OIG), the Office of Government Ethics (OGE), the Office of Personnel Management (OPM), and the Small Business Administration (SBA). Interior, SBA, and OGE provided written comments, which are reproduced in appendixes IV, V, and VI, respectively. 
Interior officials concurred with our recommendations and described steps they are taking to begin addressing them. In our draft report, we made two recommendations to SBA. Our first recommendation was that SBA establish written procedures for initial ethics training as required. SBA officials did not agree or disagree with this recommendation, but during their review of the draft report, they provided documentation to show that they had established written procedures in line with our draft recommendation. As such, we revised our final report to include the actions taken by SBA in February 2019 and to delete our recommendation to establish written procedures for initial ethics training. With regard to our second draft recommendation to SBA, which remains in our final report as our first recommendation, SBA again did not agree or disagree with the recommendation. SBA officials provided documentation to support that they have taken initial steps to address our recommendation to implement procedures to track and verify completion of initial ethics training by political appointees. We plan to assess the implementation of these new procedures to confirm that, in operation, these procedures meet the intent of our recommendation. In addition to the written comments we received, SBA, HHS, OGE, and OPM provided technical comments, which we incorporated as appropriate. DOJ and the Interior OIG had no comments on the draft report. EOP did not respond to our request for comments. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 1 day from the report date. 
At that time, we will send copies to the appropriate congressional committees, the Acting Attorney General of DOJ, the White House Counsel, the Secretary of HHS, the Acting Secretary of the Interior, the Acting Inspector General at the Interior, the Director of OGE, the Acting Director of OPM, the SBA Administrator, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6806 or nguyentt@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VII.

Appendix I: Objectives, Scope, and Methodology

Our objectives were to evaluate the extent to which (1) existing data identify political appointees serving in the executive branch at any point in time, and (2) selected agencies use appropriate internal controls to reasonably ensure that their ethics programs are designed and implemented to meet statutory and regulatory requirements. To evaluate the extent to which data identifying political appointees serving in the executive branch at any point in time exist, we first synthesized requirements for reporting and developed criteria for comprehensive and timely reporting. We reviewed relevant laws and standards, and the United States Government Policy and Supporting Positions (Plum Book). We used the Office of Management and Budget's Open Government Directive (M-10-06) memorandum to develop criteria for transparency and public availability. We interviewed officials from the Office of Personnel Management (OPM) to understand the extent to which data they collect on current political appointees are comprehensive, timely, and reportable. OPM provided data on the political appointees serving in the federal government between January 2017 and June 2018. 
We also requested and obtained information from OPM on the volume of Freedom of Information Act requests for data on political appointees to assess demand for this type of data. To further evaluate public demand for political appointee data, we interviewed two nongovernmental organizations that track political appointees in the executive branch: ProPublica and the Partnership for Public Service. We gathered information on the public’s demand for information regarding political appointees, and the use and limitations of data. Both organizations provided statistics quantifying public demand, including the number of unique visitors to their websites and media impressions. Media impressions are any viewing of or interaction with a piece of content. We requested information or interviews with the Office of Presidential Personnel (PPO) and several White House Liaisons to understand how they track, maintain, and use data on political appointees serving in the executive branch. A senior leader at PPO and one White House Liaison acknowledged our request for an interview but deferred to the White House Counsel’s Office. In addition, an ethics officer indicated they would be unable to facilitate the exchange of information with the White House Liaison Office in their agency. The White House Counsel’s Office did not acknowledge requests for information or interviews. We interviewed former senior PPO officials from the two previous administrations to understand how they tracked, maintained, and used data on political appointees.

To identify internal control processes and determine the extent to which selected agencies use appropriate controls to ensure their ethics programs are designed and implemented to meet statutory and regulatory requirements, we first identified four case study agencies.
We selected a range of case study agencies based on the number and type of political appointees as well as the strength of their ethics programs, as determined by Office of Government Ethics (OGE) reviews. Using data from the 2016 Plum Book, we identified the total number of political appointee positions within each agency or department across the following four categories: presidential appointees with Senate confirmation (PAS), presidential appointees, noncareer members of the Senior Executive Service, and Schedule C appointees. We selected the Executive Office of the President (EOP) as a case study agency because EOP has the largest number of presidential appointees, and because OGE has not recently conducted a program review of EOP. According to OGE, ethics program reviews are a primary means of conducting systematic oversight of executive branch ethics programs. OGE completed a review of each agency between January 2014 and January 2018. Since the White House Counsel’s Office did not acknowledge receipt of our notification letter, we could not review EOP’s practices. To allow for more comparability among case studies, we excluded agencies and departments that did not have at least one PAS and one presidential appointee or noncareer member of the Senior Executive Service. From the remaining list of departments and agencies, we excluded those with nine or fewer total political appointee positions. We divided the remaining agencies into two groups: large agencies with more than 100 political appointees and small agencies with fewer than 100 political appointees. To ensure we observed a range of practices, we selected a large agency with no recommendations in its most recent OGE program review—the Department of Health and Human Services—and an agency with multiple unaddressed recommendations from its most recent OGE program review—the Department of the Interior.
To select our final case study, we used human resources data from OPM’s FedScope tool to determine the number of employees at each agency as of September 2017. We limited our selection to noncabinet agencies with between 2,000 and 10,000 employees. Out of the four remaining agencies, we randomly selected the Small Business Administration. To evaluate the extent to which the three reviewed agencies have and use appropriate internal controls to reasonably ensure that the objectives of their ethics programs are achieved, we reviewed selected principles from Standards for Internal Control in the Federal Government based on our review, analysis, and professional judgment as to which were relevant to effectively execute an executive branch ethics program. Selected internal control principles included:

3.01: Management should establish an organizational structure, assign responsibility, and delegate authority to achieve the entity’s objectives;
4.01: Management should demonstrate a commitment to recruit, develop, and retain competent individuals;
10.01: Management should design control activities to achieve objectives and respond to risks; and
14.01: Management should internally communicate the necessary quality information to achieve the entity’s objectives.

Reviewed agencies confirmed that these internal control principles were relevant to effectively execute their ethics program. We provided each agency with an identical set of questions based on the selected internal control principles and components. We used agency responses to questions and supporting documentation to evaluate whether agencies’ policies and processes to oversee ethics compliance for political appointees were consistent with the internal control principles. We used a nongeneralizable random sampling method to select political appointees whose documentation we would review for compliance with certain ethics requirements.
Agencies provided data detailing the political appointees serving within the agency at any point from January 20, 2017, through January 28, 2018. To assess the reliability of the data, we asked each agency’s officials about how the data were obtained, where the data came from, and what steps, if any, they each took to assure the accuracy and completeness of the data. Officials at each agency knowledgeable about their data provided responses. Based on those responses, we determined that the data were sufficiently reliable to indicate each agency’s political appointees, with start and end dates, for use in selecting a sample of appointees at each agency. Within each agency, we used random sampling to identify up to three PAS appointees and up to nine non-PAS appointees, including up to three appointees who separated from the agency during the time frame above. Each case study agency completed a data collection instrument that identified the applicable ethics requirements for each selected appointee. Each agency provided documentation to communicate how those requirements were met for each appointee. We reviewed the documentation to determine whether agency internal controls were sufficient to ensure that certain ethics program requirements were met. In addition, we conducted interviews with agency ethics officials, as needed, to discuss documentation provided. We also conducted several interviews with OGE officials to inform how we developed the data collection instrument and evaluate appointee compliance in alignment with OGE’s principles and practices. Our review of political appointees’ documentation was limited to testing the sufficiency of the agencies’ ethics program processes and procedures. We did not review financial disclosure forms with the intent of identifying conflicts of interest, nor did we perform a conflict of interest analysis.
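The sampling approach described above can be sketched in a few lines. This is an illustrative sketch only: the appointee record fields (`pas`, `separated`) and the treatment of separated appointees as a subset of the non-PAS group are this sketch's assumptions, not GAO's actual data structures or selection procedure.

```python
import random

def sample_appointees(appointees, seed=None):
    """Pick up to 3 PAS appointees and up to 9 non-PAS appointees,
    including up to 3 who separated from the agency (illustrative only)."""
    rng = random.Random(seed)
    pas = [a for a in appointees if a["pas"]]
    non_pas = [a for a in appointees if not a["pas"]]
    separated = [a for a in non_pas if a["separated"]]
    current = [a for a in non_pas if not a["separated"]]
    # Up to 3 separated appointees, then fill the non-PAS group to 9.
    picked_sep = rng.sample(separated, min(3, len(separated)))
    picked_cur = rng.sample(current, min(9 - len(picked_sep), len(current)))
    return rng.sample(pas, min(3, len(pas))) + picked_sep + picked_cur
```

Because the sample is nongeneralizable, a sketch like this only documents the selection mechanics; it supports no inference about the wider appointee population.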
Also, because we used a nongeneralizable sample of political appointees, results from the sample cannot be used to make inferences about all the agencies’ political appointees. We conducted this performance audit from October 2017 to February 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Use of Internal Controls in Reviewed Agencies’ Ethics Programs

Has the agency established an organizational structure for its ethics program?

Management should demonstrate a commitment to recruit, develop, and retain competent individuals.
Are agency ethics program staff evaluated?
Are agency ethics program staff’s expectations developed and documented?
Does the agency commit resources to the ethics program?
Does the agency recruit, develop, and train ethics program staff?
Does the agency prepare alternate or contingency plans for ethics program staff attrition, succession, or other potential disruptions to staff levels?

Management should design control activities to achieve objectives and respond to risks.
Does the agency have goals and objectives for the ethics program?
Are these goals and objectives documented?
Does the agency have processes and procedures in place to support the goals and objectives of the ethics program?
Does the agency have processes and procedures in place to ensure political appointees who are not Presidential Appointees with Senate Confirmation do not undertake an activity that represents an actual or apparent conflict of interest?
Does the agency have processes and procedures in place to ensure that political appointees receive required training?

Management should internally communicate the necessary quality information to achieve the entity’s objectives.
Does the agency communicate ethics program related information to political appointees?

Signed the Executive Order 13770, “Ethics Pledge”
Presidential Appointee with Senate confirmation (PAS) nominee financial disclosure report filed no later than 5 days after nomination by the President
PAS nominee signed an Ethics Agreement to address identified conflicts of interest
Non-PAS new entrant financial disclosure report filed within 30 days of assuming the duties of the position, or within extension of time for filing
Received live ethics briefing within 15 days of appointment (PAS only)
Termination financial disclosure report filed within 30 days of leaving government (if appointee departed from the agency)

Because we used a nongeneralizable sample of political appointees, results from the sample cannot be used to make inferences about all of the agencies’ political appointees.

Appendix IV: Comments from the Department of the Interior

Appendix V: Comments from the U.S. Small Business Administration

Appendix VI: Comments from the United States Office of Government Ethics

Appendix VII: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the above contact, Melissa Wolf and Carol Henn (Assistant Directors), Erinn L. Sauer (Analyst-in-Charge), Caitlin Cusati, Ann Czapiewski, Robert Gebhart, Travis Hill, James Lager, Brittaini Maul, Steven Putansu, Mary Raneses, Andrew J. Stephens, and Mackenzie D. Verniero made major contributions to this report.
Why GAO Did This Study

Federal agencies' ethics programs seek to prevent conflicts of interest and safeguard the integrity of governmental decision-making. GAO was asked to review compliance with ethics requirements for political appointees in the executive branch. This report examines the extent to which (1) existing data identify political appointees serving in the executive branch, and (2) selected agencies use internal controls to reasonably ensure that their ethics programs are designed and implemented to meet statutory and regulatory requirements. GAO reviewed available data on political appointees. GAO also reviewed three case study agencies selected to provide a range in agency size and number of political appointees. GAO reviewed ethics documentation for a nongeneralizable sample of political appointees at the three agencies at any point between January 2017 and January 2018 and interviewed officials from the agencies and two non-governmental organizations.

What GAO Found

There is no single source of data on political appointees serving in the executive branch that is publicly available, comprehensive, and timely. Political appointees make or advocate policy for a presidential administration or support those positions. The Office of Personnel Management (OPM) and two nongovernmental organizations collect and, in some cases, report data on political appointees, but the data are incomplete. For example, the data did not include information on political appointee positions within the Executive Office of the President. The White House Office of Presidential Personnel (PPO) maintains data but does not make them publicly available. The public has an interest in knowing which political appointees are serving, and this information would facilitate congressional oversight and help hold leaders accountable. As of March 2019, no agency in the federal government is required to publicly report comprehensive and timely data on political appointees serving in the executive branch.
OPM is positioned to maintain and make political appointee data publicly available on a timely basis but is limited in its ability to provide comprehensive data. PPO has more comprehensive data but may not be positioned to publish data on a recurring basis. Ultimately, it is a policy decision as to which agency is best positioned to report comprehensive and timely data on political appointees. All three agencies GAO reviewed generally used appropriate internal controls to ensure they met basic ethics program requirements, though two of the agencies could take actions to strengthen their ethics programs. The Department of Health and Human Services (HHS), the Department of the Interior (Interior), and the Small Business Administration (SBA) all have procedures for administering their financial disclosure systems. HHS and Interior had procedures for providing initial ethics training as required beginning in January 2017. Prior to February 2019, SBA did not have written procedures for initial ethics training and did not adequately document political appointees' training dates. SBA's written procedures now reflect the requirements of initial ethics training, and SBA developed a tracking sheet to indicate that appointees completed training. GAO will assess the implementation of the tracking sheet to confirm the process is sufficient for documenting appointees' completion of initial ethics training. Interior's ethics program has human capital and workforce continuity challenges. Interior reported that four out of 14 full-time positions were vacant. Interior officials attributed the vacancies to a recent transformation of the ethics program and to prioritizing staffing at individual bureaus such as the National Park Service. However, vacancies affected the ethics program's ability to properly document policies and procedures as well as file and review financial disclosure forms. According to Interior officials, steps are being taken to address vacancies and document policies and procedures.
However, GAO found that a more strategic and documented approach would enable Interior to better manage human capital, fill key positions, and maintain institutional knowledge.

What GAO Recommends

Congress should consider legislation requiring the publication of data on political appointees serving in the executive branch. GAO also recommends three actions: SBA should document that training was completed; Interior should conduct more strategic planning for its ethics workforce and document ethics program policies and procedures. SBA neither agreed nor disagreed with GAO's recommendation, but provided documentation that partially addresses the recommendation. Interior agreed with GAO's recommendations.
Background

Overview of the National Flood Insurance Program

In 1968, Congress created NFIP, with the passage of the National Flood Insurance Act, to help reduce escalating costs of providing federal flood assistance to repair damaged homes and businesses. According to FEMA, NFIP was designed to address the policy objectives of identifying flood hazards, offering affordable insurance premiums to encourage program participation, and promoting community-based floodplain management. To meet these policy objectives, NFIP has four key elements: identifying and mapping flood hazards, floodplain management, flood insurance, and incentivizing flood-risk reduction through grants and premium discounts. NFIP enables property owners in participating communities to purchase flood insurance and, in exchange, the community agrees to adopt and enforce NFIP minimum floodplain management regulations and applicable building construction standards to help reduce future flood losses. A participating community’s floodplain management regulations must meet or exceed NFIP’s minimum regulatory requirements. Insurance offered through NFIP includes different coverage levels and premium rates, which are determined by factors that include property characteristics, location, and statutory provisions. NFIP coverage limits vary by program (Regular or Emergency) and building occupancy (for example, residential or nonresidential). In NFIP’s Regular Program, the maximum coverage limit for one-to-four family residential policies is $250,000 for buildings and $100,000 for contents. For nonresidential or multifamily policies, the maximum coverage limit is $500,000 per building and $500,000 for the building owner’s contents. Separate coverage is available for contents owned by tenants.
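The Regular Program limits above can be captured in a small lookup. This is a minimal sketch assuming only the figures stated in this report; the occupancy labels and the `payable` helper are illustrative, not FEMA's actual rating or claims logic.

```python
# NFIP Regular Program maximum coverage limits, in dollars, as stated above.
# Occupancy keys are this sketch's own labels (assumptions).
REGULAR_PROGRAM_LIMITS = {
    "residential_1_4_family": (250_000, 100_000),        # (building, contents)
    "nonresidential_or_multifamily": (500_000, 500_000),  # (building, contents)
}

def payable(occupancy, building_loss, contents_loss):
    """Cap a hypothetical claim at the Regular Program coverage limits."""
    building_limit, contents_limit = REGULAR_PROGRAM_LIMITS[occupancy]
    return min(building_loss, building_limit) + min(contents_loss, contents_limit)
```

For example, a $300,000 building loss on a one-to-four family residential policy would be capped at the $250,000 building limit, with contents losses capped separately.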
NFIP also offers Increased Cost of Compliance coverage for most policies, which provides up to $30,000 to help cover the cost of mitigation measures following a flood loss when a property is declared to be substantially or repetitively damaged.

Flood Hazard Mapping

Through NFIP, FEMA maps flood hazard zones on a Flood Insurance Rate Map, which participating NFIP communities must adopt. According to FEMA, floodplain management standards are designed to prevent new development from increasing the flood threat and to protect new and existing buildings from anticipated flooding. FEMA has a division responsible for flood mapping activities and policy and guidance, but stakeholders from various levels of government and the private sector participate in the mapping process, as appropriate. A community’s Flood Insurance Rate Map serves several purposes. It provides the basis for setting insurance premium rates and identifying properties whose owners are required to purchase flood insurance. Since the 1970s, homeowners with federally backed mortgages or mortgages held by federally regulated lenders on property in a special flood hazard area have been required to purchase flood insurance. Others may purchase flood insurance voluntarily if they live in a participating community. The maps also provide the basis for establishing minimum floodplain management standards that communities must adopt and enforce as part of their NFIP participation. As of May 2020, 22,487 communities across the United States and its territories voluntarily participated in NFIP by adopting and agreeing to enforce flood-related building codes and floodplain management regulations.

Community-Level Flood Hazard Mitigation

FEMA supports a variety of community-level flood mitigation activities that are designed to reduce flood risk (and thus NFIP’s financial exposure).
These activities, which are implemented at the state and local levels, include hazard mitigation planning; adoption and enforcement of floodplain management regulations and building codes; and use of hazard control structures such as levees, dams, and floodwalls or natural protective features such as wetlands and dunes. FEMA provides community-level mitigation funding through its HMA grant programs. In addition, FEMA’s Community Rating System is a voluntary incentive program that recognizes and encourages community floodplain management activities that exceed the minimum NFIP requirements. Flood insurance premium rates are discounted to reflect the reduced flood risk resulting from community actions that meet the three goals of reducing flood damage to insurable property, strengthening and supporting the insurance aspects of NFIP, and encouraging a comprehensive approach to floodplain management.

Property-Level Flood Hazard Mitigation

At the individual property level, mitigation options include property acquisition—or “buyouts”—either to demolish a building and maintain the land as green space or to relocate the building to an area with low flood risk, as well as elevation and floodproofing. Acquisition and demolition (acquisition) is one of the primary methods by which states or localities use FEMA funding to mitigate flood risk. Through this process, a local or state government purchases land and structures that flooded or are at risk from future floods from willing sellers and demolishes the structures. The community restricts future development on the land, which is maintained as open space in perpetuity to restore and conserve the natural floodplain functions. According to FEMA officials, an advantage of property acquisition is that it offers a permanent solution to flood risks, whereas other mitigation methods make properties safer from floods but not immune. Property acquisition and demolition is a voluntary process, and property owners are paid fair market value for their land and structures.
Acquisition is typically done on a community-wide scale, purchasing several or all properties in an at-risk neighborhood. Acquisition projects typically require building consensus from property owners and sustained communication and collaboration between residents and the government executing the project.

Acquisition and relocation (relocation) refers to purchasing a structure and moving it to another location instead of demolishing it. Through this process, state or local governments use FEMA funding to help purchase land from willing sellers and assist the property owners with relocating the structure. The structure must be sound and feasible to move outside of flood-prone areas. Relocation is a voluntary process, and property owners are paid fair market value for their land.

Elevation involves raising a structure so that the lowest occupied floor is at or above the area’s base flood elevation. Structure elevation may be achieved through a variety of methods, including elevating on continuous foundation walls; elevating on open foundations, such as piles, piers, or columns; and elevating on fill. Structures proposed for elevation must be structurally sound and capable of being elevated safely. Further, elevation projects must be designed and adequately anchored to prevent flotation, collapse, and lateral movement of the structure from flooding, waves, and wind.

Floodproofing falls into two categories: dry floodproofing and wet floodproofing. Dry floodproofing involves sealing a structure to prevent floodwater from entering. Examples of dry floodproofing measures include using waterproof coatings or coverings to make walls impermeable to water, installing waterproof shields, and installing devices that prevent sewer and drain backup.
Dry floodproofing is appropriate only where floodwaters do not exceed three feet, the speed of flood waters is low, and the duration of flooding is relatively short because walls and floors may collapse from the pressure of higher water levels.

Wet floodproofing involves changing a structure to allow floodwaters to enter and exit with minimal damage. Wet floodproofing is used in parts of a structure that are not used as living space, such as a crawlspace, basement, or garage. Examples of wet floodproofing measures include installing flood openings in the foundation and enclosure walls below the base flood elevation, using flood-resistant building materials and furnishings located below the base flood elevation, and either elevating or floodproofing all utility systems and associated equipment to protect them from damage.

FEMA Mitigation Grant Programs

FEMA administers three HMA grant programs that can be used to fund flood mitigation projects: the Hazard Mitigation Grant Program (HMGP), Pre-Disaster Mitigation (PDM), and Flood Mitigation Assistance (FMA). Eligible HMA applicants include states, territories, and federally recognized tribal governments. Local communities cannot apply directly to FEMA for HMA funding but instead must collaborate as sub-applicants with their state, territory, or tribal government and then receive funding through that entity. Certain nonprofit organizations can act as sub-applicants but only under HMGP. Generally, individuals may not apply for HMA funding, but they may benefit from a community application. Applicants to all three programs must have FEMA-approved hazard mitigation plans. FEMA evaluates HMA applications based on technical feasibility and cost-effectiveness, among other factors. In fiscal year 2019, HMA awarded $859 million in funding. Eligible activities differ for the three programs but must be consistent with FEMA’s National Mitigation Framework.
The Hazard Mitigation Grant Program helps communities implement hazard mitigation measures following a presidential major disaster declaration to improve community resilience to future disasters. HMGP provides funding to protect public or private property through various mitigation measures based on state or tribal priorities. Mitigation project examples include acquisition, relocation, retrofitting structures to minimize damages from various natural hazards, and elevating flood-prone structures. HMGP recipients (states, territories, and federally recognized tribal governments) are primarily responsible for prioritizing, selecting, and administering state and local hazard mitigation projects. According to FEMA guidance, although individuals may not apply directly to the state for assistance, local governments engage interested property owners during the application process. A formula based on the size of the presidential disaster declaration determines the amount of money available to HMGP.

Pre-Disaster Mitigation seeks to reduce overall risk to the population and structures from future natural hazard events, while also reducing reliance on federal funding in future disasters. PDM grants fund mitigation plans and eligible projects that reduce or eliminate long-term risk to people and property from natural disasters, such as property acquisition, property elevation, earthquake hardening, and construction of tornado and high-wind safe rooms. Generally, local governments (i.e., sub-applicants) submit mitigation planning and project applications to their state, territory, or federally recognized tribal government (i.e., applicants) for review and prioritization. The state, territory, or federally recognized tribal government then submits one PDM grant application to FEMA for consideration. Annual Congressional appropriations fund these grants, and FEMA awards them on a nationally competitive basis.
In fiscal year 2019, Congress appropriated $250 million to PDM, which was the program’s final year of funding. In 2018, Congress passed the Disaster Recovery Reform Act, which included amendments to PDM that FEMA calls the Building Resilient Infrastructure and Communities program. According to FEMA officials, this program is replacing PDM in fiscal year 2020 and will be funded through the Disaster Relief Fund as a 6 percent set-aside from the estimated total amount of grants for each major disaster declaration. FEMA has solicited public input on the program and said it expects to release a notice of funding opportunity in summer 2020.

Flood Mitigation Assistance is designed to reduce or eliminate flood insurance claims by funding cost-effective flood mitigation projects that reduce or eliminate long-term risk of flood damage to structures insured under NFIP. Typical projects may include acquisition of RL properties, elevation of buildings, and neighborhood-scale flood defense investment. Generally, local communities will sponsor applications on behalf of homeowners and then submit the applications to their state. A state or federally recognized tribal government must submit the grant applications to FEMA. Annual Congressional appropriations fund FMA grants, and FEMA awards them on a nationally competitive basis. FMA appropriations have remained relatively stable at about $175 million for fiscal years 2016 through 2019.

Repetitive Loss Properties

RL properties present a financial challenge for NFIP. FEMA has three definitions for such properties that vary slightly to meet the specific needs of different programs:

NFIP Repetitive Loss refers to an NFIP-insured structure that has incurred flood-related damage on two occasions during a 10-year period, each resulting in at least a $1,000 claim payment.
FEMA uses the NFIP RL definition for insurance purposes related to the Community Rating System, for local hazard mitigation plans, and for eligibility determinations for preferred risk policies and individual assistance.

FMA Repetitive Loss refers to an NFIP-insured structure that (a) has incurred flood-related damage on two occasions in which the cost of repair, on average, equaled or exceeded 25 percent of the value of the structure at the time of each such flood event; and (b) at the time of the second incidence of flood-related damage, the flood insurance policy contained Increased Cost of Compliance coverage. FEMA uses this definition for FMA purposes, as these properties are eligible for the largest federal cost share for mitigation, up to 90 percent. This is also the same definition NFIP uses to approve an Increased Cost of Compliance payment.

Severe Repetitive Loss refers to an NFIP-insured structure that has incurred flood-related damage for which (a) four or more separate claims have been paid that exceeded $5,000 each and cumulatively exceeded $20,000; or (b) at least two separate claim payments have been made under such coverage, with the cumulative amount of such claims exceeding the fair market value of the insured structure. FEMA has two severe RL definitions for mitigation and insurance, which are similar except that the insurance definition includes only residential structures, while the mitigation definition includes all structures. FEMA uses the severe RL definition for grant eligibility and cost share, the Community Rating System, and insurance rate setting.

FEMA Grant Programs Are Key Funding Sources for Property Acquisition

FEMA Funds Acquisitions through Three Grant Programs That Have Varying Characteristics and Funding Levels

HMGP is the largest of FEMA’s three HMA programs and, unlike the others, it is based on the amount of disaster assistance a state or territory receives following a presidential disaster declaration (see table 1).
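Before comparing the three programs further, the repetitive-loss tests defined in the preceding section can be expressed as simple predicates. This is an illustrative sketch: the claim-record shapes and the approximate ten-year window are assumptions, it covers only the NFIP RL and severe RL definitions, and it is not FEMA's actual determination process.

```python
from datetime import date

def is_nfip_repetitive_loss(claims):
    """NFIP RL: two flood claims of at least $1,000 each within a 10-year period.
    `claims` is a list of (date, claim_amount) pairs (a sketch assumption)."""
    qualifying = sorted(day for day, amount in claims if amount >= 1_000)
    # If any two qualifying claims fall within 10 years, some adjacent pair does.
    return any((later - earlier).days <= 3_652  # ~10 years, allowing leap days
               for earlier, later in zip(qualifying, qualifying[1:]))

def is_severe_repetitive_loss(claim_amounts, fair_market_value):
    """Severe RL: (a) four or more claims over $5,000 each that cumulatively
    exceed $20,000, or (b) at least two claims whose cumulative amount exceeds
    the fair market value of the insured structure."""
    big = [c for c in claim_amounts if c > 5_000]
    cond_a = len(big) >= 4 and sum(big) > 20_000
    cond_b = len(claim_amounts) >= 2 and sum(claim_amounts) > fair_market_value
    return cond_a or cond_b
```

For example, four $6,000 claims satisfy the severe RL test under condition (a), while two claims totaling more than a structure's fair market value satisfy it under condition (b).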
PDM and FMA are smaller grant programs that receive annual appropriations and are not directly tied to an immediately preceding disaster. Because these programs do not require an immediate disaster declaration, FEMA considers them pre-disaster programs, as their intent is to mitigate potential damage before disasters occur. HMGP and PDM can be used for projects that mitigate the risk of many hazards, including flood, wind, fire, earthquake, and drought, but FMA can only be used to mitigate the risk of flood (see table 1). Furthermore, FMA funds can only be used to mitigate properties that are insured by NFIP, but HMGP and PDM funds can be used to mitigate properties without NFIP coverage. Properties mitigated in a special flood hazard area, where the structure remains on the parcel, must maintain a flood insurance policy after project completion. HMA grants fund a variety of methods to mitigate the flood risk of properties, including acquisition, elevation, relocation, and floodproofing. In most cases, HMA grants cover up to 75 percent of the project cost, and the grantee generally must contribute the remainder using nonfederal funds (although there are some exceptions, discussed below). However, PDM will cover up to 90 percent of project costs for communities that meet FEMA’s definition of small and impoverished. Moreover, FMA will cover up to 90 percent for projects that mitigate RL properties and up to 100 percent for severe RL properties. Funding levels for the three programs have varied over time because they have depended on disaster declarations and annual appropriations (see fig. 1). HMGP is the largest of the three programs—adjusted for inflation, annual HMGP grants have reached $2.9 billion, while PDM and FMA have never exceeded $300 million. 
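The maximum federal cost-share rules just described can be summarized in a small helper. The program labels and flags are this sketch's own vocabulary, and actual awards depend on FEMA's program guidance; this is a hedged illustration of the percentages stated above, not an eligibility tool.

```python
def federal_cost_share(program,
                       severe_repetitive_loss=False,
                       repetitive_loss=False,
                       small_impoverished=False):
    """Return the maximum federal share of project cost as a fraction,
    per the rules described above (illustrative only)."""
    if program == "FMA":
        if severe_repetitive_loss:
            return 1.00   # up to 100% for severe repetitive loss properties
        if repetitive_loss:
            return 0.90   # up to 90% for repetitive loss properties
    if program == "PDM" and small_impoverished:
        return 0.90       # up to 90% for small and impoverished communities
    return 0.75           # general rule: up to 75% federal share
```

Under these rules, a $1 million acquisition funded at the general 75 percent share would leave a $250,000 nonfederal match for the grantee to cover.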
According to FEMA officials, the estimated annual funding for the Building Resilient Infrastructure and Communities program, the successor to PDM, will average $300 million to $500 million, as it will be funded by a 6 percent set aside of annual estimated disaster grant expenditures. HMA funding also varies by state. Louisiana has obligated the most funding. After adjusting for inflation, it has obligated more than $3.1 billion from all three programs since HMGP was created in 1989, followed by California ($2.0 billion), Texas ($1.8 billion), New York ($1.6 billion), and Florida ($1.5 billion), while the bottom 18 states and territories each obligated less than $50 million (see fig. 2). Because HMGP is the largest program and is tied to presidential declarations, these totals reflect, in part, the extent to which states and territories have experienced natural disasters in this time period.

States and Localities Can Use Other Federal Programs to Fund Cost Share Requirements for Acquisitions

Typically, recipients of federal mitigation grants must use nonfederal funds to meet cost share requirements because federal law prohibits the use of more than one source of federal disaster recovery funding for the same purpose. However, according to FEMA, some federal programs are exempt from these requirements due to authorizing statutes and therefore may be used in concert with HMA funds.

Department of Housing and Urban Development's Community Development Block Grant (CDBG) program. The Department of Housing and Urban Development awards CDBG funds to state and local governments to support a variety of community and economic development needs. According to FEMA's HMA Cost Sharing Guide, HMA applicants may use several categories of CDBG funds as a source of project cost share, as long as the project meets Department of Housing and Urban Development rules. CDBG Disaster Recovery funds are the most frequently used form of HMGP cost share from a federal agency, according to FEMA.
FEMA Increased Cost of Compliance coverage. NFIP offers Increased Cost of Compliance coverage, which provides up to $30,000 for policyholders to fund mitigation efforts on their property if they experience substantial damage or if their structure is an RL property. Between 1997 and 2014, the vast majority (99 percent) of Increased Cost of Compliance claims met the substantially damaged property definition, according to a 2017 report from the University of Pennsylvania. Unlike CDBG, which is awarded to states and local governments, Increased Cost of Compliance is awarded directly to individuals. According to FEMA, it is eligible as an HMA nonfederal cost share because it is considered a direct contract between the insurer and policyholder. FEMA allows recipients to assign their funds to the community as part of a collective mitigation project, and the community is then obligated to provide HMA funding to any property owner who contributed Increased Cost of Compliance dollars toward the nonfederal cost share. As of September 2019, FEMA had closed more than 38,000 Increased Cost of Compliance claims with dates of loss since 1997, totaling more than $877 million.

Small Business Administration disaster loans. Small Business Administration disaster loans provide up to $200,000 for repairing or replacing a primary residence and $40,000 for repairing or replacing personal items that have been affected by a disaster. The interest rate cannot exceed 4 percent for applicants unable to access credit elsewhere, and cannot exceed 8 percent for all others. Secondary or vacation homes are not eligible, but qualified rental properties may be eligible under the Small Business Administration's business disaster loan program, which offers loans of up to $2 million. According to FEMA guidance, these loans can serve as a source of cost share if HMA grants are disbursed early enough; however, the differing award timelines often make these funding sources incompatible.
Further, disaster loans may not be eligible in conjunction with HMA funds due to duplication of benefits, but general-purpose Small Business Administration loans are not subject to this restriction, according to FEMA.

Other Federal and Nonfederal Programs Fund Acquisitions

In addition to FEMA's three HMA programs, other federal, state, and local programs have helped acquire properties.

Community Development Block Grants. In addition to its use as a cost-share complement to HMA grants, states and communities can use CDBG Disaster Recovery funding as a stand-alone source of property acquisition funds, according to the Department of Housing and Urban Development. Availability of CDBG Disaster Recovery funds is subject to supplemental appropriations following a presidential disaster declaration and must be used in response to that specific disaster. CDBG Disaster Recovery funds are disbursed to state and local governments and not to individuals directly. However, the governmental recipient can award CDBG Disaster Recovery funds to private citizens, nonprofits, economic development organizations, businesses, and other state agencies. The Bipartisan Budget Act of 2018 appropriated funding for CDBG, of which the Department of Housing and Urban Development allocated almost $6.9 billion for CDBG mitigation funds for the first time, as a result of the 2015 to 2017 disasters. Unlike CDBG Disaster Recovery funds, which the recipient must use in response to a specific disaster, recipients may use CDBG Mitigation funds to mitigate risks from future disasters.

U.S. Army Corps of Engineers' National Nonstructural Committee. The Army Corps of Engineers (Corps) conducts a range of mitigation measures through the National Nonstructural Committee, including acquisitions, elevations, relocations, and floodplain mapping. Nonstructural refers to measures that attempt to mitigate the consequences of floods, as opposed to structural measures intended to prevent floods from occurring.
According to the Corps, except for limited research funding, it does not offer grants for flood risk management projects, and large projects generally require specific authorization from Congress. However, the Corps' Continuing Authority Program allows it to execute smaller projects at its discretion. For example, for one of the programs, the federal government funds 65 percent of a project's cost, and the project sponsor must provide all land, easement, rights-of-way, relocations, and disposal areas required for the project. The sponsor's cost share includes credit for provision of the requirements above and pre-approved work-in-kind, but at least 5 percent must be provided in cash.

Department of Agriculture's Natural Resources Conservation Service Emergency Watershed Protection Program. The Federal Agriculture Improvement and Reform Act of 1996 enables the Emergency Watershed Protection Program to purchase floodplain easements on residential and agricultural land for flood mitigation purposes and to return the land to its natural state. For agricultural and residential land, this program pays up to the entire easement value and also funds property demolition or relocation, according to the Department of Agriculture. Land generally must have flooded in the past year or twice within the previous 10 years to be considered eligible.

State and local acquisition programs. While state and local governments are active participants in federal acquisition projects, some have also developed their own acquisition programs. These programs vary on the extent to which they rely on federal funds, if at all. For example:

The Harris County Flood Control District, a special purpose district in Texas, acquired about 3,100 properties between 1985 and 2017, according to a 2018 report from Rice University, using a combination of FEMA grants, Corps funds, and local dollars.
Charlotte-Mecklenburg Storm Water Services, a joint city-county utility in North Carolina, has acquired more than 400 homes since 1999. Initially, it primarily used federal funds, but now it uses almost solely stormwater fees and other local revenue to fund acquisitions. The utility's Quick Buys program allows it to acquire properties soon after a flood, before homeowners invest in repairs, whereas federal acquisitions often occur after property owners have begun rebuilding, according to FEMA officials.

New Jersey, through its Blue Acres program, plans to acquire up to 1,300 properties damaged by Superstorm Sandy. The program has used state funds, including $36 million in bonds, as well as more than $300 million in federal funding received from multiple agencies.

FEMA Has Funded the Mitigation of Many Properties, but the Number of Repetitive Loss Properties Continues to Rise

Most Flood Mitigation Spending Is Used for Property Acquisitions after Flooding Occurs

Since 1989, the primary means by which FEMA has mitigated flood risk at the property level has been by funding property acquisitions. Acquisitions accounted for about 75 percent of FEMA's $5.4 billion in flood mitigation spending, adjusted for inflation, from 1989 to 2018 (see fig. 3). Most of the remaining spending was used to elevate properties, with smaller amounts used to floodproof and relocate properties. The average federal cost-per-property was $136,000 for acquisitions and $107,000 for elevations, according to 2008-2014 FEMA data. As seen in figure 4, FEMA-funded property acquisitions have fluctuated over time but have generally increased since FEMA's HMA programs began. For example, from 1989 through 1992—the first four years of HMGP funding and prior to the creation of PDM and FMA—less than $8 million, adjusted for inflation, was obligated for property acquisitions each year, resulting in fewer than 200 acquisitions each year (see fig. 4).
The highest acquisition funding generally was associated with years that had significant flood events, such as Superstorm Sandy (2012) and Hurricanes Harvey, Irma, and Maria (2017). From fiscal years 1989-2018, approximately $3.3 billion of property acquisition funding, adjusted for inflation, occurred through HMGP, resulting in the acquisition of 41,458 properties (see fig. 5). HMGP represented about 90 percent of all property acquisitions and 82 percent of all acquisition funding, with PDM and FMA representing the remainder. As a result, most FEMA-funded acquisitions occurred following flood events. Most of the funding, adjusted for inflation, for HMGP's and PDM's flood mitigation projects has been for property acquisition (83 percent and 89 percent of total funds, respectively), while most FMA funding has been for elevation (49 percent).

Despite Acquisition and Other Mitigation, Nonmitigated Repetitive Loss Properties Have Increased in Number

Although FEMA mitigated more than 57,000 properties for flood risk from 1989 to 2018, including more than 46,000 through acquisition, the number of nonmitigated RL properties increased from 2009 to 2018. Figure 6 shows that this growth in the number of RL properties has outpaced efforts to mitigate their flood risk. From 2009 through 2018, FEMA's inventory of new RL properties grew by 64,101. During this period, FEMA mitigated 4,436 RL properties through its three HMA programs, and an additional 15,047 were mitigated through other federal or state programs. As a result, the number of nonmitigated RL properties increased by 44,618—more than double the number of RL properties that were mitigated in that time period.

Some States Have Mitigated More Properties than Others Relative to Their Population of Repetitive Loss Properties

States varied in the extent to which they mitigated high-risk properties, including RL properties, between 1989 and 2018.
While FEMA does not require a property to be an RL property to receive flood mitigation funding, the number of properties mitigated by a state relative to its population of RL properties provides context to its flood mitigation progress. For example, some states with large numbers of RL properties, such as Texas, Louisiana, Florida, and New York, mitigated few properties relative to their numbers of RL properties (see table 2). Other states, such as Missouri and North Carolina, have far fewer RL properties but have mitigated more properties relative to their numbers of RL properties. States also varied in their methods for flood mitigation (see table 2). For example, while property acquisition accounted for 81 percent of mitigated properties nationwide, it represented closer to half of mitigated properties in Virginia, New Jersey, and Florida and only 19 percent in Louisiana. According to some FEMA and local officials, high property values in some regions can make acquisitions cost prohibitive and other mitigation methods such as elevation more attractive because they do not incur the cost of purchasing the land. Many other factors could affect mitigation, including homeowners’ preferences. Further, the voluntary nature of FEMA’s HMA programs may limit states’ ability to acquire properties with known flood risk. According to FEMA, acquisition permanently addresses flood risk because, unlike elevation or floodproofing, it moves individuals and structures away from flood risk rather than mitigating a structure in place. In a subsequent report, we plan to explore in more detail the factors, including homeowner demand for acquisition, that have affected the extent to which states have used acquisition to mitigate flood risk. 
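The repetitive loss bookkeeping and the state comparisons above reduce to simple arithmetic. The sketch below uses the 2009-2018 totals cited in the text; the state-level inputs at the end are illustrative placeholders, not actual data:

```python
# Arithmetic behind the 2009-2018 RL inventory figures cited in the text.
new_rl_properties = 64_101    # new RL properties added to FEMA's inventory
mitigated_hma = 4_436         # RL properties mitigated through the HMA programs
mitigated_other = 15_047      # mitigated through other federal or state programs

total_mitigated = mitigated_hma + mitigated_other
net_increase = new_rl_properties - total_mitigated
print(total_mitigated, net_increase)  # 19483 44618

# The net increase is indeed more than double the number mitigated:
assert net_increase > 2 * total_mitigated

# Hypothetical ratio used to compare states (placeholder inputs, not data):
def mitigation_ratio(properties_mitigated, rl_property_count):
    """Properties a state mitigated relative to its RL property count."""
    return properties_mitigated / rl_property_count

print(round(mitigation_ratio(2_000, 4_000), 2))  # 0.5
```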
While Property Acquisitions Help Reduce Flood Risk for Properties, Insufficient Premium Revenue Perpetuates Fiscal Exposure

NFIP represents a fiscal exposure to the federal government because its premium rates have not kept pace with the flood risk of the properties it insures. Addressing this imbalance would mean reducing the flood risk of the insured properties, increasing premium revenue, or some combination of both. Despite FEMA's efforts to mitigate its insured properties' flood risk, premium rates for many properties do not reflect the full estimated risk of loss. As we have reported previously, mitigation alone will not be sufficient to resolve NFIP's financial challenges; structural reforms to the program's premium rates will also be necessary.

Recent Catastrophic Flood Events and Projections Indicate Potential Increases in Flood Risk

NFIP's total annual flood claim payments have grown in recent years, potentially indicating an increase in flood risk. For example, the eight years of the highest annual NFIP claims have all occurred since 2004, with particularly catastrophic flood events accounting for much of these claims:

In 2005, claims reached $17.8 billion ($23.3 billion, adjusted for inflation), largely due to Hurricanes Katrina, Rita, and Wilma.

In 2012, claims reached $9.6 billion ($10.7 billion, adjusted for inflation), largely due to Superstorm Sandy.

In 2017, claims reached $10.5 billion ($11.0 billion, adjusted for inflation), largely due to Hurricanes Harvey, Irma, and Maria.

These severe weather events appear to be contributing to the long-term increases in claims paid by NFIP, as would be expected with infrequent but severe events. As seen in figure 7, the amount of claims paid per policy, adjusted for inflation, does not show a steady increase in claims but rather substantial spikes in certain years associated with catastrophic flooding events.
RL properties have contributed heavily to NFIP's claims and, as noted earlier, the number of RL properties continues to rise despite FEMA's mitigation efforts. Of the $69.7 billion in claims NFIP paid out from 1978 to 2019, $22.2 billion was for flood damage sustained by RL properties (32 percent). The frequency and intensity of extreme weather events, such as floods, are expected to increase in coming years due to climate change, according to the U.S. Global Change Research Program and the National Academies of Sciences. Further, numerous studies have concluded that climate change poses risks to many environmental and economic systems and a significant financial risk to the federal government. For example, according to the November 2018 National Climate Assessment report, the continued increase in the frequency and extent of high-tide flooding due to sea level rise threatens America's trillion-dollar coastal property market. According to the National Oceanic and Atmospheric Administration, minor flood events (sometimes referred to as nuisance flooding) also are projected to become more frequent and widespread due to climate change.

Several Categories of Premium Rates Do Not Fully Reflect Flood Risk

While the exact extent to which flood risk has changed and will continue to change is uncertain, NFIP's fiscal exposure will persist as long as premium rates do not keep pace with flood risk. As we have been reporting since 1983, NFIP's premium rates do not reflect the full risk of loss because of various legislative requirements and FEMA practices. To set premium rates, FEMA considers several factors, including location in flood zones, elevation of the property relative to the community's base flood elevation, and characteristics of the property, such as building type, number of floors, presence of a basement, and year built relative to the year of the community's original flood map.
Most NFIP policies have premium rates that are deemed by FEMA to be full-risk rates, which FEMA defines as sufficient to pay anticipated losses and expenses. However, FEMA's overall rate structure may not reflect the full long-term estimated risk of flooding, as discussed below.

Subsidized rates. NFIP offers some policyholders subsidized rates—that is, rates that intentionally do not reflect the full risk of flooding. These premium rates are intended to encourage the widespread purchase of flood insurance by property owners and encourage floodplain management by communities. Subsidized rates generally are offered to properties in high-risk locations (special flood hazard areas) that were built before flood maps were created. FEMA staff said they have begun increasing rates for certain subsidized properties as prescribed under the Biggert-Waters Flood Insurance Reform Act of 2012 and the Homeowner Flood Insurance Affordability Act of 2014. In addition, the percentage of subsidized policies is decreasing. According to FEMA data, the percentage of NFIP policies receiving subsidized rates dropped from about 22 percent in July 2013 to about 17 percent in June 2019. In 2013, we recommended that FEMA obtain elevation information to determine full-risk rates for subsidized properties. As of January 2020, FEMA had not fully implemented this recommendation but was in the process of doing so. For example, FEMA had requested proposals from third-party vendors for obtaining the elevation information and was reviewing these proposals. This information remains necessary for FEMA to determine the adequacy of its premium rates and the costs of any subsidization. It will also allow Congress and the public to understand the amount of unfunded subsidization within the program and the federal fiscal exposure it creates.

Grandfathered rates.
FEMA allows some property owners whose properties are remapped into higher-risk flood zones to continue to pay the premium rate from the lower-risk zone. FEMA data show that about 9 percent of NFIP policies were receiving a grandfathered rate as of June 2019. In 2008, we recommended that FEMA collect data to analyze the effect of grandfathered policies on NFIP's fiscal exposure. As of February 2020, FEMA officials said they had not fully implemented this recommendation but were in the process of doing so. The officials told us they had finished collecting data on grandfathered policies and that they planned to analyze it as they completed efforts to update their premium rate setting approach. Collection and analysis of data on grandfathered policies will help FEMA understand and communicate the extent to which these policies are contributing to NFIP's fiscal exposure.

Rates designated full-risk. As we reported in 2008 and 2016, it is unclear whether premiums FEMA considers to be full-risk actually reflect the full long-term estimated risk of loss. For example, NFIP full-risk premium rates do not fully reflect the risk of catastrophic losses or the expenses associated with managing them. Private insurers typically manage catastrophic risk using capital, reinsurance, and other instruments, such as catastrophe bonds, and include the associated expenses in premium rates. By contrast, FEMA has traditionally managed catastrophic risk by relying on its authority to borrow from Treasury. In January 2017, FEMA began purchasing reinsurance to transfer some of its flood risk exposure to the private reinsurance market. However, FEMA has not accounted for these expenses in setting its NFIP premium rates. Reinsurance could be beneficial because it would allow FEMA to recognize some of its flood risk and the associated costs up front through the premiums it must pay to the reinsurers rather than after the fact in borrowing from Treasury.
However, because reinsurers must charge FEMA premiums to compensate for the risk they assume, reinsurance's primary benefit would be to manage risk rather than to reduce NFIP's expected long-term fiscal exposure.

Insufficient Premium Revenue Contributes to NFIP's Fiscal Exposure

Congress has directed FEMA to provide discounted premium rates to promote affordability for policyholders but did not provide FEMA with dedicated funds to pay for these subsidies. As a result, premium revenue has been insufficient to pay claims in some years, requiring borrowing from Treasury to make up for the shortfall. While Congress passed reforms to NFIP in 1994 and 2004, neither set of actions sufficiently addressed program revenue. In 2005, Hurricanes Katrina, Rita, and Wilma hit the Gulf Coast and resulted in NFIP borrowing nearly $17 billion from Treasury to pay claims (see fig. 8). In July 2012, Congress passed the Biggert-Waters Flood Insurance Reform Act, which contained significant reforms to NFIP's premium rates. But a few months later, Superstorm Sandy occurred, pushing NFIP's debt to $24 billion. Following policyholders' concerns about the rate increases authorized by the 2012 act, Congress slowed the pace of many of these rate increases in 2014 with the Homeowner Flood Insurance Affordability Act. In the fall of 2017, Hurricanes Harvey, Irma, and Maria occurred, prompting additional borrowing from Treasury and causing NFIP to reach its borrowing limit. In response, Congress canceled $16 billion of NFIP's debt in October 2017, which allowed NFIP to pay claims from these storms. Since September 2017, NFIP has been operating under a series of short-term authorizations, the most recent of which expires in September 2020. As of March 2020, NFIP's debt remained at $20.5 billion. To improve NFIP's solvency and enhance the nation's resilience to flood risk, we suggested in 2017 that Congress could make comprehensive reforms that include actions in six areas.
We reported that it was unlikely that FEMA would be able to repay its debt and that addressing it would require Congress to either appropriate funds or eliminate the requirement that FEMA repay the accumulated debt. However, eliminating the debt without addressing the underlying cause of the debt—insufficient premium rates—would leave the federal taxpayer exposed to a program requiring repeated borrowing. To address NFIP’s fiscal exposure, there are two general approaches: decrease costs or increase revenue. Decreasing costs to the program in the form of claims involves mitigating insured properties’ flood risks. Mitigation can be very costly, but there will be some properties for which the cost to mitigate will be outweighed by the benefit of reduced flood risk and, ultimately, fiscal exposure. Mitigation may be a cost-effective option for those properties for which full-risk rates would be cost-prohibitive. Increasing revenue would require reforms to NFIP’s premium rates. FEMA has begun increasing rates on subsidized properties. But, as we suggested in 2017, Congress could remove existing legislative barriers to FEMA’s premium rate revisions. Members of Congress and others have raised concerns about such reforms because raising premium rates may make coverage unaffordable for some policyholders. To address these concerns, we suggested that all policies include full-risk premium rates, with targeted, means-based, appropriated subsidies for some policies. This would improve the program’s solvency while also addressing affordability concerns. Assigning full-risk premium rates to all policies would remove subsidies from those who do not need them, helping improve solvency. It would also more accurately signal the true flood risk to property owners and enhance resilience by incentivizing mitigation measures, such as acquisition. 
Means-based subsidies would ensure that property owners who needed help would get it, and an explicit appropriation for the subsidies would make their true cost transparent to taxpayers. We maintain that a comprehensive approach that includes mitigation and rate reform is needed to address NFIP's fiscal exposure.

Concluding Observations

Because several categories of NFIP premium rates do not reflect the full risk of flood loss, FEMA has had to borrow $36.5 billion from Treasury to pay claims from several catastrophic flood events since 2005. To address this, some have suggested additional funding to mitigate RL properties. While we acknowledge that mitigation is part of the solution, we maintain that a more comprehensive approach is necessary to address the program's fiscal exposure. We have made two recommendations to FEMA that, if implemented, could help inform Congress' efforts to reform NFIP. In 2008, we recommended that FEMA collect information on grandfathered properties and analyze their financial effect on NFIP, and in 2013, we recommended that FEMA obtain elevation information on subsidized properties. By implementing these recommendations, FEMA would better understand NFIP's fiscal exposure and be able to communicate this information to Congress. Further, we suggested in 2017 that Congress take a comprehensive approach to reforming NFIP. One important first step would be to implement full-risk premium rates for all policies, with appropriated means-based subsidies for some policies. Full-risk premium rates would remove subsidies from those who do not need them, helping improve solvency, and also more accurately signal the true flood risk to property owners and incentivize efforts to mitigate flood risk. Further, means-based subsidies would ensure that property owners who need help will get it, and having Congress explicitly appropriate for the subsidies would make the true cost of the subsidy transparent to taxpayers.
While this would be an important step to putting NFIP on a sustainable path, comprehensive reform of the program should also address the other issues we have identified, including mitigating the flood risk of insured properties.

Agency Comments

We provided a draft of this report to the Department of Homeland Security for its review and comment. The agency provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Acting Secretary of Homeland Security, and other interested parties. In addition, the report is available at no charge on the GAO website at https://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or cackleya@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

Appendix I: Objectives, Scope, and Methodology

This report addresses the Federal Emergency Management Agency's (FEMA) National Flood Insurance Program (NFIP). Our objectives were to examine (1) funding programs available for property acquisitions, (2) FEMA's flood mitigation efforts, and (3) factors contributing to NFIP's fiscal exposure. To describe funding programs available for property acquisitions, we reviewed authorizing legislation, the Code of Federal Regulations, and FEMA guidance and manuals, including the Hazard Mitigation Assistance Guidance and Cost Share Guide, to identify program characteristics, eligibility requirements, and application guidelines. To identify funding for these programs, we analyzed FEMA's project-level Hazard Mitigation Assistance (HMA) data from its Enterprise Applications Development Integration and Sustainment system, which FEMA uses to track mitigation projects funded through its HMA grant programs.
To summarize Increased Cost of Compliance coverage, which NFIP policyholders can use to fund mitigation efforts, we analyzed FEMA's NFIP claims database to identify the number and amount of such claims. We also interviewed the FEMA officials responsible for administering these grant programs. Further, we identified other federal agency programs that can fund property acquisitions or meet cost share requirements and reviewed their authorizing legislation and their relevant federal regulations. Finally, to identify examples of state and local programs that have been used to fund property acquisitions, we reviewed academic reports, including from the University of North Carolina and Rice University. To review FEMA's flood mitigation efforts, we analyzed FEMA's project-level HMA data from the "Mitigation Universe" of its Enterprise Applications Development Integration and Sustainment system. We analyzed several variables in this dataset, including number of properties, federal share obligated, mitigation type category, grant program area, grant program fiscal year, and state. For the analyses by mitigation type category, we excluded projects (79 percent of the total records) that did not include a flood mitigation activity (those with values of "Other" or "Pure Retrofit"). Of the remaining records, 98 percent were "Pure," meaning all properties within each project were of a single mitigation method type (acquisition, elevation, floodproof, or relocation). The remaining 2 percent were "Mixed," indicating a project contained at least one acquisition and at least one elevation but could also contain other mitigation methods. For analyses by grant program area, we treated projects funded through the Severe Repetitive Loss and Repetitive Flood Claims grant programs as being part of the Flood Mitigation Assistance program and projects funded through the Legislative Pre-Disaster Mitigation program as being part of the Pre-Disaster Mitigation program.
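As a rough sketch, the filtering and recoding steps described above might look like the following. The record keys and values are assumptions based on the description in the text, not FEMA's actual schema:

```python
# Hypothetical sketch of the dataset preparation described above;
# record keys and values are illustrative, not FEMA's actual schema.
NON_FLOOD = {"Other", "Pure Retrofit"}
PROGRAM_RECODE = {
    "Severe Repetitive Loss": "FMA",               # treated as part of FMA
    "Repetitive Flood Claims": "FMA",              # treated as part of FMA
    "Legislative Pre-Disaster Mitigation": "PDM",  # treated as part of PDM
}

def prepare_projects(records):
    """Drop non-flood projects and recode legacy grant programs."""
    prepared = []
    for rec in records:
        if rec["mitigation_type"] in NON_FLOOD:
            continue  # excluded: not a flood mitigation activity
        rec = dict(rec)  # copy, to avoid mutating the caller's data
        program = rec["grant_program"]
        rec["grant_program"] = PROGRAM_RECODE.get(program, program)
        prepared.append(rec)
    return prepared

# Example: one flood project recoded to FMA, one non-flood project dropped.
sample = [
    {"mitigation_type": "Pure", "grant_program": "Repetitive Flood Claims"},
    {"mitigation_type": "Other", "grant_program": "HMGP"},
]
print(prepare_projects(sample))
```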
For data on the number of flood mitigated properties, we used the final number of properties mitigated by a project. For data on funding, we used the federal share of the project’s obligated funding. To analyze mitigated and nonmitigated repetitive loss (RL) properties, we summarized FEMA’s RL property mitigation report, which tracked the cumulative number of RL properties by year from June 2009 through June 2018. To describe the number of RL properties by state, we analyzed FEMA’s list of RL properties as of August 31, 2019, which included every property that at any point FEMA had designated as an RL property under any of its three definitions. The list included properties that had since been mitigated, as well as those that are no longer insured by NFIP. To examine factors contributing to NFIP’s fiscal exposure, we analyzed FEMA’s claims dataset as of September 30, 2019. This dataset includes the more than 2 million claims paid to NFIP policyholders since the beginning of the program. We excluded records whose status was “open” or “closed without payment.” Further, we excluded records whose year of loss was before 1978 because FEMA officials told us that that was the first year they considered their claims data to be reliable and complete. To identify factors that contribute to NFIP’s fiscal exposure and illustrate how this fiscal exposure has materialized and changed over time, we reviewed several of our previous reports and the Department of the Treasury’s statements of public debt. Finally, to summarize how flood risk could change in the future, we reviewed our previous reports on climate change. In general, we adjusted for inflation any dollar figures that we compared or aggregated across multiple years and indicated this accordingly. To do this, we used the Bureau of Labor Statistics’ Consumer Price Index for All Urban Consumers. 
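A minimal sketch of that inflation adjustment, assuming illustrative CPI-U index values rather than actual Bureau of Labor Statistics data:

```python
# Sketch of adjusting dollar figures across years with the CPI-U.
# The index values below are illustrative placeholders, not BLS data.
def adjust_for_inflation(amount, from_year, to_year, cpi):
    """Scale an amount by the ratio of the target and source CPI-U values."""
    return amount * cpi[to_year] / cpi[from_year]

cpi = {2005: 195.3, 2018: 251.1}  # placeholder annual-average index values
claims_2005 = 17.8e9              # e.g., 2005 NFIP claims in nominal dollars
print(adjust_for_inflation(claims_2005, 2005, 2018, cpi))
```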
To assess the reliability of all of the datasets we analyzed for this report, we requested and reviewed preliminary versions of the data and accompanying data dictionaries. We used the data dictionary to identify potential variables for use in our analyses and output statistics on these variables (e.g., frequencies of values, number of blanks or zero values, minimum, maximum, and mean) to identify any potential reliability concerns such as outliers or missing values. We met with relevant FEMA officials to discuss each of the datasets to understand how FEMA collected, used, and maintained the data; the reliability and completeness of key variables; reasons for any potential discrepancies we identified; and whether our understanding of the data and approach to analyzing them were accurate and reasonable. After these meetings, we requested updated versions of the data and updated our analyses accordingly. We determined that all data elements we assessed were sufficiently appropriate and reliable for this report’s objectives.

We conducted this performance audit from January 2019 to June 2020 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Significant Events and GAO Reports Related to the National Flood Insurance Program’s Fiscal Exposure

January 1983: We recommended that FEMA improve its rate-setting process to ensure adequate income for NFIP and suggested that Congress either limit FEMA’s borrowing for extraordinary losses or establish an emergency fund for such losses, and pay for NFIP subsidies with appropriations.
March 1994: We found that NFIP’s premium income was insufficient to meet expected future losses because of subsidized rates and suggested that Congress consider how any changes in premium rates would affect policyholder participation.

September 1994: National Flood Insurance Reform Act. Developed a mitigation assistance program and expanded the mandatory purchase requirement.

June 2004: Flood Insurance Reform Act. Authorized grant programs to mitigate properties that experienced repetitive flooding losses.

August-October 2005: Hurricanes Katrina, Rita, and Wilma. Caused $17.1 billion in NFIP claims. FEMA debt to Treasury increased to $16.9 billion in fiscal year 2006.

March 2006: We added NFIP to our high-risk list.

October 2008: We recommended that FEMA collect data to analyze the effect of grandfathered policies on NFIP’s fiscal exposure.

November 2008: We identified three options for addressing the financial impact of subsidies: increasing mitigation efforts; eliminating or reducing subsidies; and targeting subsidies based on need.

June 2011: We suggested that Congress allow NFIP to charge full-risk premium rates to all property owners and provide assistance to some categories of owners to pay those premiums.

July 2012: Biggert-Waters Flood Insurance Reform Act. Required FEMA to increase rates for certain subsidized properties and grandfathered properties; create an NFIP reserve fund; and improve flood risk mapping.

October 2012: Superstorm Sandy. Caused $8.8 billion in NFIP claims. FEMA debt to Treasury increased to $24 billion in fiscal year 2013.

February 2013: We added limiting the federal government’s fiscal exposure by better managing climate change risks to our high-risk list.

July 2013: We recommended that FEMA obtain elevation information to determine full-risk rates for subsidized policyholders.

March 2014: Homeowner Flood Insurance Affordability Act.
Reinstated certain rate subsidies removed by the Biggert-Waters Flood Insurance Reform Act of 2012; established a new subsidy for properties that are newly mapped into higher-risk zones; restored grandfathered rates; and created a premium surcharge that would be deposited into the NFIP reserve fund.

October 2014: We recommended that FEMA amend NFIP minimum standards for floodplain management to encourage forward-looking construction and rebuilding efforts that reduce long-term risk and federal exposure to losses.

July 2015: We recommended that the Mitigation Framework Leadership Group establish an investment strategy to identify, prioritize, and guide federal investments in disaster resilience and hazard mitigation-related activities.

August-October 2016: Hurricane Matthew and Louisiana floods. Caused $3.1 billion in NFIP claims. FEMA debt to Treasury increased to $24.6 billion in early fiscal year 2017.

April 2017: We suggested that Congress make comprehensive reforms to NFIP that include actions in six areas: (1) addressing the debt; (2) removing legislative barriers to full-risk premium rates; (3) addressing affordability; (4) increasing consumer participation; (5) removing barriers to private-sector involvement; and (6) protecting NFIP flood resilience efforts.

August-September 2017: Hurricanes Harvey, Irma, and Maria. Caused $10 billion in NFIP claims. FEMA reached the limit of its Treasury borrowing authority of $30.4 billion.

September 2017: NFIP’s last long-term authorization ended, resulting in a string of short-term reauthorizations.

October 2017: Congress canceled $16 billion of NFIP’s debt to enable FEMA to continue paying flood claims. This reduced FEMA’s debt to Treasury to $20.5 billion.

March 2020: FEMA’s debt to Treasury remained at $20.5 billion.

September 2020: NFIP’s current short-term authorization ends.
Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Patrick Ward (Assistant Director), Christopher Forys (Analyst in Charge), Emily Bond, Christina Cantor, William Chatlos, Eli Dile, Lijia Guo, Holly Halifax, Laura Ann Holland, Yann Panassie, Stephen Ruszczyk, Jessica Sandler, Joseph Silvestri, Jena Sinkfield, and Kelsey Wilson made key contributions to this report.
Why GAO Did This Study

NFIP has faced significant financial challenges over the years, highlighted by a rise in catastrophic flood events and its $20.5 billion debt to Treasury. Contributing to these challenges are repetitive loss properties—those that have flooded and received a claim payment multiple times. Acquiring and demolishing these properties is one alternative to paying for repeated claims, but questions exist about the cost, efficiency, and effectiveness of this approach. GAO was asked to review FEMA's property acquisition efforts as a means of addressing NFIP's financial challenges. This report examines (1) funding programs available for acquisitions, (2) FEMA's flood mitigation efforts, and (3) factors contributing to NFIP's fiscal exposure. To conduct this work, GAO reviewed FEMA guidance and other documentation; analyzed FEMA data sets related to NFIP policies and claims, repetitive loss properties, and mitigation projects; and interviewed FEMA officials.

What GAO Found

The Federal Emergency Management Agency (FEMA) administers three grant programs that can fund efforts to mitigate the flood risk of properties insured by the National Flood Insurance Program (NFIP). Together, these three programs funded $2.3 billion in mitigation projects from fiscal years 2014 through 2018. The largest program's funding is tied to federal recovery dollars following presidential disaster declarations, while the other two programs are funded each year through congressional appropriations. States and localities generally must contribute 25 percent of the cost of a mitigation project, but some other federal program funds can be used for that purpose. One example of such a project is property acquisition—purchasing a high-risk property from a willing property owner, demolishing the structure, and converting the property to green space.
From 1989 to 2018, FEMA helped states and localities mitigate more than 50,000 properties; however, the number of nonmitigated repetitive loss properties (generally meaning those that flooded at least twice in 10 years) has grown. Mitigation efforts varied by state. Property acquisition accounted for about 80 percent of mitigated properties nationwide, but, in some states, elevation (raising a structure) was more commonly used. In addition, some states (e.g., Missouri and North Carolina) mitigated a high number of properties relative to their numbers of repetitive loss properties, while others (Florida, New York, Louisiana, and Texas) mitigated a low number. While these efforts can reduce flood risk and claim payments, the federal government's fiscal exposure from NFIP remains high because premium rates do not fully reflect the flood risk of its insured properties. NFIP has experienced several catastrophic flood events in recent years, and the frequency and severity of floods is expected to increase. However, NFIP's premium rates have not provided sufficient revenue to pay claims. As a result, FEMA still owed Treasury $20.5 billion as of March 2020, despite Congress canceling $16 billion of debt in 2017. As GAO has reported in the past (GAO-17-425), Congress will need to consider comprehensive reform, including mitigation and structural changes to premium rates, to ensure NFIP's solvency.

What GAO Recommends

GAO suggested in GAO-17-425 that Congress make comprehensive reforms to NFIP to improve the program's solvency. Given NFIP's continued debt growth, GAO maintains that comprehensive reform warrants consideration.
Background

TSOs and Agency Roles for Their Training

TSA is the primary federal agency responsible for implementing and overseeing the security of the nation’s civil aviation system and, in general, is responsible for ensuring that all passengers and belongings transported by passenger aircraft to, from, within, or overflying the United States are adequately screened. Over 43,000 TSOs stationed across the nation’s approximately 440 commercial airports are responsible for inspecting individuals and belongings to deter and prevent passengers from bringing prohibited items on board an aircraft or into the airport sterile area. Within TSA, two offices—T&D and Security Operations—are to work together to manage TSOs and ensure their training is current and relevant. T&D is responsible for developing initial and ongoing training curricula for TSOs based in part on TSA’s standard operating procedures that govern how TSOs screen passengers and baggage. Security Operations is responsible for allocating TSO staff to airports, and scheduling TSO work hours and training availability. Within Security Operations, FSDs are responsible for overseeing security operations at the nation’s commercial airports, many overseeing multiple airports within a specific geographic area. FSDs report to one of three executive directors, who in turn are responsible for annually assessing FSD performance, including oversight of TSO training.

TSO Training Requirements

TSA’s screener training comprises a compendium of courses that includes basic training for initial hires, recurrent training, remedial training, and return-to-duty training. The National Training Plan specifies annual training requirements and contains the core curriculum for TSOs, including the classes and hours required for TSOs to complete.
In accordance with the Aviation and Transportation Security Act, screeners must complete a minimum of 40 hours of classroom instruction and 60 hours of on-the-job training, and must successfully complete an on-the-job training examination. Until 2016, new TSOs completed these training requirements at or near their home airports through the New Hire Training Program. In January 2016, TSA centralized this training under the TSO Basic Training Program at the TSA Academy in Glynco, Georgia. Further, in August 2018, TSA launched the first phases of TSO Career Progression, in which new hire screeners receive local training and gain experience in a limited number of screening functions before advancing to the next stage of training at the TSA Academy, roughly around the four-month mark.

In 2015, in response to the DHS Office of Inspector General covert test findings that highlighted areas of concern in the passenger screening process, TSA implemented a TSO re-training effort, beginning with a nationwide training called “Mission Essentials—Threat Mitigation.” According to TSA, this training provided the opportunity for the TSO workforce to become familiar with the threat information that underlies TSA’s use of checkpoint technologies and operational procedures to mitigate risks.

Federal Training Evaluation Requirements and Training Evaluation Models

In 2009, OPM developed and published regulations that require agencies to evaluate training programs annually. According to the regulations, these training evaluations are to help agencies determine how well such plans and programs contribute to mission accomplishment and meet organizational performance goals. One commonly accepted training evaluation model, endorsed by OPM and commonly used in the federal government to evaluate training, is known as the Kirkpatrick model.
The Kirkpatrick model consists of a four-level approach for soliciting feedback from training course participants and evaluating the impact the training had on individual development, among other things. The following describes what each level within the Kirkpatrick model is to accomplish:

Level 1: The first level measures the training participants’ reaction to, and satisfaction with, the training program. A level 1 evaluation could take the form of a course survey that a participant fills out immediately after completing the training.

Level 2: The second level measures the extent to which learning has occurred because of the training effort. A level 2 evaluation could take the form of a written exam that a participant takes during the course.

Level 3: The third level measures how training affects changes in behavior on the job. Such an evaluation could take the form of a survey sent to participants several months after they have completed the training to follow up on the impact of the training on the job.

Level 4: The fourth level measures the impact of the training program on the agency’s mission or organizational results. Such an evaluation could take the form of comparing operational data before, and after, a training modification.

TSA Revised Screener Training to Address Risks Identified through Covert Tests and Emerging Threats

Since 2015, TSA’s T&D has developed and updated TSO training programs in response to findings from covert tests and reporting on emerging threats that identified risks to aviation security. T&D uses an online database to track results from covert tests and reporting on emerging threats, and any changes to training that T&D makes as a result. According to T&D data from May 2015 through June 2019, T&D officials reviewed 62 risks that warranted a review for a potential change to training, and 56 of the risks led officials to make training changes across its TSO curriculum. Overall, T&D made changes affecting 40 different training courses.
Based on our review of TSO training curriculum from May 2015 through June 2019, we found that changes T&D made to its TSO training took many forms. In some cases, T&D changed training to place additional emphasis on a certain aspect of a current standard operating procedure or provide context on the importance of following it. For example, in 2019, T&D updated its instructor-led course—”Mission Essentials: Resolution Tools and Procedures”—to address covert tests where TSOs failed to detect simulated explosive devices hidden in bags or concealed on individuals at checkpoints. The training included a review of methods terrorists may use to plan and carry out attacks in order to emphasize the importance of following the standard operating procedure. The updated training also included leading practices for searching belongings and a discussion of issues that may affect a TSO’s ability to detect threat items hidden in belongings or on individuals. In fiscal year 2019, T&D also updated instructor-led courses on its explosives detection system for checked baggage to respond to covert test findings that TSOs failed to detect certain simulated explosive devices. The updated training included images of simulated explosives hidden in checked bags that replicated scenarios similar to the failed covert tests. In other cases, T&D developed TSO training in response to new or updated standard operating procedures for using technologies. T&D officials said that for this type of TSO training, they wait until TSA’s Requirements, Capabilities, and Analysis office updates or establishes new standard operating procedures for using new technologies and then develop training based on the revisions. For example, T&D developed TSO training to cover the differences between a prior and updated version of the standard operating procedure for screening passengers and belongings at security checkpoints.
T&D included curriculum to cover the major changes in the standard operating procedure and incorporated additional training to address a covert test in which TSOs failed to detect a simulated explosive device at a screening checkpoint. T&D also developed training for TSOs who check passenger IDs and travel documents. The training focused on updates to the standard operating procedure and included procedures specific to the 2005 REAL ID Act, which TSA will fully implement in 2020. Additionally, T&D incorporated this new training to address covert tests that had found issues with identifying false or fraudulent travel documents.

In addition to updating or developing new training involving instructor-led courses, TSA responded to identified risks by developing or updating job aids or briefings for TSOs. For example, TSA developed the “It’s Not the Container” briefing in 2017 to address risks highlighted by an attempted attack in Australia and included tactics used to conceal explosives in benign items. The briefing provided best practices for using screening technologies to identify concealed explosives, which aligns with current standard operating procedures. T&D also developed the “Electronics vs. Electrical Devices Job Aid”—covering how TSOs should handle the devices at checkpoints—which instructors circulated during classroom training and provided to TSOs at the screening checkpoints.

TSA Uses Established Models for Updating and Evaluating TSO Training and Has Followed Leading Practices

TSA uses established models and processes for updating and evaluating TSO training, and these processes follow leading practices for training and evaluation development. TSA updates its trainings using a training development process that can be segmented into five broad, interrelated elements, and is typically referred to as the ADDIE model. The elements include (1) analysis, (2) design, (3) development, (4) implementation, and (5) evaluation.
In our prior work, we have found that these five elements of the ADDIE model help to produce a strategic approach to federal agencies’ training and development efforts. See figure 1 for how T&D aligns its training development process with the ADDIE model. T&D’s guidance and our prior work on federal agency training development identify various leading practice attributes for developing training. Such attributes include that the training development process: (1) is formal and based on industry recognized standards; (2) provides the ability to update training based on changing conditions and, if necessary, quickly; (3) includes mechanisms to ensure programs provide training that addresses identified needs; (4) ensures measures of effectiveness are included in training programs; (5) prevents duplication of effort and allows for consistent messaging; (6) allows for stakeholder feedback; (7) provides for continuous evaluation of effort; and (8) includes mechanisms to ensure training programs are evaluated. We found that T&D’s training development process incorporates all of the identified leading practice attributes, as shown in table 1. Two examples of TSA’s implementation of selected leading practice attributes are that T&D (1) has methods for updating training quickly, if needed, and (2) has mechanisms to ensure TSO training is evaluated. Specifically:

Methods to quickly update training. In alignment with the leading practice that agencies should have a process to enable quick updates to training to respond to changing conditions, T&D has alternative processes to develop and deliver training to TSOs faster than the approximately 6 months its standard process takes to develop or revise training. For example, in 2018, T&D formalized a set of alternative processes to rapidly develop and deliver training to TSOs. One such alternative is for T&D to use its Rapid Response process, which allows for a response time to the field of 72 hours.
Additional options include the Rapid Update/Revision or Rapid Development (Priority Training) processes to allow for a new training to be issued in approximately 30 days. T&D officials said that the rapid development processes are used when an issue, such as an emerging threat, requires a response in days or weeks. T&D’s guidance outlines situations when these processes are appropriate for use and provides checklists to help T&D personnel follow key steps.

Mechanisms to help ensure evaluations of training effectiveness. T&D has mechanisms for ensuring it evaluates the effectiveness of its TSO training programs. In particular, T&D uses the Kirkpatrick model to evaluate its training and, according to its policy, all of its courses are to be evaluated at Level 1 of the model, which measures training participants’ reaction to, and satisfaction with, the training. T&D is also to plan course evaluations for each training during the curriculum development process, determine the formal review cycle, and include it in the curriculum development paperwork. According to its policy, T&D must complete a curriculum review at least once every 5 years, but may do so at shorter intervals. During the curriculum review, T&D examines the training to confirm the content is valid with respect to the applicable listing of tasks and competencies, current law, policy, procedures, and equipment. As a part of this process, T&D assesses participant evaluations to determine whether changes to TSO training are needed. As of October 2019, T&D’s efforts to evaluate new or updated TSO training made from May 2015 through June 2019 are in line with its policy. For example, T&D officials said they updated participant evaluations for TSO training they changed during this time period to address risks identified by covert testing and reports on emerging threats. These officials told us that they had not yet formally analyzed the results of the evaluations.
This progress is in line with T&D policy, which requires a review of each training every 5 years. We verified this by obtaining evaluations T&D collected for the six selected sample courses we reviewed. T&D provided us level 1 survey responses it had collected that measure training participants’ reaction to, and satisfaction with, the training programs for four of the courses. T&D implemented the four courses from calendar years 2015 to 2019. Based on those dates and T&D policy, T&D should complete curriculum reviews for the courses between 2020 and 2024.

TSA Monitors Training Compliance, but Its Process Does Not Look for Trends across Fiscal Years and Is Not Fully Documented

TSA relies on a database that both field and headquarters staff use to monitor TSO training compliance. According to TSA policy, TSA documents and maintains the training status of all TSOs across approximately 440 commercial airports through its Online Learning Center database. Within the database, TSA records training completion in three ways:

1. TSOs self-certify they completed the training activity, such as reading a briefing or job aid;
2. A training staff member at a commercial airport will record training completion on behalf of a TSO for instructor-led courses and on-the-job training; and
3. The database automatically records completion for training actions, such as online training.

After recording training completion, the database calculates the percentage of TSOs at a given airport who are on pace for completing their required annual training. According to TSA guidance, the agency has set its annual TSO target compliance rate at 90 percent per commercial airport. While TSA has guidance outlining roles and responsibilities for training oversight at a high level, TSA headquarters and field officials told us their processes for monitoring training compliance—including analyzing training compliance data, reporting their results, and taking action to address the results—were not documented.
Below are descriptions of these roles and responsibilities at the field and headquarters levels, based on what officials from each level told us.

TSA personnel in the field have various responsibilities for overseeing training compliance:

FSDs. FSDs, who oversee operations at one or more airports, have the primary responsibility for ensuring that TSOs within the airports they oversee have fulfilled their training requirements. FSDs are assessed on training compliance among TSOs at their respective airports during their annual performance reviews. All seven FSDs we interviewed said they use the Online Learning Center database to verify that TSOs are on track for meeting their training requirements. Further, these FSDs said they meet regularly with their on-site training staff to discuss how training is going and whether TSOs are at risk of not meeting their training requirements.

Executive Directors. Executive Directors oversee the FSDs who work within their respective portfolios and discuss training compliance with the FSDs during their annual performance review. To monitor FSDs’ efforts, Executive Directors also review data from TSA’s Online Learning Center database on TSO training compliance for airports within their area of responsibility. According to an Executive Director we spoke with, if an Executive Director notices that TSO training compliance rates for an airport whose FSD they oversee are lower than the 90 percent compliance target, he or she may reach out to the FSD to obtain information on the causes and discuss an action plan to improve training compliance.

TSA personnel at headquarters also have various responsibilities for overseeing training compliance:

T&D. T&D officials said that on a monthly basis they analyze TSO training compliance data from TSA’s Online Learning Center database to identify how TSOs nationwide are meeting requirements and whether there may be trends that indicate a need for changes to training during the fiscal year.
For example, officials told us that in fiscal year 2019 they noticed that airports were generally behind in meeting annual training requirements and determined this was due to the effects of the federal government shutdown. In response, they stated they adjusted the duration of some training courses to shorten the amount of time it would take TSOs to complete the training within the remainder of the fiscal year.

Security Operations. Security Operations tracks individual airport progress toward meeting TSA’s annual 90 percent compliance target. Security Operations officials said they receive and review monthly training compliance reports from T&D. They are responsible for analyzing the data to monitor whether airports are on pace toward meeting the annual TSO training compliance target. For example, TSA has set the required training completion pace goal at 8.3 percent per month for each commercial airport—so that, by maintaining this pace, TSOs at each airport will have completed their required annual training by the end of the fiscal year. Officials told us that if they identify instances where an airport’s overall TSO training compliance rate for a given month is below this goal during the course of a fiscal year, they will reach out to the FSD responsible. They will provide the FSD a point of contact at a comparable airport with higher compliance rates to share best practices for addressing the issue.

While TSA headquarters officials from Security Operations and T&D are responsible for analyzing and addressing TSO training compliance, they focus on monthly airport progress toward the 90 percent TSO training target, rather than annual changes in compliance rates. In particular, they do not look back at prior year airport compliance data to assess whether airports did not meet the compliance target across fiscal years, and whether they require corrective action at the headquarters level.
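The monthly pace check described above amounts to comparing each airport's cumulative completion rate against 8.3 percent (100 percent divided by 12 months) times the months elapsed. A hedged sketch of that comparison; the airport codes and rates are invented for illustration, and TSA's actual systems and data layout differ:

```python
# Illustrative monthly pace check; not TSA's actual implementation.
MONTHLY_PACE = 100 / 12  # about 8.3 percent of annual training per month

def behind_pace(completion_by_airport, months_elapsed):
    """Return airports whose cumulative completion percent trails the pace goal."""
    goal = MONTHLY_PACE * months_elapsed
    return sorted(a for a, pct in completion_by_airport.items() if pct < goal)

# Six months into the fiscal year, the pace goal is roughly 50 percent.
midyear = {"AAA": 55.0, "BBB": 41.2, "CCC": 50.0}
print(behind_pace(midyear, 6))
```

An airport flagged by such a check would, per the process described above, prompt outreach to the responsible FSD.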
However, we reviewed annual TSO training compliance data across fiscal years for each of the 435 commercial airports that reported data from fiscal years 2016 through 2018. We found that while all airports met TSA’s 90 percent training compliance target in fiscal years 2016 and 2017, the compliance rates for five airports dropped well below 90 percent in 2018. These five airports’ TSO compliance rates dropped 15 to 26 percentage points from their reported compliance rate in 2017. T&D and Security Operations headquarters officials said they were not aware that five airports had not met TSA’s TSO training compliance target in fiscal year 2018, nor the causes for it. Headquarters officials said that they did not identify this development because their focus is on monthly nationwide trends, rather than instances of noncompliance at individual airports across fiscal years, which field officials would be responsible for addressing. However, unlike headquarters officials, field officials do not have the visibility to identify if or when such noncompliance may be occurring across other commercial airports, or whether it may indicate a broader issue. For example, the five airports whose TSO compliance rates dropped significantly between fiscal years 2017 and 2018 varied by size and location. As a result, FSDs and Executive Directors would generally not have been aware that other airports experienced noncompliance or been in a position to determine whether the noncompliance was due to related reasons. Based on TSA’s process, TSA headquarters officials from T&D and Security Operations are best positioned to identify training compliance trends and their causes when they occur, as they have visibility into training compliance data across the agency in a way that field officials do not.
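A cross-fiscal-year check of the kind described here could be sketched as follows; the airport codes, rates, and 15-point threshold are invented for the example (the threshold echoes the smallest drop observed), and nothing here reflects TSA's actual data:

```python
# Illustrative cross-fiscal-year compliance check; data are made up.
TARGET = 90.0
DROP_THRESHOLD = 15.0  # flag drops of 15 percentage points or more

def flag_compliance_drops(by_year):
    """Flag airports that fell below target with a large year-over-year drop.

    by_year maps fiscal year -> {airport: compliance percent}.
    """
    years = sorted(by_year)
    flags = []
    for prev, curr in zip(years, years[1:]):
        for airport, rate in by_year[curr].items():
            prior = by_year[prev].get(airport)
            if prior is None:
                continue  # airport did not report in the prior year
            if rate < TARGET and prior - rate >= DROP_THRESHOLD:
                flags.append((curr, airport, round(prior - rate, 1)))
    return flags

data = {
    2017: {"AAA": 97.0, "BBB": 95.5},
    2018: {"AAA": 71.0, "BBB": 94.0},
}
print(flag_compliance_drops(data))
```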
Headquarters officials from T&D and Security Operations told us the field-level processes for overseeing training compliance are not documented because TSA has intentionally given field officials the flexibility to manage TSO workload and training to meet the individual needs of their airports. They said they did not document their processes at the headquarters level because they already understood what to do and were not required to document the analysis results. However, the headquarters officials said there may be a benefit to documenting the headquarters process to ensure consistency in how they carry out the process in the event of attrition.

Standards for Internal Control in the Federal Government calls for agencies to develop and maintain documentation of their internal control system. This documentation allows management to retain organizational knowledge and communicate that knowledge to external parties. This documentation of controls is also evidence that controls are identified, can be communicated to those responsible for their performance, and can be monitored and evaluated by the entity. Moreover, internal control standards state that internal control monitoring should generally be designed to ensure that ongoing monitoring occurs in the course of normal operations to ensure that known weaknesses are resolved.

By documenting its headquarters process for monitoring TSO training compliance—including its process for analyzing monthly training compliance data, the results of its analyses, and actions taken in response—TSA could better ensure its headquarters staff are aware of their responsibilities for overseeing TSO training compliance and consistently carry these responsibilities out as staff change over time.
Additionally, by monitoring for instances of TSO noncompliance at individual airports across fiscal years in its analysis of training compliance data, TSA headquarters would be better positioned to determine whether they constitute a trend warranting corrective action at the headquarters level.

Conclusions

TSOs’ ability to perform their duties effectively in screening passengers and their belongings is crucial to the security of the nation’s aviation system. While TSA has made updates to its TSO training programs to address risks identified in covert testing, additional actions could improve its processes for monitoring TSO training compliance so that the agency can identify and address any potential training issues. In particular, by documenting its process for monitoring TSO training compliance—including those for analyzing monthly training compliance data, reporting the results of its monitoring efforts, and taking action to address potential issues—TSA could help ensure that all of the various officials responsible for monitoring training compliance, including new staff over time, understand the process and can consistently implement it. Further, by monitoring for instances of airport TSO non-compliance across fiscal years in its analysis of training compliance data, TSA would be better positioned to ensure that it is aware of potential trends so it may determine whether corrective action at the headquarters level is warranted.

Recommendations for Executive Action

We are making the following two recommendations to TSA: The TSA Administrator should direct T&D and Security Operations to document their processes for monitoring TSO training compliance—including those for analyzing training compliance data, reporting the results from their analysis, and actions taken to address the results. 
(Recommendation 1)

The TSA Administrator should direct T&D and Security Operations to monitor for instances of TSO non-compliance by individual commercial airports across fiscal years that could potentially warrant corrective action at the headquarters level. (Recommendation 2)

Agency Comments

We provided a draft of our report to DHS for review and comment. In its comments, reproduced in appendix I, DHS concurred with both of our recommendations. DHS also provided technical comments, which we incorporated as appropriate. With respect to our first recommendation that TSA document its process for monitoring TSO training compliance, DHS stated that, among other things, Security Operations will collaborate with T&D to develop and maintain an internal control mechanism that will document responsibilities at the field and headquarters level for monitoring TSO training completion compliance, and actions taken to address the results. With respect to our second recommendation that TSA monitor for instances of TSO noncompliance by individual commercial airports across fiscal years, DHS stated that T&D and Security Operations will begin monitoring trends in non-compliance at individual airports and for specific courses. Further, T&D has developed an internal website to share its findings with Security Operations through monthly compliance reports. We are sending this report to the appropriate congressional committees and to the acting Secretary of Homeland Security. In addition, this report is available at no charge on the GAO website at http://gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-8777 or russellw@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. 
Appendix I: Comments from the Department of Homeland Security

Appendix II: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Jason Berman (Assistant Director), Julia Vieweg (Analyst-in-Charge), Benjamin Crossley, Elizabeth Dretsch, Michael Dworman, Eric Hauswirth, Susan Hsu, Tom Lombardi, and Heidi Nielson made key contributions to this report.
Why GAO Did This Study

TSA is responsible for screening millions of airline passengers and their baggage each day at the nation's commercial airports for items that could threaten aircraft and passengers. In carrying out its mission, TSA requires its screener workforce to complete various trainings on screening procedures and technologies. TSA updated its security screening procedures and technologies in recent years to address risks identified through covert tests in 2015 and reports of emerging threats. The TSA Modernization Act of 2018 included a provision for GAO to examine the effectiveness of TSA's updated screener training. This report addresses: (1) changes TSA made to screener training since 2015; (2) how TSA updates and evaluates screener training; and (3) how TSA ensures screener compliance with training requirements. GAO analyzed TSA documentation on training development, compliance monitoring, and a non-generalizable sample of six recently updated training courses—selected to reflect a range of training types and topics. GAO also reviewed TSA data on airport screener training compliance rates from fiscal years 2016 through 2018, and interviewed TSA officials.

What GAO Found

Since 2015, the Department of Homeland Security's (DHS) Transportation Security Administration (TSA) developed and updated screener training to address potential risks to commercial airports identified through covert testing and reports on emerging threats. From May 2015 through June 2019, TSA identified 62 potential risks that warranted review for a potential change in training. TSA made training changes in response to 56 of the identified risks—affecting 40 different training courses. TSA also responded to risks by developing or updating job aids or briefings for screeners. TSA uses established models for developing, updating, and evaluating its screener training. 
The figure below shows TSA's process for updating and evaluating its screener training, in accordance with a training development model that is widely accepted and used across the federal government. TSA relies on an online database to monitor screener compliance in completing required training at the nation's commercial airports. However, TSA has not documented its process for monitoring screener training compliance, including for analyzing compliance data and reporting and addressing instances of noncompliance at airports. Moreover, while TSA monitors airport compliance rates in a given year, it does not analyze the data across fiscal years for potential trends in noncompliance by individual airports that may warrant corrective action at the headquarters level. GAO found that in fiscal years 2016 and 2017, screeners at 435 commercial airports met TSA's 90 percent target compliance rate, while in 2018, five airports had compliance rates well below this target, dropping 15 to 26 percentage points from the prior year. TSA officials stated they were unaware of this development. By documenting its screener training compliance monitoring process and monitoring screener training compliance data across fiscal years, TSA would be better positioned to ensure it is aware of potential noncompliance trends warranting corrective action at the headquarters level.

What GAO Recommends

GAO is making two recommendations, including that TSA (1) document its process for monitoring screener training compliance and (2) monitor screener compliance data across fiscal years. DHS concurred with the recommendations.
Background

Intimate Partner Violence

Data from CDC’s 2015 NISVS indicate that about 43.6 million women (36.4 percent) and 37.3 million men (33.6 percent) in the United States have experienced sexual violence, physical violence, and stalking by an intimate partner. Approximately 21.4 percent of women and 14.9 percent of men in the United States experienced severe physical violence by an intimate partner. About 30 million women (25.1 percent) and 12 million men (10.9 percent) reported experiencing some effect from the violence. (See fig. 1 for the most commonly reported effects of intimate partner violence, as reported by NISVS.) Intimate partner violence can also result in death. Data from U.S. crime reports suggest that 16 percent of homicide victims (about one in six) are killed by an intimate partner. Strangulation victims, in particular, are at greater risk for being killed, according to the Training Institute on Strangulation Prevention. Research has shown that certain factors increase the risk that someone may experience intimate partner violence. For example, a review of research on risk factors for women who experience intimate partner violence identified younger age, less education, unemployment, pregnancy, childhood victimization, and mental illness as being associated with higher rates of intimate partner violence. Exposure to intimate partner violence between a child’s parents or caregivers is also associated with a greater risk of intimate partner violence in adulthood, according to CDC. Adults with disabilities are also at a higher risk of violence than those without disabilities. However, research indicates that victims of intimate partner violence may be less likely than others to obtain medical or other services. Even when services are obtained, victims may be less likely than others to identify the source or extent of their injuries out of fear for their safety or reprisal. 
Brain Injuries

Brain injuries, including those that may result from intimate partner violence, can have several causes, including physical trauma and strangulation; can range in severity; and can result in a number of health consequences. TBI refers to a brain injury caused by external physical force, such as a blow to the head or shaking of the brain. Anoxic (a complete disruption of oxygen to the brain) or hypoxic (a partial disruption of oxygen to the brain) brain injury may result from strangulation or other pressure applied to the neck that restricts blood flow and air passage. TBIs and anoxic or hypoxic brain injuries may result in irreversible psychological and physical harm. Specifically, people who suffer from TBI and anoxic or hypoxic brain injuries may experience cognitive symptoms, including depression and memory loss, as well as behavioral symptoms, such as changes in mood or difficulty sleeping, among others. The symptoms individuals experience can also vary. The signs and symptoms of an anoxic or hypoxic brain injury from strangulation can be similar to those of mild TBI, which is often referred to as a concussion. (See fig. 2.) According to the Brain Injury Association of America, a severe brain injury can be clearly identified by reviewing an individual’s symptoms, but when the brain injury is mild or moderate, providers may need to conduct further assessments or screening to diagnose the brain injury. According to NIH, providers have several options for assessing brain injury that can help determine the severity of the injury. For example, providers may evaluate a person’s level of consciousness and the severity of brain injury by attempting to elicit body movements, opening of the eyes, and verbal responses. Providers may also evaluate an individual’s speech and language skills or cognitive capabilities. 
Role of HHS and DOJ in Addressing Intimate Partner Violence

Both HHS and DOJ support activities for individuals affected by intimate partner violence through several of their agencies. Within HHS, for example, ACF provides federal funding to support emergency shelter and services for the victims of domestic violence and their dependents, as well as the National Domestic Violence Hotline. CDC provides grants to state and local entities to develop programs aimed at preventing intimate partner violence. Additionally, HRSA—which provides funding to federally qualified health centers—provides funding to develop educational materials for health care workers, in partnership with ACF, to increase the number of individuals screened for intimate partner violence and referred to treatment services, among other things. DOJ, through its Office of Justice Programs and Office on Violence Against Women, conducts research and provides funding to help states, local governments, and nonprofit organizations develop programs to reduce violence against women. Many DOJ programs aim to strengthen responses at the local, state, tribal, and federal levels to domestic violence, dating violence, sexual assault, and stalking. Further, the Violence Against Women Reauthorization Act of 2013 amended federal laws to establish criminal penalties for strangulation or suffocation. Additionally, DOJ increased its support of activities focused on training to recognize and prosecute strangulation.

Role of HHS in Addressing TBI

HHS agencies also conduct work related to recognizing and responding to TBI. For example, NIH funds research aimed at developing knowledge about the brain and nervous system in order to reduce the effect of brain-related diseases on individuals. In addition, CDC conducts research on the prevention of TBIs, and ACL provides grants to states to help them to support individuals with brain injuries and to promote the rights of, and provide advocacy support to, those living with TBI. 
Efforts to Provide Education, Screen for, or Treat Brain Injuries Resulting from Intimate Partner Violence

We identified 12 initiatives led by non-federal entities that focused on (1) education on brain injuries resulting from intimate partner violence by developing materials or offering training; (2) screening victims of intimate partner violence for potential brain injuries; or (3) treatment involving individuals with brain injuries resulting from intimate partner violence. Our list represents initiatives identified during the course of our review and may not be exhaustive. Some of these initiatives focused on only TBI or strangulation, while others focused on both. See appendix II for additional information on the initiatives.

Training for Domestic Violence Program Staff

Domestic violence program advocates we spoke to told us that before they participated in the Ohio Domestic Violence Network training, they knew their clients were having a hard time remembering things or getting their thoughts across; however, they did not know this could be the result of a brain injury. The training helped advocates identify signs and symptoms in their clients and make others aware of these symptoms. For example, advocates told us they may inform a prosecutor that a client may have a brain injury and may have difficulty remembering or sharing their experiences.

The Ohio Domestic Violence Network—as a part of its Connect, Acknowledge, Respond, Evaluate (CARE) initiative—trained staff at five domestic violence programs on brain injuries, and developed educational materials for shelter staff to share with intimate partner violence victims, according to network officials. For example, we spoke to staff at a domestic violence program in Ohio who told us how the education they received from the network helped them identify the signs and symptoms of brain injury in their clients. 
Staff from another domestic violence program in Ohio told us as a result of CARE training they now suggest strategies to clients to assist them with their memory issues, such as writing appointment information on a whiteboard or in a planner. The Swedish Hospital Violence Prevention Program, in Illinois, provided education to physicians, medical residents, and hospital staff to increase health care provider and staff awareness of and ability to respond to brain injuries among victims of intimate partner violence, according to officials with the initiative. The Safe Futures initiative, in Connecticut, developed strangulation training materials for emergency medical personnel, law enforcement, prosecutors, and providers, as well as hosted trainings throughout Connecticut on intimate partner violence and brain injuries, according to officials with the initiative. Screening. Six of the 12 initiatives used screening tools to identify potential brain injuries among intimate partner violence victims, according to officials. Based on our review of documentation from these initiatives, we found that the screening tools generally had a series of questions about injuries to the head, the loss of consciousness, or behavior changes—symptoms that may indicate a potential brain injury. For example: Officials from three initiatives that screened victims for potential brain injuries reported using a version of the HELPS screening tool. (See fig. 4 for an example of a modified version of this screening tool used by one initiative.) Officials from one initiative told us that screening typically occurred at domestic violence shelters where staff and advocates receive training on how to screen intimate partner violence victims. Officials from the other three initiatives told us they developed their own screening methods. 
For example, staff at the Maricopa County Collaboration on Concussions in Domestic Violence in Arizona screen victims using a tool that measures near point of convergence, which refers to an individual’s ability to focus both eyes on a target, an approach that can be used to detect a concussion. Police officers from two participating departments in Arizona have used this tool to screen individuals when they respond to a domestic violence call, according to officials with the collaboration.

Treatment. Two of the 12 initiatives included a treatment component. Officials with the Barrow Concussion and Brain Injury Center in Arizona and the Northside Hospital Duluth Concussion Institute in Georgia told us they provided treatment to victims who were referred by local domestic violence shelters. Providers affiliated with one of these initiatives told us that treatment for brain injuries resulting from intimate partner violence does not differ from treatment for other brain injuries. A provider with one of these initiatives said that treatment could include exercises and movements that decrease dizziness, vertigo, and imbalance; occupational, physical, or speech therapies; or treatment for pain management.

An Intimate Partner Violence Victim’s Brain Injury Treatment

Jane Doe was abused by her partner. An advocate at a domestic violence shelter screened Jane for a brain injury and referred her for assessment. She was diagnosed and began treatment for a brain injury. Jane Doe told us that the treatments she received, which included nerve blockers—often used by neurologists to lessen chronic pain—helped to relieve the persistent headaches and debilitating migraines she experienced in the aftermath of her abuse. She told us that as a result of the treatment she received, she feels better able to function. 
Officials from the Barrow Concussion and Brain Injury Center told us that individuals with brain injuries resulting from intimate partner violence may face a longer period of recovery compared to others with brain injuries, in part, because of living in unsafe home environments. As a result, special considerations are sometimes needed due to additional barriers faced by domestic violence victims. For example: Victims may need safety planning and housing. As a part of the Barrow Concussion and Brain Injury Center’s domestic violence initiative, a social worker will help ensure that victims’ other needs are met. Officials from the Northside Hospital Duluth Concussion Institute noted that transportation could also be a barrier for victims of intimate partner violence. As such, the Georgia Department of Public Health’s Injury Prevention Program, which partnered with the Northside Hospital Duluth Concussion Institute, planned to use CDC grant funding to provide domestic violence victims transportation from area shelters to the concussion institute for treatment. Officials from the Barrow Concussion and Brain Injury Center also told us about other considerations, such as the need to have a flexible appointment policy to account for the possibility of victims missing or canceling appointments. Of the 12 initiatives we identified, eight received federal grants from HHS or DOJ, while officials from the other four initiatives told us they were funded with state, local, or private dollars. According to HHS and DOJ officials, the grants did not have specific requirements to address the intersection of brain injuries and intimate partner violence. However, based on our review of documentation, the eight initiatives used the federal funds to focus on the intersection of these two issues. Six of these eight initiatives received funding from HHS. 
Of them, four were funded by HRSA or ACL grants that focused on TBI-related services and activities, and two were funded by CDC grants focused on injury and violence prevention activities. The other two initiatives were funded by DOJ’s Office of Justice Programs through grants that provide funds to support victims of crime. In addition to the federal funding received by some of the 12 initiatives, we identified other efforts and grants funded by HHS and DOJ. These efforts made educational materials on intimate partner violence and brain injuries accessible online, made ad-hoc or internal trainings available to external parties, or provided education that touched on the connection between intimate partner violence and brain injury, according to HHS and DOJ officials. For example: ACF has funded the National Resource Center on Domestic Violence and Futures Without Violence’s National Health Resource Center on Domestic Violence, which provide information related to intimate partner violence and brain injuries via websites. ACF, in collaboration with HRSA, funded an effort led by Futures Without Violence, which includes some information on TBI and strangulation in trainings for select state leadership teams working to address intersections of health, intimate partner violence, and human trafficking. DOJ’s Office on Violence Against Women provided grant funds to support the Training Institute on Strangulation Prevention, which offers training to individuals and outside entities to help them understand, recognize, and appropriately serve strangulation victims, as well as investigate and prosecute strangulation cases. DOJ’s Office on Violence Against Women has also provided grant funds used by local organizations, such as police departments, to provide ad-hoc or internal training activities on brain injuries and to serve victims with brain injuries, including those caused by strangulation. 
Data on the Overall Prevalence of Brain Injuries Resulting from Intimate Partner Violence Are Limited; Improved Data Could Help Target HHS Public Health Efforts

Based on our review of the literature, as well as interviews with HHS officials and other non-federal stakeholders, we found that data on the overall prevalence of brain injuries resulting from intimate partner violence are limited. Specifically, available data do not provide an overall estimate of the prevalence of brain injuries resulting from intimate partner violence nationwide. While there are studies that estimate the prevalence of these injuries, these studies are also limited. Specifically, among the 28 articles we reviewed, six included an objective to estimate the prevalence of brain injuries resulting from intimate partner violence, while the remaining 22 articles examined other areas, such as health effects or awareness of brain injuries resulting from intimate partner violence, but did not have an objective to estimate prevalence. The six articles are also specific to a certain subpopulation or certain geographic locations and used different approaches to identify individuals with brain injuries. As a result, the range of reported prevalence rates on victims of intimate partner violence with brain injuries (brain injuries caused by trauma or strangulation) varied greatly (from 11 percent to about 79 percent) and were based on a range of sample sizes, from 95 people to about 1,000 people. HHS agencies also have some data collection and research efforts related to this issue; however, these efforts are limited as well. For example, CDC and NIH have efforts that may assist in better understanding the connection between brain injuries and intimate partner violence, but CDC’s efforts do not account for all causes of brain injuries and NIH has only one study focused on this connection. 
Further, HHS agencies treat brain injuries and intimate partner violence as separate public health issues and pursue their efforts separately—which limits their ability to better understand the connection between the issues and the overall prevalence of brain injuries that result from domestic violence. CDC officials told us that the agency’s data on the connection between brain injuries and intimate partner violence are limited, but the agency plans to address some of the limitations. For example, the officials said CDC analyzes health care claims data from emergency department visits to determine the causes of TBI. However, CDC officials told us that these data likely underestimate TBI among victims of intimate partner violence, because many do not seek medical care; for domestic violence victims who seek care, providers are unlikely to designate the individual as a victim of intimate partner violence. CDC also collects data on intimate partner violence through its NISVS. According to CDC reports, NISVS data are a key source of information on intimate partner violence, but the survey does not collect data on all types of brain injuries related to intimate partner violence. For example, the NISVS estimates the prevalence of victims of intimate partner violence who have been “knocked out after getting hit, slammed against something, or choked.” However, published estimates are based on responses to a survey question that asks individuals about being “knocked out,” which is a colloquial term commonly used to indicate a loss of consciousness. CDC officials stated that in most known incidents of mild brain injury, people do not lose consciousness. As a result, NISVS data likely understate the number of intimate partner violence victims who may have brain injuries. 
In order to better estimate TBIs resulting from intimate partner violence, CDC officials told us they plan to add a survey question to the NISVS to ask respondents about whether they have experienced a concussion—a common term for mild forms of TBI—due to a current or ex-partner. CDC officials told us that they have begun initial testing on several aspects of the survey, including on the additional question, with the goal to begin data collection by the end of 2022, plans which are pending approval. Once the NISVS data are collected and analyzed, CDC officials said the data could help them provide a nationally representative prevalence estimate of intimate partner violence victims who experienced a TBI in their lifetimes. However, adding the question to the NISVS may not ensure that these data can provide a comprehensive estimate of the prevalence of brain injuries resulting from intimate partner violence. In particular, the NISVS question will focus on TBIs, and will not account for individuals with brain injuries caused by strangulation. According to educational materials developed by the Training Institute of Strangulation Prevention and used by HRSA in the training of providers and advocates, more than two-thirds of intimate partner violence victims are strangled at least once. CDC officials told us that they are able to measure acts of choking or suffocation through the NISVS, but this measure cannot be used to account for brain injuries resulting from strangulation. Additionally, CDC officials told us that the agency’s priority is to focus on TBI specifically rather than accounting for other brain injuries. Despite the focus on TBIs, CDC officials told us the NISVS data are not designed to examine whether intimate partner violence is a leading cause of TBI in comparison with other causes, such as sports or motor vehicle crashes. 
CDC officials said that some research and NISVS data suggest that intimate partner violence is not as large a contributor of TBIs when compared to other contributors. However, they noted that they do not have data on the proportion of TBIs resulting from intimate partner violence. Absent the ability to compare intimate partner violence as a cause of TBI against other contributors through the NISVS or other representative studies, CDC officials will continue to lack an understanding of the full scope of TBIs, their primary causes, and who is affected by them. NIH officials identified two agency efforts that could help improve what is known about the connection between brain injuries and intimate partner violence. NIH began funding a study in September 2019 that will use advanced brain imaging, blood analyses, and cognitive and psychological testing to study the effects of multiple brain injuries on women subjected to intimate partner violence. The objectives of the study are not to measure prevalence, but to examine the health effects of brain injuries resulting from intimate partner violence. NIH officials told us that this is the first study funded by NIH using brain images to investigate brain injuries resulting from intimate partner violence. NIH is also developing blood biomarkers—which are clinical diagnosis tools—for identifying mild TBI. Currently, mild TBI is generally diagnosed by taking an inventory of symptoms, but symptoms can lead to misdiagnoses, including for mental illness or a substance use disorder. NIH officials said they are in the initial stages of developing these biomarkers, which could take the place of screening tools in diagnosing a brain injury. While this effort was not initiated to better understand brain injuries among victims of intimate partner violence, biomarkers have the potential to improve the identification of TBIs, provided they are applied to domestic violence victims. 
Two other HHS agencies—ACL and HRSA—also have efforts that address brain injuries or intimate partner violence. However, these agencies’ efforts are generally not focused on the connection of the two issues, so they are not likely to result in more complete data on the prevalence of brain injuries resulting from intimate partner violence. Specifically: ACL provides grants to states to establish support services for individuals with brain injuries through its TBI State Partnership Program. As part of these efforts, ACL officials told us that they have begun to gather information to determine how many TBI grant recipients are using the funds to support particular populations, including individuals with TBI resulting from intimate partner violence. As of December 2019, ACL officials told us that two states (Idaho and Iowa) have used the grants to focus on individuals with TBI as a result of intimate partner violence. HRSA has proposed an effort to collect data that may assist in further understanding the health consequences of intimate partner violence. As part of its strategy to address intimate partner violence, HRSA officials recently began requiring federally qualified health centers to capture International Classification of Diseases-10 codes for intimate partner violence on health care claims beginning in 2020. This effort is not aimed at the intersection of intimate partner violence and brain injuries; the purpose of this data collection is to better understand the effect of intimate partner violence on victims’ health outcomes. While these data may currently underestimate the number of individuals affected by intimate partner violence, HRSA officials told us that their goal in collecting these data is to underscore the significance of intimate partner violence and help position providers to assist victims. 
Further, knowing the prevalence of brain injuries resulting from intimate partner violence and using these data could help officials further target education campaigns to providers on the potential injuries associated with intimate partner violence. Officials from HHS agencies acknowledge that the lack of prevalence data on brain injuries resulting from intimate partner violence is a challenge in addressing the intersection of these issues. However, HHS and its agencies do not have a plan for how they would collect better prevalence data, including a plan that specifies the extent to which HHS agencies should collaborate on data collection efforts. Although HHS agencies have some efforts underway, these efforts are limited or do not examine the connection between the issues. For example, CDC is working to add a question to NISVS to improve what is known about the prevalence of TBIs among victims of intimate partner violence; however, this effort overlooks brain injuries resulting from strangulation—which HRSA reports is often also experienced by these victims—because CDC’s priorities are to focus on TBIs specifically. Further, the newly funded NIH study is not intended to estimate the overall prevalence of brain injuries resulting from intimate partner violence. Having complete data on the prevalence of brain injuries resulting from intimate partner violence could strengthen HHS’s efforts to address this public health issue. HHS and its agencies acknowledge that enhancing the health and well-being of Americans is critical to their public health mission and intimate partner violence and TBIs are both prominent injury and violence issues. As part of this mission, CDC uses its Public Health Approach to guide its public health related efforts. The first step of this approach is to define the problem, which includes collecting prevalence data to understand the magnitude of the problem, where the problem exists, and whom it affects. 
According to CDC, such data are critical to ensuring that resources are focused on the individuals most in need. Collecting data on the prevalence of brain injuries resulting from intimate partner violence is a critical first step. With better data comes a better understanding of the overall prevalence of brain injuries resulting from intimate partner violence. This would give HHS and its agencies the information necessary to inform their efforts and allocate resources, including grant funding, to address victims of brain injuries resulting from intimate partner violence. Conclusions Intimate partner violence affects over 30 percent of women and men in the United States, and research has raised concerns about brain injuries sustained by these domestic violence victims. Officials from HHS agencies acknowledge the lack of overall prevalence data on brain injuries resulting from intimate partner violence and the adverse effect this lack of data has on understanding the intersection of these two issues. While HHS agencies have some efforts underway to address brain injuries and intimate partner violence, they are limited and address these issues separately. Therefore, HHS and its agencies have missed an opportunity to improve their public health efforts to address this issue, particularly the prevalence of the problem, where the problem exists, and whom it affects. By working together, HHS and its agencies can identify ways that each agency’s efforts could result in better prevalence data and a better overall understanding of brain injuries resulting from intimate partner violence. Improved data, in turn, could also help ensure that federal resources are allocated to the appropriate areas and used as efficiently and effectively as possible to address this public health issue. 
Recommendation We are making the following recommendation to HHS: The Secretary of HHS should develop and implement a plan to improve data collected on the prevalence of brain injuries resulting from intimate partner violence and use these data to inform HHS’s allocation of resources to address the issue. (Recommendation 1) Agency Comments We provided a draft of this report to HHS and DOJ for review and comment. In its written comments (reproduced in app. III), HHS concurred with our recommendation and noted that it is coordinating a plan among its relevant agencies to augment data collection on the prevalence of brain injuries resulting from intimate partner violence. HHS noted that these data will continue to inform the needs of this vulnerable population. HHS and DOJ also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, Secretary of Health and Human Services, Attorney General, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-7114 or yocomc@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. Appendix I: Description of Literature Review and Bibliography We identified articles for our literature review through a search of bibliographic databases, including Harvard Library Think Tank Search, MEDLINE, and Scopus, using terms such as “intimate partner violence,” “domestic violence,” “traumatic brain injury,” and “strangulation.” We determined there were 57 relevant articles from 2009 through August 2019 discussing brain injuries resulting from intimate partner violence. 
We reviewed the 57 articles to examine brain injuries resulting from intimate partner violence, including background information on the concerns about brain injuries resulting from intimate partner violence and challenges that researchers may have identified in conducting this work. Of the 57 articles, we identified 28 that had conducted their own data analyses. We analyzed these 28 articles to examine data on prevalence rates, as well as research on health effects, treatment, and screening tools for identifying brain injuries resulting from intimate partner violence. The following articles are based on an original analysis of data. Brown, Joshua, Dessie Clark, and Apryl E. Pooley. “Exploring the Use of Neurofeedback Therapy in Mitigating Symptoms of Traumatic Brain Injury in Survivors of Intimate Partner Violence.” Journal of Aggression, Maltreatment & Trauma, vol. 28, no. 6 (2019): 764-783. Campbell, Andrew M., Ralph A. Hicks, Shannon L. Thompson, and Sarah E. Wiehe. “Characteristics of Intimate Partner Violence Incidents and the Environments in Which They Occur: Victim Reports to Responding Law Enforcement Officers.” Journal of Interpersonal Violence (2017): 1-24. Campbell, Jacquelyn C., Jocelyn C. Anderson, Akosoa McFadgion, Jessica Gill, Elizabeth Zink, Michelle Patch, Gloria Callwood, and Doris Campbell. “The Effects of Intimate Partner Violence and Probable Traumatic Brain Injury on Central Nervous System Symptoms.” Journal of Women’s Health, vol. 27, no. 6 (2018): 761-767. Cimino, Andrea N., Grace Yi, Michelle Patch, Yasmin Alter, Jacquelyn C. Campbell, Kristin K. Gunderson, Judy T. Tang, Kiyomi Tsuyuki, and Jamila K. Stockman. “The Effect of Intimate Partner Violence and Probable Traumatic Brain Injury on Mental Health Outcomes for Black Women.” Journal of Aggression, Maltreatment & Trauma, vol. 28, no. 6 (2019): 714-731. Crowe, Allison, Christine E. Murray, Patrick R. Mullen, Kristine Lundgren, Gwen Hunnicutt, and Loreen Olson. 
“Help-Seeking Behaviors and Intimate Partner Violence-Related Traumatic Brain Injury.” Violence and Gender, vol. 6, no. 1 (2019): 64-71. Gagnon, Kelly L., and Anne P. DePrince. “Head Injury Screening and Intimate Partner Violence: A Brief Report.” Journal of Trauma & Dissociation, vol. 18, no. 4 (2017): 635-644. Higbee, Mark, Jon Eliason, Hilary Weinberg, Jonathan Lifshitz, and Hirsch Handmaker. “Involving Police Departments in Early Awareness of Concussion Symptoms during Domestic Violence Calls.” Journal of Aggression, Maltreatment & Trauma, vol. 28, no. 7 (2019): 826-837. Hunnicutt, Gwen, Christine Murray, Kristine Lundgren, Allison Crowe, and Loreen Olson. “Exploring Correlates of Probable Traumatic Brain Injury among Intimate Partner Violence Survivors.” Journal of Aggression, Maltreatment & Trauma, vol. 28, no. 6 (2019): 677-694. Hux, Karen, Trish Schneider, and Keri Bennett. “Screening for traumatic brain injury.” Brain Injury, vol. 23, no. 1 (2009): 8-14. Joshi, Manisha, Kristie A. Thomas, and Susan B. Sorenson. “‘I Didn’t Know I Could Turn Colors’: Health Problems and Health Care Experiences of Women Strangled by an Intimate Partner.” Social Work in Health Care, vol. 51, no. 9 (2012): 798-814. Linton, Kristen Faye. “Interpersonal violence and traumatic brain injuries among Native Americans and women.” Brain Injury, vol. 29, no. 5 (2015): 639-643. Messing, Jill T., Kristie A. Thomas, Allison L. Ward-Lasher, and Nathan Q. Brewer. “A Comparison of Intimate Partner Violence Strangulation Between Same-Sex and Different-Sex Couples.” Journal of Interpersonal Violence, vol. 00, no. 0 (2018): 1-19. Messing, Jill T., Michelle Patch, Janet S. Wilson, Gabor D. Kelen, and Jacquelyn Campbell. “Differentiating among Attempted, Completed, and Multiple Nonfatal Strangulation in Women Experiencing Intimate Partner Violence.” Women’s Health Issues, vol. 28, no. 1 (2018): 104-111. 
Mittal, Mona, Kathryn Resch, Corey Nichols-Hadeed, Jennifer Thompson Stone, Kelly Thevenet-Morrison, Catherine Faurot, and Catherine Cerulli. “Examining Associations between Strangulation and Depressive Symptoms in Women with Intimate Partner Violence Histories.” Violence and Victims, vol. 33, no. 6 (2019): 1072-1087. Nemeth, Julianna M., Cecilia Mengo, Emily Kulow, Alexandra Brown, and Rachel Ramirez. “Provider Perceptions and Domestic Violence (DV) Survivor Experiences of Traumatic and Anoxic-Hypoxic Brain Injury: Implications for DV Advocacy Service Provision.” Journal of Aggression, Maltreatment & Trauma, vol. 28, no. 6 (2019): 744-763. Pritchard, Adam J., Amy Reckdenwald, Chelsea Nordham, and Jessie Holton. “Improving Identification of Strangulation Injuries in Domestic Violence: Pilot Data From a Researcher-Practitioner Collaboration.” Feminist Criminology, vol. 12, no. 2 (2018): 160-181. Ralston, Bridget, Jill Rable, Todd Larson, Hirsch Handmaker, and Jonathan Lifshitz. “Forensic Nursing Examination to Screen for Traumatic Brain Injury following Intimate Partner Violence.” Journal of Aggression, Maltreatment & Trauma, vol. 28, no. 6 (2019): 732-743. Reckdenwald, Amy, Ketty Fernandez, and Chelsea L. Mandes. “Improving law enforcement’s response to non-fatal strangulation.” Policing: An International Journal (2019): 1-15. Reckdenwald, Amy, Chelsea Nordham, Adam Pritchard, and Brielle Francis. “Identification of Nonfatal Strangulation by 911 Dispatchers: Suggestions for Advances Toward Evidence-Based Prosecution.” Violence and Victims, vol. 32, no. 3 (2017): 506-520. Shields, Lisa B.E., Tracey S. Corey, Barbara Weakley-Jones, and Donna Stewart. “Living Victims of Strangulation.” American Journal of Forensic Medicine and Pathology, vol. 31, no. 4 (2010): 320-325. St. Ivany, Amanda, Linda Bullock, Donna Schminkey, Kristen Wells, Phyllis Sharps, and Susan Kools. 
“Living in Fear and Prioritizing Safety: Exploring Women’s Lives After Traumatic Brain Injury From Intimate Partner Violence.” Qualitative Health Research, vol. 28, no. 11 (2018): 1708-1718. St. Ivany, Amanda, Susan Kools, Phyllis Sharps, and Linda Bullock. “Extreme Control and Instability: Insight Into Head Injury From Intimate Partner Violence.” International Association of Forensic Nursing, vol. 14, no. 4 (2018): 198-205. St. Ivany, Amanda, and Donna Schminkey. “Rethinking Traumatic Brain Injury from Intimate Partner Violence: A Theoretical Model of the Cycle of Transmission.” Journal of Aggression, Maltreatment & Trauma, vol. 28, no. 7 (2019): 1-23. Sullivan, Karen A., and Christina Wade. “Assault-Related Mild Traumatic Brain Injury, Expectations of Injury Outcome, and the Effects of Different Perpetrators: A Vignette Study.” Applied Neuropsychology: Adult, vol. 26, no. 1 (2019): 58-64. Sullivan, Karen A., and Christina Wade. “Does the Cause of the Mild Traumatic Brain Injury Affect the Expectation of Persistent Postconcussion Symptoms and Psychological Trauma?” Journal of Clinical and Experimental Neuropsychology, vol. 39, no. 4 (2017): 408-418. Valera, Eve M., Aihua Cao, Ofer Pasternak, Martha E. Shenton, Marek Kubicki, Nikos Makris, and Noor Adra. “White Matter Correlates of Mild Traumatic Brain Injuries in Women Subjected to Intimate-Partner Violence: A Preliminary Study.” Journal of Neurotrauma, vol. 36 (2019): 661-668. Valera, Eve, and Aaron Kucyi. “Brain Injury in Women Experiencing Intimate Partner-Violence: Neural Mechanistic Evidence of an ‘Invisible’ Trauma.” Brain Imaging and Behavior, vol. 11 (2017): 1664-1677. Zieman, Glynnis, Ashley Bridwell, and Javier F. Cardenas. “Traumatic Brain Injury in Domestic Violence Victims: A Retrospective Study at the Barrow Neurological Institute.” Journal of Neurotrauma, vol. 33 (2016): 1-5. 
Appendix II: Nonfederal Initiatives Focused on Intimate Partner Violence and Brain Injury The following table provides a brief overview of each of the 12 initiatives we identified based on information provided by the Department of Health and Human Services, the Department of Justice, and other stakeholders. These initiatives engage in various efforts to address intimate partner violence and brain injuries, including traumatic brain injury and anoxic injuries resulting from strangulation. Our list includes those efforts identified during the course of our review and may not be exhaustive. The descriptions of initiatives are based on our review of documentation and information obtained from interviews with officials. Appendix III: Comments from the Department of Health and Human Services Appendix IV: Staff Acknowledgments and GAO Contact GAO Contact Staff Acknowledgments In addition to the contact named above, Shannon Slawter Legeer (Assistant Director), Danielle Bernstein (Analyst-in-Charge), and Ashley Dixon made key contributions to this report. Also contributing were Leia Dickerson, Kaitlin Farquharson, Drew Long, and Ethiene Salgado-Rodriguez.
Why GAO Did This Study Research has found brain injuries to be common among victims of intimate partner violence, and that such injuries are under-diagnosed and under-treated. House Report 115-952 included a provision for GAO to report on the relationship between intimate partner violence and brain injuries. GAO (1) describes efforts to provide education, screen for, or treat brain injuries resulting from intimate partner violence; and (2) examines what is known about the prevalence of brain injuries resulting from intimate partner violence, including HHS efforts to determine prevalence. GAO reviewed peer-reviewed literature, federal websites, and documentation from HHS and DOJ. GAO also interviewed officials from HHS, DOJ, and 11 non-federal stakeholders, such as domestic violence organizations. GAO identified 12 initiatives, though this list may not be exhaustive, and conducted site visits to three of them. What GAO Found According to the Centers for Disease Control and Prevention (CDC), one in three adults have experienced domestic violence, also known as intimate partner violence. Intimate partner violence includes physical violence, sexual violence, stalking, and psychological aggression. Victims of intimate partner violence may experience brain injury, resulting from blows to the head or strangulation. To address this issue, the Department of Health and Human Services (HHS) and the Department of Justice (DOJ) provide grants to state and local entities that work with victims. GAO identified 12 non-federal initiatives that provide education, screen for, or treat brain injuries resulting from intimate partner violence. All 12 developed and distributed education and training materials to domestic violence shelter staff, victims, health care providers, and others. Six of the 12 initiatives used screening tools to identify potential brain injuries among intimate partner violence victims, and two included a treatment component. 
Additionally, eight of the 12 initiatives received HHS or DOJ grant funding, although agency officials told GAO the funding had no specific requirements to address brain injuries resulting from intimate partner violence. Based on its review of the literature, as well as interviews with HHS officials and other non-federal stakeholders, GAO found that data on the overall prevalence of brain injuries resulting from intimate partner violence are limited. HHS officials acknowledged that the lack of data on the prevalence of these issues is a challenge in addressing the intersection of the issues. However, HHS does not have a plan for how it would collect better prevalence data. HHS agencies have some related efforts underway; however, the efforts are limited and generally do not examine the connection between brain injuries and intimate partner violence. Enhancing the health and well-being of Americans is critical to HHS's public health mission. As part of this mission, CDC, within HHS, uses its Public Health Approach, which includes collecting prevalence data to understand the magnitude of public health issues. With better data comes a better understanding of the overall prevalence of brain injuries resulting from intimate partner violence. This, in turn, could help ensure that federal resources are allocated to the appropriate areas and used as efficiently and effectively as possible to address this public health issue. What GAO Recommends HHS should develop and implement a plan to improve data collected on the prevalence of brain injuries resulting from intimate partner violence and use these data to inform its allocation of resources to address the issue. HHS concurred with GAO's recommendation and is coordinating with its agencies to augment data collection.
Background VA’s Community Care Programs and Planned Consolidation VA has purchased health care services from community providers since as early as 1945. In general, veterans may be eligible for community care when they are faced with long wait times or travel long distances for appointments at VA medical facilities, or when a VA medical facility is unable to provide certain specialty care services, such as cardiology or orthopedics. In general, community care services must be authorized in advance of when veterans access the care. Currently, there are several community care programs through which VA purchases hospital care and medical services for veterans, including the Choice Program. In implementing the VA MISSION Act, VA plans to consolidate four of its community care programs for veterans under the Veterans Community Care Program, which is expected to go into effect by June 2019. (See table 1.) VA also provides health care services to veterans and other eligible beneficiaries through community providers under additional benefit programs. These benefit programs include the Civilian Health and Medical Program of the Department of Veterans Affairs (CHAMPVA) and the Camp Lejeune Family Member Program, among others. After implementing the VA MISSION Act, VA will continue to operate the community care programs for other eligible beneficiaries, such as CHAMPVA and others, as it has historically done. Appendix I contains more information about VA’s community care programs. Developing a Budget Estimate for VA Health Care The amount of funding VA receives to provide its health care services is determined during the annual appropriations process. In preparation for the process, VA develops an estimate of the resources needed to provide its health care services—known as its health care budget estimate—for two fiscal years. 
This budget estimate is one step in a complex, multistep budget formulation process, which culminates in an appropriation request for VA health care that updates the earlier, advance appropriation request for the upcoming fiscal year and an advance appropriation request for the next fiscal year in the President’s annual budget request to Congress. VA’s health care budget estimate includes the total cost of providing health care services, including direct patient costs, as well as costs associated with management, administration, and maintenance of facilities. VA uses its Enrollee Health Care Projection Model (EHCPM) to estimate the majority of resources needed to meet the expected demand for health care services, and uses other methods for the remaining services. VA uses the EHCPM to make projections 3 and 4 years into the future for budget purposes based on data from the most recent fiscal year. For example, in 2017, VA used data from fiscal year 2016 to develop its health care budget estimate for the fiscal year 2019 request and advance appropriation request for fiscal year 2020. The EHCPM’s estimates are based on three basic components: (1) the projected number of veterans who will be enrolled in VA health care, (2) the projected quantity of health care services enrollees are expected to use, and (3) the projected unit cost of providing these services. Each component is subject to a number of complex adjustments to account for the characteristics of VA health care and the veterans who access VA’s health care services. (See fig. 1.) VA uses other methods to estimate resources needed for the remaining portion of its budget estimate. This portion of the budget includes the state home per diem program, CHAMPVA, and other health care programs for veterans and other eligible beneficiaries, as well as health- care-related initiatives proposed by the Secretary of Veterans Affairs or the President. (See app. 
II for more information about the other methods VA uses in developing its health care budget estimate.) VHA generally starts to develop a health care budget estimate approximately 10 months before the President submits the budget to Congress, which should occur no later than the first Monday in February. The budget estimate changes during the 10-month budget formulation process, in part, due to successively higher levels of review in VA and OMB before the President’s budget request is submitted to Congress. (See table 2.) The Secretary of Veterans Affairs considers the health care budget estimate developed by VHA when assessing resource requirements among competing interests within VA, and OMB considers overall resource needs and competing priorities of other agencies when deciding the level of funding requested for VA’s health care services. OMB passes back decisions, known as a “passback,” to VA and other agencies on their budget estimate, along with funding and policy proposals to be included in the President’s budget request. VA has an opportunity to appeal the passback decisions before OMB finalizes the President’s budget request. Concurrently, VA prepares a congressional budget justification that provides details supporting the policy and funding decisions in the President’s budget request. As of fiscal year 2017, VA primarily receives funding for all health care it provides or purchases through the following appropriation accounts: Medical Services: health care services provided to eligible veterans and other beneficiaries in VA facilities and non-VA facilities, among other things. Medical Community Care: health care services that VA authorizes for veterans and other beneficiaries to receive from community providers. Medical Support and Compliance: the administration of the medical, hospital, nursing home, domiciliary, supply, and research activities authorized under VA’s health care system, among other things. 
Medical Facilities: the operation and maintenance of VHA’s capital infrastructure, such as the costs associated with nonrecurring maintenance, leases, utilities, facility repair, laundry services, and groundskeeping, among other things. Separate from VA’s health care appropriation accounts, the Veterans Access, Choice, and Accountability Act of 2014 provided $10 billion in funding for the Choice Program, which was implemented in early fiscal year 2015 and authorized until funds were exhausted or through August 7, 2017, whichever occurred first. However, VA received additional authority and funding to maintain the Choice Program through June 6, 2019, when the new Veterans Community Care Program is expected to go into effect. VA expects that the new Veterans Community Care Program will be primarily funded through the Medical Community Care appropriation account. VA Obligations for and Number of Veterans Authorized to Use Community Care Have Grown from Fiscal Year 2014 through Fiscal Year 2018 VA’s Obligations for Community Care Increased by Over 80 Percent from Fiscal Years 2014 through 2018, and VA Estimates Obligations Will Grow an Additional 20 Percent through 2021 Our analysis of VA budget justification data shows that from fiscal year 2014 through fiscal year 2018, the total amount VA actually obligated for community care increased 82 percent, from $8.2 billion to $14.9 billion. Since VA implemented the Choice Program in fiscal year 2015, the share of VA’s obligations for community care relative to VA’s total obligations for health care services increased through fiscal year 2018, from about 14 to 19 percent of VA’s total obligations for health care services. By fiscal year 2021, VA estimates that the total amount obligated for community care will increase to $17.8 billion, an increase of about 20 percent from the $14.9 billion in actual obligations for fiscal year 2018. (See fig. 2.) 
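The growth rates cited above are straightforward percentage changes over the reported obligation totals. A quick arithmetic check (amounts in billions of dollars as reported; the helper function is ours, not VA's):

```python
def pct_change(old, new):
    """Percentage change from an earlier obligation total to a later one."""
    return (new - old) / old * 100

# FY2014 -> FY2018 actual obligations for community care: $8.2B -> $14.9B
fy14_to_fy18 = pct_change(8.2, 14.9)   # about 82 percent

# FY2018 actual -> FY2021 estimated obligations: $14.9B -> $17.8B
fy18_to_fy21 = pct_change(14.9, 17.8)  # about 20 percent
```

The same calculation reproduces the roughly 41 percent growth in veterans authorized to use community care discussed later in this section.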
As figure 2 shows, the largest increase in actual obligations for community care occurred from fiscal years 2015 through 2016, when they increased by $3.4 billion, from $8.9 billion to $12.3 billion. According to VA officials, this increase in obligations during this period reflected veterans’ expanded use of community care through the Choice Program, as more providers participated in the provider networks established by third-party administrators or entered into provider agreements with VA facilities. (Fig. 3 provides information on VA’s obligations for community care by the Choice Program and by other community care programs.) The increase in actual obligations for community care from fiscal year 2016 through fiscal year 2017 was also largely due to expanded use of community care through the Choice Program. VA officials attributed this increase to efforts to obligate as much of the available Choice Program funding as possible before the anticipated end of the Choice Program in August of 2017. From fiscal years 2017 through 2018, obligations for community care continued to increase, but the increase was partially due to greater use of other community care programs, according to VA officials. From fiscal years 2014 through 2018, the increases in total actual obligations for VA community care were driven largely by increases in obligations for outpatient and inpatient services. Over this time period, VA’s actual obligations for outpatient services increased by $2 billion, from $2.3 billion to $4.3 billion, and actual obligations for inpatient services increased by $818 million, from $1.8 billion to $2.7 billion. Figure 4 illustrates how outpatient and inpatient services accounted for most of VA’s total community care obligations for fiscal year 2018. VA estimated that from fiscal years 2019 through 2021, obligations for community care will increase to $17.8 billion, which VA officials said are attributable to the new eligibility criteria under the VA MISSION Act. 
The authority for the Choice Program ends June 6, 2019, after which the new Veterans Community Care Program—which consolidates VA’s community care programs under the VA MISSION Act—is expected to begin. For comparison purposes, the largest increase in obligations for services provided at VA medical facilities is estimated to occur between fiscal years 2020 and 2021. VA officials said this increase is attributable, in part, to efforts related to hiring and telehealth in response to the eligibility criteria under the VA MISSION Act. The Number of Veterans Authorized to Use Community Care Increased about 40 Percent from Fiscal Years 2014 through 2018 Our analysis of VA data on authorizations for community care shows that the number of veterans authorized to use community care increased 41 percent from fiscal years 2014 through 2018. (See fig. 5.) The approximately 1.8 million veterans authorized to use community care in 2018 represented about 30 percent of all veterans accessing VA health care services that year (approximately 6.2 million veterans). By fiscal year 2021, VA officials told us that they estimate that at least 1.8 million veterans will still use community care. Our analysis of VA data also shows that after being authorized for care, veterans’ utilization of certain community care services increased from fiscal years 2014 through 2018. Over this time period, a number of outpatient services experienced increases of more than 200 percent in utilization, especially chiropractic visits (418 percent, from 143,000 to 743,000 visits), physical therapy visits (252 percent, from 857,000 to 3 million visits), and non-mental health related office visits (243 percent, from 651,000 to 2.2 million visits). In comparison, our analysis found relatively smaller increases in veteran utilization for certain inpatient services. For example, the utilization for surgical inpatient stays increased about 39 percent—from 253,000 to 352,000 bed days. 
VA Updated Its Projection Model to Develop Most of Its Community Care Budget Estimate; Subsequent Changes Reflect More Current Information and Other Factors VA first developed a separate budget estimate for community care to inform the President’s fiscal year 2017 budget request. Beginning with the President’s fiscal year 2018 budget request, VA updated its EHCPM to develop over 75 percent of its community care budget estimate and used other methods to develop the remainder. Subsequent changes were made to the community care budget estimates developed by the EHCPM for fiscal years 2018 and 2019 through successively higher levels of review in VA and OMB. VA First Developed a Separate Budget Estimate for Community Care as Part of the President’s Fiscal Year 2017 Budget Request for VA VA first developed a separate budget estimate of the resources it would need for community care—as distinct from the care provided in VA medical facilities—in order to inform the President’s fiscal year 2017 budget request for VA. Prior to this fiscal year 2017 budget request, VA developed a single budget estimate of the resources needed to provide all VA health care services, regardless of whether these services were purchased from community providers or delivered in VA medical facilities, because all these services were to be funded through the same appropriation account. According to VA officials, at the time a separate community care appropriation account and budget estimate were unnecessary, because community care accounted for a relatively small portion of VA’s overall health care budget. However, once the medical community care appropriation account was established in fiscal year 2017, VA began developing a separate budget estimate for community care, as required by law. 
To develop its first estimate of the resources needed for community care for fiscal year 2017, VA made adjustments to existing estimates for total demand for care—both in VA medical facilities and community care combined—developed by the EHCPM. At the time, VA used the EHCPM to estimate the resources needed to provide VA health care services to veterans, including inpatient, outpatient, and long-term care. However, the EHCPM did not make separate estimates for community care and care provided at VA facilities; according to VA officials, VA adjusted the EHCPM estimates by assuming that for each service, the share of total utilization and costs devoted to community care would be the same as they had been in the most recently completed fiscal year. After this adjustment, VA made further changes to the community care budget estimate, which resulted in a net increase of $2.5 billion. Nearly all of this increase reflected an anticipated impact of the expanded access under the Choice Program, according to VA officials. Overall, this approach accounted for about 75 percent of the $12.3 billion community care budget estimate that informed the President’s budget request for fiscal year 2017. To develop the remaining portion of its community care budget estimate, VA used methods other than the EHCPM that, according to VA officials, were used historically to develop estimates of the resources needed for the state home per diem program and benefit programs. For example, VA develops budget estimates for certain services under the state home per diem program by creating projections of the amount of care to be provided using information about the size and demographic characteristics of the enrolled veteran population and projections of the unit cost of providing one day of care using recent cost experience. 
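The fiscal year 2017 adjustment described above amounts to splitting each service's total-demand projection by the prior year's observed community care share. A minimal sketch under that assumption (the function name and dollar figures are illustrative, not VA's):

```python
def split_total_projection(total_cost, prior_year_community_share):
    """Split one service's total-demand cost projection into community
    care and VA-facility portions, assuming the community care share
    stays at the most recently completed fiscal year's level."""
    community = total_cost * prior_year_community_share
    facility = total_cost - community
    return community, facility

# Hypothetical service: $10.0B total projected cost, 25% prior-year community share
community, facility = split_total_projection(10.0, 0.25)  # (2.5, 7.5)
```

In the actual process, the community portions across all services would then be summed and further adjusted, such as for the anticipated impact of the Choice Program.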
According to VA officials, VA was able to continue using these other methods, because the services under these programs have been provided through community providers and not VA medical facilities. While methods for each program vary, in general, these methods are based on each program’s historical utilization and costs. (See app. II for additional information on the methods VA uses to develop the budget estimates for each of these community care programs.)

Beginning with the President’s Fiscal Year 2018 Budget Request, VA Updated Its Projection Model to Develop over 75 Percent of Its Community Care Budget Estimate

Beginning with the President’s fiscal year 2018 budget request, VA updated its EHCPM directly to estimate most of the resources needed to purchase community care for veterans. Specifically, VA updated the EHCPM to estimate the amount of resources needed to purchase a set of more than 40 community care services that have accounted for over 75 percent of VA’s total community care budget estimates of $12.6 billion for fiscal year 2018 and $12.4 billion for fiscal year 2019. These health care services were grouped into seven service types and include outpatient care, inpatient care, and long-term care. (See app. III for a list of the health care services.) Of these services, outpatient services typically accounted for the largest share of VA’s community care budget estimate. For the remainder of community care services—including services provided under the state home per diem program and benefit programs—VA did not use the EHCPM and instead continued to use the other methods it has historically used to develop budget estimates for these services. (See fig. 6.) VA made several changes to the EHCPM to develop most of its community care budget estimate. Historically, the EHCPM estimated resources needed to meet the total expected demand for VA health care—a combination of care provided in VA medical facilities and through community care programs.
VA updated the EHCPM to determine the proportion of demand met by community care by projecting enrolled veterans’ expected utilization of community care and the expected costs of purchasing these services. In what follows, we describe five major changes made to the EHCPM allowing VA to estimate the budgetary resources needed for community care.

1. Reliance on community care services. The EHCPM has historically accounted for the extent to which enrolled veterans would be projected to obtain health care services through the VA as opposed to other health care programs or insurers—referred to as reliance on VA health care. VA updated the EHCPM so that it can further account for the extent to which enrolled veterans would be expected to use VA’s community care programs as opposed to using care in VA’s medical facilities. Each year, the EHCPM determines reliance on VA community care based on a combination of historical experience—or the extent to which community care was used in prior fiscal years—and on the projected impact of new VA policies and operational guidance. For example, for the fiscal year 2019 budget estimates, the EHCPM projected reliance on VA care to be about 38 percent, of which 14 percent would be met through community care. Thus, the EHCPM projected reliance on VA’s community care programs to be about 5.3 percent of all care enrolled veterans are projected to use in fiscal year 2019.

2. Accounting for difference in community providers’ efficiency delivering inpatient services. VA also updated the EHCPM so that community care utilization projections account for the fact that veterans receiving inpatient care through community providers generally have relatively shorter lengths of inpatient stays compared with veterans receiving care at VA medical facilities.
According to officials from VA and its actuarial consultant, community providers on average have historically performed better than VA providers on national benchmarks measuring how well providers manage the length of inpatient stays, while not affecting quality of care. To account for this difference, VA uses an adjustment factor when projecting utilization of inpatient services based on potentially avoidable days of care for community providers.

3. Comparing projected utilization with actual utilization for community care services. VA developed an adjustment factor for the EHCPM’s utilization estimates to account specifically for the differences between projected utilization and actual utilization of community care for the most recently completed fiscal year of data. According to VA officials, the difference typically reflects utilization behavior among providers or patients that is difficult to estimate based solely on historical data—such as changes in local practice patterns (e.g., providers choosing to use magnetic resonance imaging versus x-rays). To account for this behavior, VA compares projected and actual utilization and creates an “actual-to-expected” adjustment factor for each health care service to account for the difference.

4. Projecting unit costs for community care services. VA updated the EHCPM so that it could estimate what are known as the unit costs of purchasing community care services for veterans. In general, the unit cost of a community care service comprises the payment made to the provider (known as direct patient costs), as well as the indirect costs associated with administration and overhead.
Indirect costs include (1) the fees paid to the contractors for administrative responsibilities for the Choice Program, (2) VA billing and processing costs and care coordination costs associated with community care programs, and (3) certain costs associated with the VA Central Office that support community care (e.g., the salaries for officials from the Office of Community Care and other VA Central Office officials).

5. Accounting for community care service complexity and inflation. VA made other changes to the EHCPM’s unit cost projections for community care. For example, VA updated the EHCPM so that it accounts for costs associated with changes in the complexity—that is, the level of resources required to deliver—of health care services VA purchases from community providers. Officials from VA and its actuarial consultant noted that more complex services require relatively more resources to deliver, such as more expensive equipment (e.g., magnetic resonance imaging); more provider time; or higher-cost providers, such as surgeons. Officials anticipate that most services that VA purchases in the community will increase in complexity, leading to higher projected unit costs for community care. VA also updated the EHCPM so that its unit cost estimates for community care account for inflation in the cost of labor and equipment.

VA’s Community Care Budget Estimates Projected by the Model for Fiscal Years 2018 and 2019 Were Subsequently Changed to Reflect More Current Information, Among Other Factors

VA’s community care budget estimates are reviewed at successively higher levels at VA and OMB to inform the President’s budget request for VA. VA identified several changes made during the review process to its estimates projected by the EHCPM for fiscal years 2018 and 2019; these changes were due to the availability of more current information related to utilization and costs, among other factors.
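Taken together, the five model changes described above amount to projecting community care obligations roughly as reliance, times adjusted utilization, times unit cost. The following is a minimal, hypothetical sketch of that arithmetic, not VA's actual model: the 38 percent and 14 percent reliance figures come from the fiscal year 2019 example in the text, while the visit counts, adjustment factor, and dollar amounts are illustrative assumptions.

```python
# Simplified, hypothetical sketch of how the updated EHCPM pieces combine.
# Only the reliance percentages come from the report; all other inputs are
# invented for illustration.

# Reliance (fiscal year 2019 figures from the text):
reliance_on_va = 0.38            # share of enrollees' total care met by VA
community_share_of_va = 0.14     # share of that VA care met via community care
community_reliance = reliance_on_va * community_share_of_va  # ~5.3 percent

# Utilization and unit cost for one hypothetical service:
projected_visits = 100_000       # baseline utilization projection (hypothetical)
actual_to_expected = 1.04        # "actual-to-expected" adjustment (hypothetical)
adjusted_visits = projected_visits * actual_to_expected

direct_cost_per_visit = 250.0    # payment to provider (hypothetical)
indirect_cost_per_visit = 30.0   # administration and overhead (hypothetical)
unit_cost = direct_cost_per_visit + indirect_cost_per_visit

estimated_obligations = adjusted_visits * unit_cost
```

The real model repeats this kind of calculation for each of the more than 40 community care services and further adjusts unit costs for service complexity and inflation.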
For fiscal year 2018, changes resulted in a budget request for VA community care in the President’s budget request that was approximately $1 billion lower than VA’s original EHCPM budget estimate of $10.7 billion. These changes included the following:

A $996 million decrease reflecting the availability of more current information showing that an anticipated increase in utilization due to the Choice Program was too high.

A $600 million decrease reflecting the availability of more current information showing that overhead costs initially allocated to community care in the data used in the EHCPM were too high.

A $180 million decrease accounting for VA’s implementation of a new law that reduces VHA’s use of community care for examinations determining veterans’ disability ratings.

A $500 million increase accounting for a court ruling that affected veteran eligibility for reimbursement of emergency community care, which was expected to increase utilization.

A $250 million increase reflecting the availability of more current information that indicated administrative costs for the Choice Program in the data used in the EHCPM were too low.

For fiscal year 2019, changes resulted in a budget request for VA community care in the President’s budget request that was nearly $1 billion higher than VA’s original EHCPM budget estimate of $8.6 billion. These changes included the following:

A $1.7 billion increase reflecting more current information indicating that community care administrative costs and the utilization levels in the data used in the EHCPM were too low.

A $1 billion increase accounting for a delay in the timing of the implementation of community care network contracts. According to VA officials, this resulted in the continued use of reimbursement rates in community care that were higher than Medicare reimbursement rates.
A $1.8 billion decrease that reflected VA’s implementation of a new policy that changed the timing of community care obligations from when a veteran is authorized to use community care to when a claim for actual services is paid.

VA’s Actual Obligations for Community Care in Fiscal Years 2017 and 2018 Were Higher than Estimated and Included Additional Funding Received for the Choice Program

VA’s Actual Obligations for Community Care in Fiscal Years 2017 and 2018 Were $1.2 Billion and $2.2 Billion Higher than Estimated, Respectively

Our analysis of data included in VA’s budget justifications shows that in fiscal years 2017 and 2018, VA obligated $1.2 billion and $2.2 billion more for community care than originally estimated at the time of the President’s budget requests for those years. In both years, VA’s actual obligations for both the Choice Program and other community care programs were higher than estimated. (See table 3.) According to VA officials, the higher-than-estimated obligations for the Choice Program for fiscal year 2017 were driven, in part, by changes in Choice Program policies and a large increase in the cost per authorization for care. In the case of other community care programs, VA officials told us that the higher-than-estimated obligations for both fiscal years 2017 and 2018 were driven, in part, by local practice patterns (e.g., providers choosing to use magnetic resonance imaging versus x-rays) and the capacity of VA medical facilities to provide services. As discussed later in this report, VA also received and reallocated additional funding to purchase community care in fiscal years 2017 and 2018, which contributed to actual obligations being higher than estimated. Our analysis of VA’s obligations by service type shows that in fiscal year 2017, VA’s higher-than-estimated obligations for community care were primarily for outpatient and inpatient services, as shown in table 4.
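The net effect of the itemized review changes listed above can be checked with simple arithmetic. All dollar figures below are taken from the text (in millions); the check confirms that the fiscal year 2018 changes net to roughly a $1 billion decrease and the fiscal year 2019 changes to nearly a $1 billion increase.

```python
# Reconcile the itemized review changes (from the text, millions of dollars)
# against the original EHCPM estimates for fiscal years 2018 and 2019.

fy2018_base = 10_700
fy2018_changes = [-996, -600, -180, +500, +250]
fy2018_result = fy2018_base + sum(fy2018_changes)  # about $1 billion lower

fy2019_base = 8_600
fy2019_changes = [+1_700, +1_000, -1_800]
fy2019_result = fy2019_base + sum(fy2019_changes)  # nearly $1 billion higher
```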
In fiscal year 2018, the higher-than-estimated obligations for community care were primarily for outpatient services, while there was an overall decrease in obligations for inpatient services. (See table 5.) Additionally, for some service types, VA’s actual obligations were lower than estimated in fiscal years 2017 and 2018.

VA’s Higher-Than-Estimated Obligations for Community Care Included Additional Funding VA Received for the Choice Program Outside of the Annual Appropriations Process

To obligate $13.6 billion for community care in fiscal year 2017 and $14.9 billion in fiscal year 2018—amounts that were $1.2 billion and $2.2 billion higher, respectively, than what VA originally estimated for its budget request, and what VA received in its annual appropriation—VA requested and received additional Choice Program funding outside of the annual appropriations process. VA also reallocated funding from other sources, including unobligated funding from a prior fiscal year and collections, to pay for the other community care programs. Specifically, the $13.6 billion and $14.9 billion VA obligated for community care in fiscal years 2017 and 2018, respectively, came from the following sources:

Choice Program. For both fiscal years, VA obligated from its remaining funding and prior-year recoveries from the previous fiscal years, and requested and received additional funding three times outside of the annual appropriations process. (Table 6 below summarizes the time frames during which VA requested and received additional appropriations for the Choice Program outside of the annual appropriations process for fiscal years 2017 and 2018.)

Other community care programs. For both fiscal years, VA obligated from its annual appropriation and transferred a portion of its overall collections from its Medical Care Collections Fund to the medical community care account. In addition, for fiscal year 2018, VA used unobligated funding and prior-year recoveries from fiscal year 2017.
Agency Comments

We provided a draft of this product to VA and OMB for comment. VA provided technical comments, which we incorporated as appropriate. OMB had no comments. We are sending copies of this report to the Secretary of Veterans Affairs, the Director of the Office of Management and Budget, appropriate congressional committees, and other interested parties. This report is also available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or silass@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V.

Appendix I: The Department of Veterans Affairs’ Community Care Programs for Veterans and Other Eligible Beneficiaries

While the majority of veterans utilizing Department of Veterans Affairs’ (VA) health care services receive care in VA-operated medical facilities, veterans may also obtain services from non-VA providers in the community—known as community care—through one of several community care programs aimed at helping to ensure that veterans receive timely and accessible care. In implementing the VA MISSION Act, VA plans to consolidate four of its community care programs for veterans—dialysis contracts, individually authorized care, the Patient-Centered Community Care Program, and the Veterans Choice Program—under the Veterans Community Care Program, which is expected to go into effect by June 2019. In addition, VA has several other community care programs that serve veterans and programs that provide health care services to other eligible beneficiaries, including a veteran’s spouse or dependent child.

Community Care Programs for Veterans that VA Plans to Consolidate

Dialysis contracts.
When dialysis services—a life-saving medical procedure for patients with permanent kidney failure—are not feasibly available at VA medical facilities, veterans may be referred to one of VA’s contracted dialysis providers, and veterans may receive dialysis at local clinics on an outpatient basis, or at home (if the contractors offer home-based dialysis services).

Individually authorized care. When a veteran cannot access a particular specialty care service from a VA medical facility—either because the service is not offered, the veteran would have to wait too long for an appointment, or the veteran would have to travel a long distance to a VA medical facility—VA medical facility staff may request an individual authorization for the veteran to obtain the service from a community provider who is willing to accept VA payment.

Patient-Centered Community Care. VA contracted with two third-party administrators to develop regional networks of community providers of specialty care, mental health care, limited emergency care, and maternity and limited newborn care when such care is not feasibly available from a VA medical facility. To be eligible to obtain care from Patient-Centered Community Care providers, veterans must meet the same criteria that are required for individually authorized care.

Veterans Choice Program. VA modified its Patient-Centered Community Care contracts with the two third-party administrators to implement the Veterans Choice Program. This program allows eligible veterans to obtain health care services from community providers if the veteran meets certain criteria, including when a veteran cannot receive care within 30 days from the veteran’s or physician’s preferred date, or faces an unusual or excessive burden in traveling to a VA medical center.

Other Community Care Programs for Veterans

Agreements with federal partners and academic affiliates.
When services are not available at VA medical facilities, VA may obtain specialty, inpatient, and outpatient health care services for veterans through different types of sharing agreements—those with other federal facilities (such as those operated by the Department of Defense and the Indian Health Service), those with Tribal Health Programs, and those with university-affiliated hospitals, medical schools, and practice groups (known as academic affiliates).

Emergency care. When emergency community care is not preauthorized, VA may reimburse community providers for emergency care for eligible veterans for a condition related to a service-connected disability, and for eligible veterans for a condition not related to a service-connected disability.

Foreign Medical Program. The Foreign Medical Program is VA’s health care benefits program for eligible veterans who are residing or traveling abroad and have a service-connected disability.

State Home Per Diem Program. Under the State Home Per Diem Program, states provide care for eligible veterans in three different types of programs: nursing home, domiciliary, and adult day health care.

Community Care Programs for Other Beneficiaries

Camp Lejeune Family Member Program. The Camp Lejeune Family Member Program is for family members of veterans that lived or served at U.S. Marine Corps Base Camp Lejeune, North Carolina, for no fewer than 30 days between January 1, 1957, and December 31, 1987, and were potentially exposed to drinking water contaminated with industrial solvents, benzene, and other chemicals. The program provides health care to veterans who served on active duty at Camp Lejeune and reimburses eligible Camp Lejeune family members for health care costs related to one or more of 15 illnesses or medical conditions specified in law.

Children of Women Vietnam Veterans Health Care Benefits Program.
This program provides health care benefits to female Vietnam veterans’ birth children whom the Veterans Benefits Administration has determined to have a covered birth defect. This program is not a comprehensive health care plan and only covers those services necessary for the treatment of a covered birth defect and associated medical conditions.

Civilian Health and Medical Program of the Department of Veterans Affairs (CHAMPVA). CHAMPVA is a comprehensive health care program that provides health care coverage for spouses, children, and primary caregivers of veterans who are permanently and totally disabled from a service-connected disability. CHAMPVA functions similarly to traditional health insurance, with most care in the program delivered using non-VA community providers.

Spina Bifida Health Care Benefits Program. This program provides health care benefits to certain Korea and Vietnam veterans’ birth children who have been diagnosed with spina bifida.

Appendix II: Budget Formulation Process for the State Home Per Diem Program and Non-Veteran Community Care Programs

The Department of Veterans Affairs (VA) and its actuarial consultant use the Enrollee Health Care Projection Model to develop most of the department’s estimate of the resources needed to meet the expected demand for VA’s health care services. VA uses other methods to estimate the remaining resources needed. This remaining portion includes community care programs for veterans and other eligible beneficiaries, including the State Home Per Diem Program and the Civilian Health and Medical Program of the Department of Veterans Affairs (CHAMPVA).

State Home Per Diem Program. This program pays per diem for state-provided care for eligible veterans in three different types of programs: domiciliary, nursing home, and adult day health care. For state home domiciliary and nursing care, categorized as institutional care, VA creates budget projections based on historical funding data.
For state home adult day health care, categorized as non-institutional care, VA’s budget estimates are based on projections of the amount of care provided—which is known as workload—and the unit cost of providing a day of this care. VA projects the demand for non-institutional care services using information about the size and demographic characteristics of the enrolled veteran population. VA projects unit cost for non-institutional care services by calculating unit-cost increases observed from recent experience and then using this information to project future unit costs. VA multiplies the workload estimates, unit-cost estimates, and the number of days in the fiscal year to develop an estimate of the amount of resources needed for non-institutional care.

CHAMPVA. CHAMPVA provides health care coverage for spouses and children of veterans who are permanently and totally disabled from a service-connected disability. CHAMPVA functions similarly to traditional health insurance—most care within CHAMPVA is delivered using non-VA community providers. Therefore, developing estimates of the resources needed for CHAMPVA requires factoring in utilization patterns and cost inflation that are generally outside of VA’s control. Budget estimates for CHAMPVA are developed using a formula that computes the predicted number of users and costs per-member per-year. VA works with its actuarial consultant to generate projections of CHAMPVA users that incorporate changes related to the population of disabled veterans and projections of expected increases and decreases in the CHAMPVA-eligible population. In addition, the actuarial consultant projects the costs per-member per-year, which is calculated by dividing the most current fiscal year data on total CHAMPVA expenditures by the number of actual users. Trends are then incorporated to predict the future costs per-member per-year, which is multiplied by projections of the number of CHAMPVA users to develop CHAMPVA budget estimates.
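The two estimating methods described above reduce to simple formulas: workload times unit cost times days in the fiscal year for non-institutional state home care, and projected users times a trended per-member per-year (PMPY) cost for CHAMPVA. A minimal sketch follows; the formulas track the text, but every numeric input is a hypothetical assumption, not VA data.

```python
# Sketch of the two budget formulas described above (all inputs hypothetical).

# State home adult day health care (non-institutional care):
# budget = workload x unit cost per day x days in the fiscal year.
workload = 1_500                 # projected days of care delivered per day
unit_cost_per_day = 80.0         # projected cost of one day of care
days_in_fiscal_year = 365
adult_day_care_estimate = workload * unit_cost_per_day * days_in_fiscal_year

# CHAMPVA: PMPY cost from the most recent completed year, trended forward,
# multiplied by the projected number of users.
total_expenditures = 1_100_000_000.0   # most recent fiscal year (hypothetical)
actual_users = 400_000                 # most recent fiscal year (hypothetical)
pmpy = total_expenditures / actual_users   # cost per member per year
trend = 1.05                           # projected cost growth (hypothetical)
projected_users = 420_000              # projected CHAMPVA users (hypothetical)
champva_estimate = pmpy * trend * projected_users
```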
Appendix III: Health Care Services Included in the Enrollee Health Care Projection Model for Fiscal Year 2019

Using its Enrollee Health Care Projection Model (EHCPM), the Department of Veterans Affairs (VA) developed estimates for 79 health care services—available in VA medical facilities or through community care—for the fiscal year 2019 President’s budget request. As shown in table 7, VA developed separate estimates for the 43 services that were available through community care. Some of these 43 services were only available through community care. These services were primarily long-term care, including nursing home care provided at community nursing homes, home hospice care, home respite care, homemaker or home health aid programs, and purchased skilled nursing care.

Appendix IV: Community Care Data Sources in the Department of Veterans Affairs’ Enrollee Health Care Projection Model

The Department of Veterans Affairs (VA) and its actuarial consultant use the Enrollee Health Care Projection Model (EHCPM) to develop most of the department’s budget estimate to meet the expected demand for VA’s health care services. This estimate includes the services that VA purchases from non-VA community providers through its various community care programs, including the Veterans Choice Program (Choice Program). Based on our interviews with various VA officials, VA’s Office of Enrollment and Forecasting provided utilization and cost data from fiscal year 2016 community care claims from four different sources for use in the 2017 EHCPM, which was used to project the fiscal year 2019 budget estimate. (See fig. 7.) Specifically, the Office of Enrollment and Forecasting—which is responsible for compiling the claims data used in the EHCPM—obtained community care claims data, including Choice Program claims, from VA’s Fee Basis Claims System.
In addition, the Office of Enrollment and Forecasting worked with VA’s Allocation Resource Center to gather additional utilization and cost data from Choice Program claims processed outside the Fee Basis Claims System, and other data needed for the 2017 EHCPM. Specifically, the Allocation Resource Center compiled claims data for those Choice Program claims paid through expedited payments. The Allocation Resource Center also pulled data on dual eligible veterans from the Department of Defense’s Medical Data Repository, and indirect costs associated with community care claims (for example, costs associated with care coordination or claims processing) from VA’s Managerial Cost Accounting system.

Appendix V: GAO Contact and Staff Acknowledgments

In addition to the contact named above, Rashmi Agarwal (Assistant Director), Aaron Holling (Analyst-in-Charge), Chad Clady, and Kate Tussey made key contributions to this report. Also contributing were Krister Friday, Jacquelyn Hamilton, and Muriel Brown.

Related GAO Products

Veterans Choice Program: Further Improvements Needed to Help Ensure Timely Payments to Community Providers. GAO-18-671. Washington, D.C.: September 28, 2018.

Veterans Choice Program: Improvements Needed to Address Access-Related Challenges as VA Plans Consolidation of its Community Care Programs. GAO-18-281. Washington, D.C.: June 4, 2018.

VA’s Health Care Budget: In Response to a Projected Funding Gap in Fiscal Year 2015, VA Has Made Efforts to Better Manage Future Budgets. GAO-16-584. Washington, D.C.: June 3, 2016.

Veterans’ Health Care: Proper Plan Needed to Modernize System for Paying Community Providers. GAO-16-353. Washington, D.C.: May 11, 2016.

Veterans’ Health Care Budget: Improvements Made, but Additional Actions Needed to Address Problems Related to Estimates Supporting President’s Request. GAO-13-715. Washington, D.C.: August 8, 2013.
Veterans’ Health Care: Improvements Needed to Ensure That Budget Estimates Are Reliable and That Spending for Facility Maintenance Is Consistent with Priorities. GAO-13-220. Washington, D.C.: February 22, 2013.

Veterans’ Health Care Budget: Better Labeling of Services and More Detailed Information Could Improve the Congressional Budget Justification. GAO-12-908. Washington, D.C.: September 18, 2012.

Veterans’ Health Care Budget: Transparency and Reliability of Some Estimates Supporting President’s Request Could Be Improved. GAO-12-689. Washington, D.C.: June 11, 2012.

VA Health Care: Estimates of Available Budget Resources Compared with Actual Amounts. GAO-12-383R. Washington, D.C.: March 30, 2012.

VA Health Care: Methodology for Estimating and Process for Tracking Savings Need Improvement. GAO-12-305. Washington, D.C.: February 27, 2012.

Veterans’ Health Care Budget Estimate: Changes Were Made in Developing the President’s Budget Request for Fiscal Years 2012 and 2013. GAO-11-622. Washington, D.C.: June 14, 2011.

Veterans’ Health Care: VA Uses a Projection Model to Develop Most of Its Health Care Budget Estimate to Inform the President’s Budget Request. GAO-11-205. Washington, D.C.: January 31, 2011.

VA Health Care: Challenges in Budget Formulation and Issues Surrounding the Proposal for Advance Appropriations. GAO-09-664T. Washington, D.C.: April 29, 2009.

VA Health Care: Challenges in Budget Formulation and Execution. GAO-09-459T. Washington, D.C.: March 12, 2009.

VA Health Care: Long-Term Care Strategic Planning and Budgeting Need Improvement. GAO-09-145. Washington, D.C.: January 23, 2009.

VA Health Care: Budget Formulation and Reporting on Budget Execution Need Improvement. GAO-06-958. Washington, D.C.: September 20, 2006.
Why GAO Did This Study

VA continues to focus on the use of community care to address challenges with veterans’ access to health care services at VA medical facilities. In fiscal year 2019, VA plans to consolidate the Veterans Choice Program and several other community care programs under a single new Veterans Community Care Program. GAO and others have previously reported on past challenges VA has faced regarding the reliability, transparency, and consistency of its budget estimates for health care. GAO was asked to review VA’s use of community care and efforts to develop budget estimates for this care. This report describes (1) trends in obligations for and utilization of VA’s community care programs since fiscal year 2014, (2) how VA develops its community care budget estimate and any subsequent changes made to this estimate, and (3) how VA’s actual obligations for community care compared with estimated obligations for fiscal years 2017 and 2018. GAO reviewed actual obligation and utilization data for fiscal years 2014 through 2018, as well as estimated obligations for fiscal years 2019 through 2021. GAO also reviewed available VA documentation on the methods and data used to develop VA’s community care budget estimate that informed the President’s budget request for fiscal years 2017 through 2019. GAO also interviewed VA officials and contractors responsible for developing these estimates, and OMB staff responsible for the federal budget. VA and OMB reviewed a draft of this report. VA’s technical comments were incorporated as appropriate.

What GAO Found

To help ensure that veterans are provided timely and accessible health care services, the Department of Veterans Affairs (VA) may purchase care from non-VA providers, known as community care. VA obligated $14.9 billion for community care in fiscal year 2018, an increase of $6.7 billion (about 82 percent) since fiscal year 2014.
The number of veterans authorized to use community care increased from 1.3 million to 1.8 million during this period. By fiscal year 2021, VA estimated obligations to increase to $17.8 billion, and officials estimate at least 1.8 million veterans will continue to use this care. Note: VA estimated obligations for fiscal year 2019 to reflect $1.8 billion in anticipated savings as a result of a VA policy change regarding the timing of certain community care obligations. VA uses a projection model to estimate the majority of resources needed to provide health care services. Beginning with the President's fiscal year 2018 budget request, VA updated its model to estimate the resources needed to purchase over 40 community care services accounting for over 75 percent of VA's community care budget estimate. These services include outpatient and inpatient care, among others. For the remainder of its community care budget estimate, which includes nursing care in state-operated homes, VA uses other methods based on historical utilization. VA's budget estimate is successively reviewed at VA and the Office of Management and Budget (OMB) to inform the President's budget request. VA identified several changes made during the review process to its budget estimate for fiscal years 2018 and 2019 to reflect more current information related to utilization and costs, among other factors. VA's actual obligations for community care for fiscal years 2017 and 2018 were $1.2 billion and $2.2 billion higher, respectively, than originally estimated. According to VA officials, this occurred for several reasons, including policy changes and increased costs for the Veterans Choice Program. To support higher obligations, VA requested and received additional funding for the Veterans Choice Program outside the annual appropriations process and used other funding sources, such as unobligated amounts from prior fiscal years.
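The growth figure quoted in the summary can be verified from the amounts given: the fiscal year 2014 base is derived by subtraction (it is not stated directly in the text), and the percentage increase follows from it.

```python
# Quick arithmetic check of the summary's growth figure (dollars in billions).
fy2018_obligations = 14.9
increase_since_fy2014 = 6.7
fy2014_obligations = fy2018_obligations - increase_since_fy2014  # derived base
percent_increase = increase_since_fy2014 / fy2014_obligations    # about 82%
```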
gao_GAO-20-156
Background Federal trust funds are an accounting mechanism used to link dedicated collections with their expenditure for a specific purpose or program (see textbox). Earmarked or Dedicated Collections Our budget glossary (GAO-05-734SP) includes two definitions of earmarking: 1. Dedicating collections by law for a specific purpose or program. 2. Designating any portion of a lump-sum amount for particular purposes by means of legislative language. Our 2001 report on trust funds (GAO-01-199SP) used the term “earmarked receipts” in accordance with the first definition. We use the term “dedicated collections” instead to avoid confusion between the two definitions. One of the earliest trust funds established was the Civil Service Retirement and Disability Fund, set up in 1920. In the federal budget, the meaning of the term “trust fund” differs significantly from its private sector usage. In the case of federal trust funds, the federal government owns the funds’ assets, does not have a fiduciary responsibility to trust beneficiaries, and can raise or lower future trust fund collections and payments or change the purposes for which the collections are used by changing existing laws. Designation as a trust fund does not in and of itself impose a greater commitment on the government to carry out program activities than it has to carry out other government activities. It can, however, indicate the government’s intent to restrict the use of those funds to the specified purpose and—especially for a program funded in whole or in part by its beneficiaries—may influence debates about program changes. OMB and Treasury determine budgetary designation as a trust fund when a law both dedicates collections to a program and identifies the account as a “trust fund.” Trust funds, however, are not the only way dedicated collections are accounted for in the federal budget. 
Special funds and public enterprise funds also link dedicated collections with their expenditure for a specific purpose or program and are analogous to non-revolving and revolving trust funds, respectively (see figure 1). For the purpose of this report, we examine budget accounts designated as “trust funds” by OMB and Treasury and those that link dedicated collections with their expenditure. There are two other fund types in the federal budget that we did not include: general fund accounts, which hold all federal money not allocated by law to any other fund account, and intragovernmental fund accounts, which facilitate financing transactions primarily within and between federal agencies. The four fund types included in our definition of trust funds and other dedicated funds are: Non-revolving Trust Fund. An account designated as a “trust fund” by law that is credited with dedicated collections, which can often, but not always, be used without further appropriation action. For example, the Federal Hospital Insurance (HI) Trust Fund, also known as Part A of Medicare, is financed primarily through payroll taxes levied on workers and their employers and finances health care services related to stays in hospitals, skilled nursing facilities, and hospices for eligible beneficiaries. Special Fund. Analogous to a non-revolving trust fund but not classified as a trust fund in name. For example, the Universal Service Fund subsidizes telecommunication carriers that provide telecommunications services to all consumers, including low-income consumers, schools and libraries, and those who live in rural or high-cost areas. Revolving Trust Fund. An account designated as a “trust fund” by law that is credited with collections that are generally available for obligation without further appropriation action to carry out a cycle of businesslike operations in accordance with statute. 
For example, the Employees Health Benefits Fund collects health insurance premiums from federal employees, annuitants, and their employing agencies and disburses payments to private insurers who participate in the Federal Employees Health Benefits program. Public Enterprise Fund. Analogous to a revolving trust fund but not classified as a trust fund in name. A public enterprise fund is a type of revolving fund that carries out a cycle of businesslike operations, mainly with the public, in which it charges for the sale of products or services and uses the proceeds to finance its spending, usually without requirement for annual appropriations. The Postal Service Fund of the United States Postal Service is an example of this type of fund. Fund Balances Trust funds and dedicated funds have their own dedicated collections and the ability to retain accumulated balances. From the perspective of the trust fund or other dedicated fund, the accumulated balances represent the value of past taxes, fees, and the other income received by the fund in excess of past spending by the fund. The accumulated balances are not cash. Most money collected and disbursed by the federal government is held in the General Fund of the U.S. Government (General Fund). The dedicated taxes and fees collected from the public are deposited in the General Fund and the General Fund disburses the fund’s benefit and other payments to the public. When the General Fund receives the cash, the trust fund or other dedicated fund records an asset for these collections and the General Fund records a liability to the fund, which essentially means the trust fund has “lent” money to the General Fund. As cash is disbursed, these asset and liability accounts are reduced. From the government-wide perspective, the trust fund or dedicated fund asset and General Fund liability accounts eliminate with each other in consolidation. 
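The asset-and-liability bookkeeping described above can be sketched as a toy double-entry model. This is a simplified illustration with hypothetical figures; the class and account names are inventions for this sketch, not Treasury's actual ledger structure:

```python
# Toy model of the intragovernmental bookkeeping described above:
# when the General Fund receives a dedicated collection, the trust
# fund records an asset and the General Fund records an offsetting
# liability; disbursements reduce both. All figures are hypothetical.

class TrustFundLedger:
    def __init__(self):
        self.trust_fund_asset = 0.0        # what the fund has "lent" the General Fund
        self.general_fund_liability = 0.0  # what the General Fund owes the fund

    def collect(self, amount):
        """Dedicated taxes/fees are deposited in the General Fund."""
        self.trust_fund_asset += amount
        self.general_fund_liability += amount

    def disburse(self, amount):
        """The General Fund pays benefits on the trust fund's behalf."""
        self.trust_fund_asset -= amount
        self.general_fund_liability -= amount

    def consolidated_net(self):
        """Government-wide view: the two positions eliminate in consolidation."""
        return self.trust_fund_asset - self.general_fund_liability

ledger = TrustFundLedger()
ledger.collect(100.0)   # dedicated payroll taxes come in
ledger.disburse(60.0)   # benefit payments go out
print(ledger.trust_fund_asset)    # 40.0 -- balance from the fund's perspective
print(ledger.consolidated_net())  # 0.0  -- from the government-wide perspective
```

The point of the sketch is that the fund's accumulated balance is a claim on the General Fund, not cash, and it nets to zero when the government is viewed as a whole.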
Some trust funds and other dedicated funds have the legal authority to invest their balances, most of which are held in U.S. Treasury securities. The value of the securities held is recorded as “debt held by government accounts” and represents debt owed by one part of the government to another (i.e., intragovernmental debt). In many ways, the special U.S. Treasury securities held by government accounts are indistinguishable from the marketable government debt sold to the public. A maturity date is set, interest is accrued at established market rates, and the securities count as part of the total federal debt. Generally, these securities are not traded in the financial markets and are able to be redeemed on demand by the government account. The interest they earn is credited to the fund accounts in the form of additional Treasury securities or is used to pay current expenses or benefits. Interest earned by government accounts on their Treasury securities is an internal transaction, made between two accounts within the federal government, and constitutes an expense for Treasury. Treasury must pay back the debt held by government accounts when these accounts need to redeem their securities to be able to make their expenditures. When this happens, Treasury must obtain cash to finance the government’s spending either through increasing taxes, cutting spending, or increasing borrowing from the public. Types of Budget Authority Entitlement authority is another way to classify budget authority, but OMB’s budget data do not include that classification. Discretionary spending refers to budget authority that is provided in and controlled by appropriations acts. Mandatory spending, also known as direct spending, refers to budget authority provided in laws other than appropriations acts and the outlays that result from such budget authority. Entitlement authority is the authority to make payments to any person or government if, under the provisions of the law, the U.S. 
government is legally required to make the payments to persons or governments that meet the requirements. Generally, entitlement authority is a type of mandatory spending. Applicability of Budget Control Mechanisms The classification of the budget authority within a trust fund or other dedicated fund as mandatory or discretionary determines how budget control mechanisms apply. By itself, designation as a trust fund does not determine whether spending is controlled through the annual appropriations process or what limitations apply. Trust funds and dedicated funds are subject to various enforcement mechanisms intended to control revenues, spending, and deficits. The Balanced Budget and Emergency Deficit Control Act of 1985 (BBEDCA) first established sequestration, which is the cancellation of budgetary resources under a presidential order. The act set deficit reduction targets for the federal government and established sequestration procedures to enforce those targets. The Budget Control Act of 2011 amended BBEDCA and revived this budgetary enforcement mechanism by reinstating budget limits (also known as “caps”) to encourage agreement on deficit reduction legislation or, in the event that such agreement was not reached, to automatically reduce spending so that an equivalent budgetary goal would be achieved. Appropriations from trust funds and other dedicated funds designated as discretionary count toward these limits. The Statutory Pay-As-You-Go Act of 2010 (PAYGO) specifies a second type of sequestration triggered under certain conditions. The act establishes a permanent budget enforcement mechanism intended to prevent enactment of mandatory spending and revenue legislation that would increase the federal deficit. The act requires OMB to track costs and savings associated with enacted legislation and to determine at the end of each congressional session if net total costs exceed net total savings. 
If the costs exceed the savings, a separate sequestration will be triggered. Consequently, the same mandatory accounts that are subject to sequestration under BBEDCA could incur further reductions if a secondary PAYGO sequestration is triggered. PAYGO does not control the growth in spending that results from previously enacted laws, nor does it control discretionary spending. Federal Trust Funds and Other Dedicated Funds Were a Large and Growing Part of the Budget from Fiscal Year 2014 to 2018 Every Major Department Has At Least Two Trust Funds or Other Dedicated Funds Hundreds of programs across the federal government are supported in whole or in part by a trust fund or other dedicated fund. Our analysis of OMB’s budget data shows 398 active federal trust funds and other dedicated funds in fiscal year 2018. Non-revolving trust funds and special funds make up the greatest number of these types of accounts and also hold the greatest total balances. See table 2. Our analysis of another government-wide source, Treasury’s Combined Statement, records 647 trust and other dedicated fund accounts in fiscal year 2018. This count is higher because Treasury includes accounts with smaller balances and does not combine groups of related accounts. Of the accounts in Treasury’s Combined Statement, 150 have balances that are below $500,000 and would fall below OMB’s rounding threshold of $1 million. The trust funds and other dedicated funds in Treasury’s Combined Statement are spread across all 29 major departments that are reported separately in the statement (see figure 2). Each department has at least two such accounts. The distribution of the number of trust fund or other dedicated fund accounts across federal agencies does not correspond with the balances held by these accounts. 
For example, the Social Security Administration has only four such accounts, but those four funds together held $2.9 trillion—more than double the balances of any other agency at the end of fiscal year 2018 (see figure 3). In contrast, the Department of the Interior had the greatest number of trust funds and other dedicated funds, but these 118 funds together held $14.9 billion, which is less than 1 percent of the total balances held in these types of accounts at the end of fiscal year 2018. Total Trust Fund and Other Dedicated Fund Balances Grew 13 Percent from Fiscal Year 2014 to Fiscal Year 2018 The total balance in federal trust funds and other dedicated funds grew about 13 percent in nominal terms from fiscal year 2014 to fiscal year 2018. The five accounts that contributed the most to this overall growth are listed in table 3. Fund balances are affected by complex interactions of various economic, demographic, and programmatic factors, but these changes are reflective of some overarching trends. For example, the balances of civilian and military pension and benefit programs increased, in part reflecting agency and employee contributions to fund the ongoing accrual of benefits by civilian and military personnel. Treasury has also contributed to these accounts to help fund some of the benefits accrued in the past. Some of the other increases were a result of economic changes experienced during this time period such as declines in the unemployment rate, among other things. For example, both Social Security’s Federal Old-Age and Survivors Insurance Trust Fund (OASI) and the Unemployment Trust Fund are funded primarily by payroll taxes, which tend to gather more revenue during periods when employment goes up and wage growth increases. While the net change in total trust fund and other dedicated fund balances was positive from fiscal year 2014 to 2018, not all trust fund and other dedicated fund balances grew over the time period. 
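Growth figures like the 13 percent above follow from simple percent-change arithmetic. A minimal helper, using illustrative placeholder balances rather than the actual government-wide totals:

```python
def nominal_growth_pct(start_balance, end_balance):
    """Percent change in nominal terms over a period (no inflation adjustment)."""
    return (end_balance - start_balance) / start_balance * 100.0

# Placeholder totals in trillions of dollars -- illustrative only.
print(round(nominal_growth_pct(5.00, 5.65), 1))  # 13.0
```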
The five accounts that experienced the largest balance decreases are listed in table 4. From fiscal year 2014 to 2018, the average trust fund and other dedicated fund balance decrease was less than the average balance increase, and a greater number of accounts increased than decreased over the period. About 28 percent of the 398 accounts in our scope had individual balances that changed less than $5 million over the time period (see table 5). The higher total balance in trust funds and other dedicated funds indicates an overall surplus—income exceeding outgo—from fiscal year 2014 to 2018, which could suggest that the federal government intends to dedicate more resources to these specified purposes. Neither the increased total balance nor an individual fund’s balance increase is a signal that any individual fund is on sound financial footing. Similarly, a decreasing balance does not necessarily signal that any individual fund is not on sound financial footing. Assessing the future outlook for some of these funds and programs requires actuarial or other projections and can be subject to various degrees of inherent uncertainty. Not All Federal Trust Funds and Other Dedicated Funds Are Fully Supported by Dedicated Collections Dedicated Collections Are Not the Sole Source of Income for Trust Funds and Dedicated Funds Of our 13 case study accounts, 11 received income from general revenues in addition to their dedicated collections, either through a permanent appropriation or in an annual appropriation. The form, size, and purpose of income from general revenues that our case study accounts received varied greatly based on the design of the program. These accounts fall into three basic types: those that received regular income from general revenues as a part of their program design, those that received intermittent general revenue income, and those that received income solely from their own dedicated collections. 
See appendix II for more detailed information about the income, outgo, investments, and current issues for each of these accounts. Regular Income from General Revenues as a Part of Program Design Eight of the case study accounts we examined regularly receive income from general revenues in addition to their dedicated collections. These general revenues are often for specific purposes that have been deemed public goods and are provided annually as a part of the program’s design. The Medicare Supplementary Medical Insurance trust fund sets medical insurance premium rates for Medicare Part B to cover 25 percent of expected costs for the year. The roughly 75 percent remaining expected program cost is funded through general revenue. The Medicare HI Trust Fund also regularly receives general revenues to reimburse the fund for the cost of certain uninsured beneficiaries, program management activities, and other miscellaneous activities. In fiscal year 2018, $1.6 billion in general revenue was transferred into the trust fund. Both the Civil Service Retirement and Disability Fund (CSRDF) program and the Federal Employees Health Benefits Fund receive contributions from both current employees and their employing agencies as their primary sources of income, but these accounts also receive some general revenue in addition to these dedicated collections. Treasury is required by law to transfer an amount annually to the CSRDF from the General Fund to subsidize in part the underfunding of the Civil Service Retirement System. The Civil Service Retirement System is closed to new participants but covers most federal employees who first entered a covered position prior to 1984. According to OPM officials, the Federal Employees Health Benefits program is funded about 30 percent by contributions from participants and about 70 percent by contributions from their employing agencies. 
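The statutory cost-sharing splits described above (premiums covering 25 percent of expected Part B costs, or the roughly 30/70 participant-to-agency split for FEHB) reduce to a simple calculation. A hedged sketch, with a hypothetical expected cost:

```python
def split_financing(expected_cost, dedicated_share):
    """Divide an expected program cost between dedicated collections
    (e.g., premiums) and general revenues or employer contributions,
    per a statutory cost-sharing percentage."""
    dedicated = expected_cost * dedicated_share
    remainder = expected_cost - dedicated
    return dedicated, remainder

# Part B-style split: premiums set to cover 25% of expected cost.
# The $400 (billion) figure is hypothetical, chosen for illustration.
premiums, general_revenue = split_financing(400.0, 0.25)
print(premiums, general_revenue)  # 100.0 300.0
```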
OPM contributes the employer share of the premiums for most annuitants via an appropriation from general revenues. The U.S. Postal Service (USPS) receives annual appropriations from general revenues to fund mail for the blind and overseas absentee voting. These appropriations account for less than 0.1 percent of the total cash outlays of the Postal Service Fund. USPS received $58 million in appropriations for these activities in fiscal year 2018, when total outlays were $69 billion. The Social Security Trust Funds, both OASI and DI, receive reimbursements from general revenue for several distinct purposes, such as employee union expenses and the payroll tax holiday, among other things. The total appropriations for these two activities were about $23 million in fiscal year 2018. While the Airport and Airway Trust Fund primarily receives dedicated collections, it has received some appropriations from general revenue in recent years, and some of the programs it funds also receive regular appropriations from general revenue. The most prominent example is the operations and management account within the Federal Aviation Administration. While this account is funded mostly by transfers from the Airport and Airway Trust Fund, it also typically receives an annual appropriation from general revenues. In fiscal year 2018, the appropriation to the operations account was $1.36 billion, which was about 13 percent of the total budget authority in the account. Intermittent General Revenue Income Three of the case study accounts we examined were supported in part by general revenue income on an intermittent basis in recent years. These general revenues helped temporarily restore solvency to programs that were not designed to be fiscally sustainable. The Highway Trust Fund has received appropriations from general revenues as a part of its reauthorization process in recent years. 
The most recent reauthorization provided $70 billion in general revenue to the Highway Trust Fund from fiscal year 2016 through fiscal year 2020. The appropriations have allowed outlays to exceed dedicated collections in most years without exhausting assets in the fund. The National Flood Insurance Fund had $16 billion of its debt canceled by the Additional Supplemental Appropriations for Disaster Relief Requirements Act, 2017. This cancellation converted a $16 billion liability of the fund to a cost borne by general revenues. However, the National Flood Insurance Program (NFIP) still owes $20.5 billion to Treasury. As we recently reported, NFIP likely will not generate sufficient revenues to cover its expenses and repay its outstanding debt because its premium rates do not reflect the full risk of loss. The Flood Insurance Reserve Fund did not directly benefit from the debt cancellation, but it did receive an indirect benefit since it was established as a reserve fund to help meet expected future obligations and repay debt. Income Solely from Dedicated Collections Two of our case study accounts did not receive income from general revenue in recent years. For both of these accounts, the agencies have some authority to adjust their dedicated collections to cover their projected costs. The flexibility to adjust income levels based on projections can help contribute to the sustainability of the funds. Although the Tennessee Valley Authority (TVA) was originally funded primarily by appropriations from Congress when it was established in 1933, TVA fulfilled its requirement to repay this investment in 2014 and currently collects enough revenue to cover its operating expenses. The TVA Board has the authority to determine rates for its electric power and the Tennessee Valley Authority Act of 1933 mandates that TVA keep rates as low as feasible while still collecting sufficient revenue. The Universal Service Fund (USF) does not receive income from general revenue. 
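An agency with authority to adjust its dedicated collections is, in effect, solving for a rate that recovers projected costs from the collection base, much like the quarterly factor FCC sets for the USF. A minimal sketch with hypothetical figures (the function name and dollar amounts are illustrative, and an actual agency calculation would include adjustments not modeled here):

```python
def cost_recovery_rate(projected_cost, projected_base):
    """Rate on the dedicated collection base needed to recover
    projected program costs for the upcoming period."""
    return projected_cost / projected_base

# Hypothetical quarter (billions): $2.1B projected program cost
# against a $7.0B assessable revenue base.
rate = cost_recovery_rate(2.1, 7.0)
print(round(rate, 3))  # 0.3
# A payer with $100M of assessable revenue would owe 100.0 * rate.
```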
The Federal Communications Commission (FCC) has some flexibility to set the contribution factor, which determines the payments telecommunications carriers are required to make into the fund. FCC officials told us that they must set the rates at levels so that they collect enough in dedicated collections to cover the projected demand for the programs they have adopted. FCC sets the contribution factor quarterly to cover the projected cost of the USF programs for the upcoming quarter, up to the authorized level of spending for each program. Even funds that rely primarily on their dedicated collections may not be fiscally sustainable. For example, the Social Security OASI and DI, and Medicare HI trust funds do not receive income from general revenues to support benefit payments. However, projections show that their dedicated collections are expected to be insufficient to fully cover scheduled outlays in the next 7 to 33 years. Conversely, some accounts supported by the Airport and Airway Trust Fund received appropriations from general revenue in recent years. However, the Airport and Airway Trust Fund has received more in dedicated collections than are made available to outlay through appropriations. As such, the fund carries a balance that is unavailable without further appropriations action. At the end of fiscal year 2018, the total cash balance in the Airport and Airway Trust Fund was about $17 billion. CBO projects this balance to grow more than threefold over the next 10 years. Total Trust Fund and Special Fund Balances Are Projected to Start Decreasing in Fiscal Year 2022 Although overall federal trust and other dedicated fund balances grew over the past 5 fiscal years, this trend is not projected to continue. In CBO’s most recent trust fund projections, overall federal trust fund and special fund balances are projected to start declining in fiscal year 2022. CBO does not estimate projected balances for public enterprise funds. 
As shown in figure 4, the projected decline is largely explained by declines in the Social Security and Medicare fund balances. We have previously reported that demographic factors, such as an aging population and slower labor force growth, are contributing to a gap between Social Security program costs and revenues. According to the most recent Social Security Trustees Report, Social Security’s costs, on a combined OASI and DI basis, have exceeded its non-interest income since 2010 and are projected to exceed total income, including interest, starting in 2020. The Medicare and Social Security Trustees and CBO projections show that several major trust funds will deplete their assets in the next 3 to 33 years (see figure 5). If no action is taken, these trust funds are projected to be unable to fully support paying their projected obligations. Projected trust fund balances can provide a vital signaling function for policymakers about underlying fiscal imbalances in covered programs. However, program sustainability is ultimately determined by whether the government as a whole has the economic capacity to finance the claims on the trust funds at the cost of other competing priorities. The economic flexibility of the federal government may be limited as debt held by the public grows as a percentage of gross domestic product (GDP). Debt held by the public was $15.8 trillion—or 78 percent of GDP—at the end of fiscal year 2018. It is projected to surpass its historical high of 106 percent of GDP within 13 to 20 years, and climb to between about 250 and 500 percent by 2092. Further, neither the long-term projections of federal debt nor CBO’s trust fund balance projections include certain fiscal risks that could affect the federal government’s financial condition in the future. Fiscal risks, or fiscal exposures, are responsibilities, programs, and activities that may legally commit or create expectations for future federal spending. 
Many of the largest trust funds and other dedicated funds face fiscal risks that are highlighted in our High-Risk List due to the financial uncertainty they face. For example, USPS—USPS financial viability continues to be high-risk because USPS cannot fund its current level of services and financial obligations from its revenues. Pension Benefit Guaranty Corporation (PBGC)—PBGC’s liabilities exceeded its assets by about $51 billion as of the end of fiscal year 2018. PBGC’s financial future remains uncertain, due in part to a long- term decline in the number of traditional defined benefit plans and the collective financial risk of the many underfunded pension plans PBGC insures. NFIP—Emphasizing affordability has led to premium rates that in many cases do not reflect the full risk of loss and produce insufficient premiums to pay for claims. Highway Trust Fund (HTF)—The nation’s surface transportation system is under growing strain and the cost to repair and upgrade the system to meet current and future demand is estimated in the hundreds of billions of dollars. A sustainable solution would balance revenues to and spending from the HTF. Ultimately, major changes in transportation spending or in revenues, or in both, will be needed to bring the two into balance. The Medicare Program—Medicare continues to challenge the federal government because of its outsized impact on the federal budget and the health care sector as a whole, the large number of beneficiaries it serves, and the complexity of its administration. Federal spending for Medicare programs is expected to significantly increase in the coming years. As overall trust and special fund balances are projected to decrease, our projections and those from the Fiscal Year 2018 Financial Report of the United States Government, and CBO show that the federal government will have to borrow more from the public to offset the decrease in intragovernmental debt. 
We have reported that existing federal debt held by the public is already large by historical norms, and CBO has noted that large and growing amounts of federal debt held by the public would have negative long-term consequences for the economy and constrain future budget policy. To change the long-term fiscal path, policymakers will likely need to consider policy changes to the entire range of federal activities, both revenue and spending. Most Large Trust Funds and Other Dedicated Funds Have Mandatory Budget Authority and Support Entitlement Programs Nearly All Outgo from Trust Funds and Other Dedicated Funds Was Mandatory, Thus Available to Be Spent without Further Appropriation During fiscal year 2018, almost 98 percent of outgo (i.e., outlays and transfers to another government account) from trust funds and other dedicated funds was mandatory budget authority. This is greater than the proportion of total federal spending that is mandatory. According to OMB, during fiscal year 2018, mandatory spending made up 69.3 percent of all federal outlays while discretionary spending accounted for the remaining 30.7 percent. Seventy-six percent of trust funds and other dedicated funds had some mandatory budget authority (see table 6). Some funds have a mix of mandatory and discretionary budget authority. In general, the collections and balances of accounts with mandatory spending authority are available for obligation. Mandatory authority provides some flexibility for agencies because they do not have to await congressional action to incur obligations and make payments. For example, the Social Security Trust Funds have mandatory budget authority, which authorizes the program to continue to make payments to beneficiaries during lapses in appropriations. Although programs with mandatory authority need not go through the annual appropriations process, they are still subject to congressional oversight. 
In some cases Congress has set obligation limits in annual appropriations acts. For example, although the Crime Victims Fund has mandatory budget authority to obligate funds from its available balances, limits in annual appropriations acts have often capped the amount that may be obligated in each fiscal year. As a result, annual income has exceeded outgo and the balance of the fund had grown to $16.6 billion at the end of fiscal year 2018. Designation as mandatory or discretionary budget authority determines how budget control mechanisms are applied to the funds. Sequestration applies annually to mandatory spending, but certain budget authority is exempt or subject to special rules. Of the 13 case studies we reviewed, nine are exempt from cancellation under budget enforcement sequestration procedures and four—Medicare Supplementary Medical Insurance, Medicare Hospital Insurance, the HTF, and the Airport and Airway Trust Fund—are partially sequestrable (i.e., certain budgetary resources specified by law within the accounts are not subject to cancellation under budget enforcement sequestration procedures). For example, Social Security, Medicaid, and veterans’ compensation are completely exempt, and Medicare reductions are limited to 2 percent. Exemptions and special rules lead sequestration to affect some areas of the federal government more than others. For example, programs without exempt status, such as the Commodity Credit Corporation Fund, bear a greater reduction than they would if cuts were applied evenly to all programs. Outgo from those trust funds and other dedicated funds that do not have mandatory budget authority are controlled in the annual appropriations process and count toward the annual discretionary spending limits laid out in the Budget Control Act of 2011 (BCA). For example, outlays from the Airport and Airway Trust Fund are discretionary. 
This means that the outlays for capital improvements and operations of the nation’s airport and airway system, except for airport grants, count toward government-wide discretionary spending limits. Some trust funds and other dedicated funds have a combination of budget authorities, which can affect balances. For example, the Harbor Maintenance Trust Fund (HMTF), which is supported through collections of the harbor maintenance fee, has mandatory income and discretionary outlays. Historically, HMTF income has exceeded outgo and by the end of fiscal year 2018, the balance in the fund had grown to $9.3 billion. Any proposed legislation to lower the fee revenues would require an offset so as not to increase the deficit. Conversely, since the spending is subject to the discretionary caps, any increase in spending to align with program revenues would count toward the discretionary spending limits. Most spending from trust funds and other dedicated funds is mandatory and not controlled by the annual appropriations process. We have previously reported that the increase in mandatory spending has long-term implications for the nation’s fiscal outlook overall, including the growing federal debt. The federal government has previously enacted fiscal rules in the form of laws that constrain fiscal policy decisions, including BCA and PAYGO. These fiscal rules apply the same way regardless of status as a trust fund or other dedicated fund. However, in practice, fiscal rules that apply to mandatory budget authority are more relevant to these types of accounts, because mandatory budget authority is more concentrated in trust funds and other dedicated funds than it is in the federal budget as a whole. 
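The interaction of a uniform sequestration order with the special-rule caps noted earlier (such as the 2 percent limit on Medicare reductions) can be sketched as follows. The 5 percent order and the dollar amount are hypothetical, chosen for illustration:

```python
def sequester_reduction(budgetary_resources, order_pct, cap_pct=None):
    """Dollar reduction under a sequestration order, honoring any
    special-rule cap on the applicable percentage. Exempt accounts
    would simply not be run through this calculation at all."""
    pct = order_pct if cap_pct is None else min(order_pct, cap_pct)
    return budgetary_resources * pct / 100.0

# Hypothetical 5% order applied to $100M of sequestrable resources:
print(sequester_reduction(100.0, 5.0))               # 5.0 (uncapped account)
print(sequester_reduction(100.0, 5.0, cap_pct=2.0))  # 2.0 (Medicare-style 2% cap)
```

This is why exemptions and special rules concentrate sequestration's effect: accounts without them, such as the Commodity Credit Corporation Fund, absorb the full uncapped percentage.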
The Majority of the Largest Trust Funds and Other Dedicated Funds Are Entitlements—Legal Commitments
Of the 23 largest trust funds and other dedicated funds we reviewed, 13 have entitlement authority, which legally requires payments to individuals or governments that meet the requirements of the programs (see table 7). For example, OASI beneficiaries are legally entitled to benefits based on a formula that takes into account the time they spent working and their earnings, among other factors. Some trust funds have mandatory budget authority, but not entitlement authority. For example, the USF, the National Flood Insurance Reserve Fund, and the Tennessee Valley Authority Fund all have mandatory budget authority, but have no entitlement authority. These programs have the most flexibility because their income is available without further appropriations action and their outgo is not driven by legal requirements to individuals or governments. For example, Federal Communications Commission officials told us that they manage the size of each program funded by the USF to stay within an approved budget. Although entitlements represent a current legal commitment and trust funds and other dedicated funds demonstrate the government’s intent to restrict the use of those funds to a specific purpose, the government can change the terms of entitlement programs, including those financed through trust funds or other dedicated funds, by changing the substantive law. Congress and the President can raise or lower future trust fund collections or payments or change the purposes for which the collections can be used. For example, in 1983 a number of changes were made to the Social Security program, including an increase in the full retirement age and a new tax on a portion of Social Security benefits, which increased collections and lowered future outgo. 
Agency Comments and Our Evaluation
We provided a draft of this report and the online dataset to the Director of OMB and the Secretary of the Treasury for review and comment. We also provided a draft of this report and the online dataset to our case study agencies: the Centers for Medicare & Medicaid Services, the Federal Communications Commission, the Federal Emergency Management Agency, the Department of Transportation (for the Federal Aviation Administration and the Federal Highway Administration), the Office of Personnel Management, the Social Security Administration, the Tennessee Valley Authority, and the U.S. Postal Service for review and comment. The Social Security Administration and the U.S. Postal Service provided written responses thanking us for providing the opportunity to review the report, which are published in appendixes III and IV. The Centers for Medicare & Medicaid Services, the Federal Communications Commission, the Department of Transportation, the Office of Personnel Management, the Tennessee Valley Authority, and the U.S. Postal Service provided technical comments, which we incorporated as appropriate. OMB, Treasury, and the Federal Emergency Management Agency reviewed our draft report and had no comments. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 7 days from the report date. At that time, we will send copies to interested congressional committees, the Director of the Office of Management and Budget, the secretaries and agency heads of the departments and agencies in our review, and other interested parties. In addition, the report is available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Tranchau (Kris) T. Nguyen at (202) 512-6806 or nguyentt@gao.gov. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V.
Appendix I: Objectives, Scope, and Methodology
This report examines: (1) how the size and scope of federal trust funds and other dedicated funds in the federal budget have changed over time, (2) the extent to which federal trust funds and other dedicated funds are supported by their dedicated collections, and (3) the extent to which federal trust funds and other dedicated funds support mandatory programs, including major entitlement programs. To examine trends in the size and scope of federal trust funds and other dedicated funds, we used Office of Management and Budget (OMB) budget data to identify the income, outgo (i.e., outlays and transfers to another government account), and end-of-year balances for all revolving trust funds, special funds, non-revolving trust funds, and public enterprise funds reported in OMB’s budget database, OMB MAX, for fiscal years 2014 to 2018 in nominal terms. We excluded financing and credit accounts because they are non-budgetary. For the majority of these data we used the amounts reported in OMB MAX schedule J, which is used to produce the Status of Funds tables in the President’s Budget Appendix. While the list of accounts that report Status of Funds tables publicly in the budget is limited to 21 accounts, a schedule J is created in OMB MAX for all non-revolving trust funds and special funds and, for the years in our review, for all revolving trust funds. Schedule J data are not available for public enterprise funds, so we used guidance from OMB Circular No. A-11 to approximate similar income, outgo, and balance data using data fields that are reported in the Program and Financing table. 
The public enterprise fund data are slightly different from the data for the other fund types because borrowing authority as it is reported in OMB MAX only includes information on repayable advances and excludes information on outstanding debt and borrowing. We asked OMB staff to review our methodology to calculate these numbers, and they agreed our approach was methodologically sound. To assess the reliability of OMB MAX data related to the income, outgo, and balances of trust fund and other dedicated fund accounts, we reviewed related documentation, interviewed knowledgeable OMB staff, and conducted electronic data testing. We found these data reliable for our purposes. OMB budget data are rounded to the nearest million and do not show funds with amounts less than $500,000. Accordingly, OMB instructs agencies to consolidate small trust fund accounts with larger general fund accounts so the total government-wide amounts will be complete. In addition, OMB sometimes reports trust fund groups under a single account rather than each individual trust fund account. Groups may include two or more trust funds with similar purposes. The Department of the Treasury (Treasury), on the other hand, tracks monies for each discrete account to the penny in order to fulfill its government-wide accounting and cash management responsibilities. As such, we used data from the Treasury Fiscal Year 2018 Combined Statement of Receipts, Outlays, and Balances of the United States Government to provide a complete count of these funds, including accounts with small balances and accounts that are a part of groups. We interviewed Treasury officials, reviewed relevant documentation, and conducted electronic and manual testing to assess whether these data were reliable for our purposes, and concluded that they were. 
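The income, outgo, and balance figures described throughout this methodology follow a simple fund accounting identity: an account's end-of-year balance equals its start-of-year balance plus income (dedicated collections, interest, and transfers in) minus outgo (outlays and transfers to other government accounts). A minimal sketch in Python, using hypothetical figures rather than actual OMB MAX data:

```python
# Minimal sketch (hypothetical figures) of the budget identity used to
# roll an account's balance forward: end-of-year balance equals the
# starting balance plus income minus outgo.
def end_of_year_balance(start_balance, income, outgo):
    """Return the end-of-year balance in the same units as the inputs."""
    return start_balance + income - outgo

# Roll a hypothetical fund forward over three fiscal years ($ billions).
balance = 100.0
for income, outgo in [(20.0, 15.0), (21.0, 16.0), (22.0, 18.0)]:
    balance = end_of_year_balance(balance, income, outgo)

print(round(balance, 1))  # 114.0
```

As the sketch illustrates, a fund's balance grows whenever income exceeds outgo in a given year, which is how funds such as the Crime Victims Fund and the HMTF accumulated their balances.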
To examine the extent to which federal trust funds and other dedicated funds are supported by their dedicated collections, in addition to the data described above, we examined 13 case study accounts in nine agencies. We selected a set of accounts to include the largest of each of the four types of trust funds and other dedicated funds and a variety of program designs (see table 8). We used gross outlays from fiscal year 2017 to identify the largest accounts, since those were the most recent data available at the time of account selection. Overall, our selected accounts covered 88 percent of the total gross outlays among these types of accounts in fiscal year 2017. We also ensured that our set of case study accounts included: at least one account from each of the four fund types in our scope, a range of programs with different goals (e.g., infrastructure, insurance, federal employee benefits), and budget authority with different characteristics. The budget authority included in our case study selection represented examples of both mandatory and discretionary budget authority. We also ensured that budget authority from appropriations, borrowing authority, contract authority, and offsetting collections were represented in at least one case study. While the case studies were selected to capture the largest funds and a diversity of programs and funding characteristics, findings from the case studies cannot be generalized to all trust funds and other dedicated funds. 
We also reviewed agency financial, budget, and performance reports, Congressional Budget Office trust fund projections, the 2019 Annual Report of the Board of Trustees of the Federal Old-Age and Survivors Insurance and Federal Disability Insurance Trust Funds (Social Security Trustees), the 2019 Annual Report of the Board of Trustees of the Federal Hospital Insurance and Federal Supplementary Medical Insurance Trust Funds (Medicare Trustees), and our prior reports, and interviewed officials from each of the case study agencies. To examine the extent to which federal trust funds and other dedicated funds support mandatory programs, including major entitlement programs, we used OMB budget data to calculate the prevalence of discretionary budget authority, which is controlled through appropriations acts, and mandatory budget authority, which generally refers to budget authority provided through laws other than appropriations acts, in federal trust funds and other dedicated funds. OMB budget data do not systematically identify entitlement authority. To determine which of the largest trust funds and other dedicated funds have entitlement authority, we analyzed the authorizing statutes for our case study accounts and 10 additional accounts with the next largest gross outlays. While the entitlement analysis was designed to cover nearly all of the total outlays from these types of accounts, the findings from this analysis are not representative of all trust funds and other dedicated funds and cannot be generalized to the other 375 accounts in our scope. We also reviewed budget enforcement mechanisms, such as sequestration, that apply to these types of budget authority through review of relevant laws, our prior work, and OMB documents. We conducted this performance audit from October 2018 to January 2020 in accordance with generally accepted government auditing standards. 
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Selected Case Study Profiles
To illustrate the variety of federal trust funds and other dedicated funds, and examine the extent to which they are supported by their dedicated collections, we examined 13 case study accounts in nine agencies. We selected accounts listed in table 9 to include the largest of each of the four types of trust funds and other dedicated funds and a variety of program designs. Each case study profile in this appendix includes income, outgo, investments, and current issues related to the account, as well as the following account information:
Fund types. OMB and Treasury designate budget accounts as “trust funds” and other fund types that link dedicated collections with their expenditure based on legislation. The fund types in this appendix include:
Non-revolving trust fund. An account designated as a “trust fund” by law that is credited with dedicated collections, which can often, but not always, be used without further appropriation action.
Special fund. Analogous to a non-revolving trust fund but not classified as a trust fund in name.
Revolving trust fund. An account designated as a “trust fund” by law that is credited with collections that are generally available for obligation without further appropriation action, to carry out a cycle of businesslike operations in accordance with statute.
Public enterprise fund. Analogous to a revolving trust fund but not classified as a trust fund in name. 
A public enterprise fund is a type of revolving fund that carries out a cycle of businesslike operations, mainly with the public, in which it charges for the sale of products or services and uses the proceeds to finance its spending, usually without requirement for annual appropriations.
Entitlement authority. Whether or not outgo from the fund is controlled by an entitlement authority, which is the authority to make payments to any person or government if the U.S. government is legally required to make the payments to persons or governments that meet the requirements established by law.
Budget Enforcement Act category. OMB’s designation as to whether the funds in the account are classified as discretionary or mandatory depending on the nature of the substantive legislation creating the fund.
Discretionary. Budget authority provided in and controlled through appropriations acts.
Mandatory. Budget authority provided through laws other than appropriations acts, and the outlays that result from such budget authority.
Sequestration status. OMB’s designation of the authority for purposes of sequestration, which is the cancellation of budgetary resources under a presidential order. We defined the status categories as follows:
Exempt. Accounts for which budgetary resources are exempt from cancellation under budget enforcement sequestration procedures.
Sequestrable. Accounts for which budgetary resources are subject to cancellation under budget enforcement sequestration procedures.
Partially Sequestrable. Accounts for which certain budgetary resources specified by law within the account are not subject to cancellation under budget enforcement sequestration procedures.
Staff Acknowledgments
In addition to the contact named above, Susan E. Murphy (Assistant Director), Katherine D. Morris (Analyst in Charge), Alicia Cackley, Janice Ceperich, Jacqueline Chapin, Steven Cohen, Michael Collins, James Cosgrove, Robert Dacey, Karin Fangman, Paul Foderaro, Carol Henn, James A. Howard, Susan J. Irving, Charles Jeszeck, Kenneth John, Heather Krause, Natalie Logan, Scott McNulty, John Mingus, Sally Moino, Tracie Sanchez, Lori Rectanus, Frank Rusco, Dawn Simpson, Frank Todisco, Peter Verchinski, and Alicia White made key contributions to this report.
Why GAO Did This Study
Some of the largest federal programs, including Medicare, Social Security, and postal services, are funded through trust funds and other dedicated funds, which link collections that have been dedicated to a specific purpose with the expenditures of those collections. While these funds have the ability to retain accumulated balances, these collections do not necessarily fund the full current or future cost of the government's commitments to the designated beneficiaries. GAO was asked to review issues related to federal trust funds and other dedicated funds. This report examines (1) how the size and scope of federal trust funds and other dedicated funds in the federal budget have changed over time, (2) the extent to which these funds are supported by their dedicated collections, and (3) the extent to which these funds support mandatory programs, including major entitlement programs. GAO analyzed OMB data on trust funds and other dedicated funds for fiscal years 2014 through 2018 and the Department of the Treasury's (Treasury) Fiscal Year 2018 Combined Statement of Receipts, Outlays, and Balances. GAO also examined 13 case study accounts in nine agencies, selected to include the largest of each type of these funds and a variety of program designs. GAO reviewed agency reports, CBO trust fund estimates for 2018 and projections for 2019 to 2029, and prior GAO reports, and interviewed OMB staff and officials from Treasury and each of the case study agencies. GAO also is providing an online dataset of these funds at https://www.gao.gov/products/GAO-20-156.
What GAO Found
Every major federal department has at least two trust funds or other dedicated funds. According to GAO analysis of Office of Management and Budget (OMB) data, balances in these funds, which can be used to support covered programs, grew 13 percent in nominal terms from fiscal years 2014 through 2018. 
Fund balances are affected by complex interactions of factors, but the total increase was driven largely by military and civilian retirement fund balances. The Congressional Budget Office (CBO) projects the total balance to start declining in fiscal year 2022, as decreases in Medicare and Social Security balances will exceed increases in military and civilian retirement balances. To offset the overall decrease, the federal government is projected to borrow more from the public. GAO found that 11 of 13 case studies recently received general revenue—collections that are not dedicated by law for a specific purpose. For example, medical insurance premiums for Medicare Part B are set to cover 25 percent of expected costs; the remaining 75 percent are covered by general revenues. Even funds that rely primarily on their dedicated collections may not be fiscally sustainable. For example, the Social Security Old-Age and Survivors Insurance Trust Fund only uses dedicated collections for benefit payments, but its balances are projected to be depleted by 2034. Nearly 98 percent of outlays and transfers from trust funds and other dedicated funds were made through mandatory authority, which allows agencies to make payments without further congressional action. Most of the 23 largest funds also have entitlement authority, which generally requires payments to eligible parties based on legal requirements. Status as a trust fund, mandatory program, or entitlement does not prevent Congress and the President from changing related laws to alter future collections or payments.
Background
NASA awarded firm-fixed-price contracts in 2014 to Boeing and SpaceX, valued at up to $4.2 billion and $2.6 billion, respectively, for the development of crew transportation systems that meet NASA requirements and for the initial service missions to the ISS. Figure 1 shows the spacecraft and launch vehicles for Boeing and SpaceX’s crew transportation systems. These contracts encompass the firm-fixed-price design, development, test, and evaluation work needed to support NASA’s certification of the contractors’ spacecraft, launch vehicle, and ground support systems and begin operational missions to the ISS. The Commercial Crew Program manages two processes in order to support the contractors’ uncrewed test flight, crewed test flight, and certification milestone. For both processes, the contractors must submit evidence that the Commercial Crew Program must review and approve. A three-phased safety review process informs the program’s quality assurance activities and is intended to ensure that the contractors have identified all safety-critical hazards and implemented associated controls prior to the first crewed test flight. In phase one, the contractors identify risks in their designs and develop reports on potential hazards, the controls they put in place to mitigate them, and explanations for how the controls will mitigate the hazards. In phase two, the program reviews and approves the contractors’ hazard reports and develops strategies to verify and validate that the controls are effective. In phase three, the contractors will conduct the verification activities and submit the hazard reports to the program for approval. The verification closure notice process is used to verify that the contractors have met ISS requirements, which apply to any spacecraft flying to the ISS, as well as Commercial Crew Program requirements. 
After the contractor has successfully completed its uncrewed and crewed test flights and the above processes, the program determines at the contractor’s certification milestone whether the crew transportation system meets NASA’s requirements for human spaceflight. Following this contract milestone is an agency certification review, which authorizes the use of a contractor’s system to transport NASA crew to and from the ISS. It is at this point that the contractors can begin operational missions. Figure 2 shows the path leading to operational missions.
Contractors Are Making Progress on Vehicles, but Certification Date Remains Unclear
Both contractors have made progress building and testing hardware, and SpaceX has completed its uncrewed test flight. But continued schedule delays and remaining work for the contractors and the program create continued uncertainty about when either contractor will be certified to begin conducting operational missions to the ISS. The program has made progress reviewing the contractors’ certification paperwork, but contractor delays in submitting evidence for NASA approval may compound a ‘bow wave’ of work, which creates uncertainty about when either contractor will be certified. NASA acknowledged the schedule uncertainty in February 2019, when it announced plans to purchase two additional Soyuz seats from Russia, citing concerns about the difficulties associated with achieving first flights in the final year of development.
Construction and Testing of Contractors’ Hardware Is Progressing
Both contractors are building several spacecraft, some of which are near completion. Each contractor’s spacecraft includes two main modules: Boeing’s spacecraft—CST-100 Starliner—is composed of a crew module and a service module. The crew module will carry the crew and cargo. It also includes communication systems, docking mechanisms, and return systems for Earth landing. 
The service module provides propulsion on-orbit and, if needed, in abort scenarios—when a failure prevents continuation of the mission and a return is required for crew survival—as well as radiators for thermal control. SpaceX’s spacecraft—Dragon 2—is composed of a capsule, which we refer to as the crew module, and a trunk, which we refer to as the support module. The crew module will carry the crew and cargo. It also includes avionics, docking mechanisms, and return systems for a water landing. The support module includes solar arrays for on-orbit power and guidance fins for escape abort scenarios. Different spacecraft will be used for the uncrewed test flight and the crewed test flight, as well as to support other test events. See table 1 for a description of each contractor’s hardware builds, current status, and upcoming events. Additional details on select hardware testing follow. In June 2018, Boeing experienced an anomaly while testing its launch abort engines. During a test firing, four of the eight total valves in the four launch abort engines failed to close after a shutdown command was sent. In response to this event, Boeing initiated an investigation to identify the root cause. According to Boeing officials, Boeing plans to replace components on all of its service modules except for the uncrewed test flight service module. This is because the abort system will not be active for the uncrewed test flight. Boeing plans to resume testing its launch abort engines in May 2019. A NASA official told us that addressing this anomaly and identifying its root cause resulted in a 12-month schedule delay to launch abort propulsion system testing. In March 2019, SpaceX conducted its uncrewed test flight, which demonstrated that the capsule could dock with the ISS and return to Earth. NASA officials described SpaceX’s uncrewed test flight as a success with key systems such as the guidance, navigation, and control and the parachutes performing as expected. 
A SpaceX official told us that this was a very successful test and represented significant risk reduction from a schedule and technical perspective. Subsequently, the spacecraft used in the uncrewed test flight was destroyed in a testing anomaly. The anomaly occurred during a test that SpaceX was conducting in advance of an in-flight abort test scheduled for this summer. As of May 2019, SpaceX was investigating the anomaly.
Repeated Delays and Remaining Work Create Continued Uncertainty for Certification
Continued schedule delays create uncertainty about when NASA will certify either contractor to begin conducting operational missions to the ISS. We have previously found that the contractors’ schedules regularly changed, and this pattern continues. As of May 2019, both contractors have delayed their certification milestone nine times since establishing dates in their original contracts. In the span of less than a year since our July 2018 report, Boeing has again delayed its certification milestone four times, by a total of 12 months, while SpaceX has again delayed its certification milestone three times, by a total of 7 months. Both contractors are now planning for certification to occur more than 2 years beyond the original dates in their contracts: Boeing in January 2020 and SpaceX in September 2019, though SpaceX’s date is under review and could slip further (see figure 3). Over time, both program and contractor officials have told us that they struggle to establish stable schedules. In 2018, the Commercial Crew Program manager told us that she relied on her previous experience to estimate schedule time frames as opposed to relying on the contractors’ schedules, which were overly optimistic. In March 2019, a senior NASA official told us that the agency has struggled to establish schedules with both contractors, often needing to negotiate dates with senior company officials. 
Further, SpaceX officials explained that they would not know the schedule for the crewed test flight until they conducted the uncrewed test flight. However, even after conducting the uncrewed test flight in March 2019, and before the April 2019 anomaly, SpaceX and NASA were still re-evaluating the schedule for the crewed test flight.
Contractors’ Technical Risks Create Continued Uncertainty for Certification
Both contractors are continuing to mitigate technical risks identified by program officials that need to be addressed in order to reach certification. The program will close a risk when the contractor is able to fully mitigate it. If all mitigation activities are exhausted, but a risk still remains, the program will determine if the risk is acceptable as part of the agency’s rationale for flight. As the contractors address these technical risks and proceed through integration and testing, any issues that arise during testing or the test flights could further delay certification. Program risks for Boeing include: Parachute System Certification. Boeing is conducting five parachute system qualification tests to demonstrate that its system meets the Commercial Crew Program’s requirements, which will be validated on two spacecraft flight tests. However, in August 2018, Boeing identified a faulty release mechanism for its drogue parachute—which initially slows down the capsule—during its third parachute qualification test that successfully deployed all parachutes. Identifying and fixing the faulty mechanism delayed its fourth parachute qualification test. According to a NASA official, Boeing is conducting testing to qualify an alternative design, and Boeing must qualify this alternative design before the crewed test flight. Launch Vehicle Engine Anomaly. Boeing is addressing a safety risk related to a launch vehicle component. 
Specifically, during a 2018 launch, the launch vehicle engine position during ascent deviated from commands, but the launch vehicle provider stated that it achieved all mission objectives. Program officials told us that they have insight into the launch vehicle manufacturer’s ongoing investigation and have participated in a separate independent review team. Boeing will implement a set of corrective actions for the uncrewed test flight, and will continue testing the engines for the crewed test flight. Spacecraft-Generated Debris. Boeing is addressing a risk that, under normal operating procedures, the initiators that trigger separation events, such as the separation of the crew and service module prior to re-entry, may generate debris and damage the spacecraft. These components function as expected, but Boeing plans to install hardware to contain debris generated when the initiators fire. Program officials told us that they believe Boeing has identified a solution that will be sufficient for the uncrewed and crewed test flights, but the program is continuing to explore a possible redesign for future operational missions. Spacecraft Forward Heat Shield. We previously found that Boeing was addressing a risk that, during descent, a portion of the spacecraft’s forward heat shield may re-contact the spacecraft after it is jettisoned and damage the parachute system. Since our last report, Boeing tested the performance of the forward heat shield in worst-case scenarios and found there was no damage to the parachute system or the spacecraft. After reviewing test data, the program determined that Boeing had completed the mitigation activities and, as of February 2019, no additional steps were needed. Program risks for SpaceX include: Parachute System Certification. Like Boeing, SpaceX is conducting several parachute tests to demonstrate that its system meets the Commercial Crew Program’s requirements. 
However, SpaceX experienced two anomalies with its parachute system in August 2018. As a result, a SpaceX official told us they enhanced the parachute design to improve robustness. NASA officials told us SpaceX’s enhanced parachutes performed well on its uncrewed test flight. Prior to the crewed test flight, SpaceX must demonstrate the performance of its parachute system. SpaceX plans to continue to test its parachutes, and according to a SpaceX official, will take all steps necessary to ensure that the flight design meets or exceeds minimum performance levels. Propellant Loading Procedures. SpaceX is continuing to address a safety risk related to its plans to conduct launch vehicle propellant loading procedures after the astronauts are on board the spacecraft. SpaceX officials told us that this loading process has been used in other configurations for multiple SpaceX flights. The Commercial Crew Program has approved SpaceX’s proposed loading procedures, including the agreed-upon demonstration of the loading procedure five times from the launch site in the final crew configuration before the crewed test flight. The five events include the uncrewed test flight and in-flight abort test. As of March 2019, SpaceX had completed the first two events. Redesigned Composite Overwrap Pressure Vessel. SpaceX is continuing to address a risk that its launch vehicle’s redesigned composite overwrap pressure vessel, which is intended to contain helium under high pressure, may serve as an ignition source. The program and SpaceX conducted tests on the redesigned vessel, and the program determined that all possible ignition sources, with one exception, have a low likelihood of creating ignition. The program continues to assess this ignition source. According to a NASA official, there were no indications of any issues during SpaceX’s uncrewed test flight. SpaceX officials also told us that the redesigned vessel has successfully flown on multiple flights. 
The program will need to determine whether to accept the risk associated with this technical issue prior to SpaceX’s crewed test flight. Engine Turbine Cracking. NASA continues to assess a SpaceX risk related to the design of its launch vehicle engines, which has previously resulted in turbine wheel cracking. To mitigate the turbine cracking risk, SpaceX conducted additional qualification testing and developed an operational strategy that resulted in no cracks. Consequently, the program accepted this risk for SpaceX’s uncrewed test flight but levied a constraint on the crewed test flight. Specifically, SpaceX has agreed to conduct a follow-on test campaign of the engines to demonstrate that it meets NASA’s standards in order to launch its crewed test flight. Program officials said SpaceX plans to build the launch vehicle engines for its crewed test flight concurrently with this follow-on testing series.
Program Office Workload Is a Continued Schedule Risk to Certification
The Commercial Crew Program’s ability to process certification data packages for its two contractors continues to create uncertainty about the timing of certification. Specifically, the program is concurrently reviewing and approving both contractors’ phased safety reviews and verification closure notices. We previously reported that program officials, the contractors, and independent review organizations had concerns about a “bow wave” of work for the program. For example, at that time, the program’s safety and mission assurance office identified the upcoming bow wave of work in a shrinking time period as a top risk to achieving certification. Three-Phased Safety Reviews. The program continues to make progress conducting its phased safety reviews, but it has not yet completed them. In February 2017, we found that the program was behind schedule completing its phased safety reviews and, as of April 2019, it had yet to complete this process. 
As shown in Table 2, the program is near completion of phase two reviews, and phase three reviews are in progress. Program officials told us that they have started work on many of the phase three safety reviews, but the data only reflect their efforts once they complete a phased safety report in its entirety. Any additional delays in completing this process, however, would delay the crewed test flights and create uncertainty about when NASA will certify the contractors to begin operational flights.

Verification Closure Notices. NASA has made progress verifying that the contractors have met ISS and Commercial Crew Program requirements, but much work remains. When a contractor is ready for NASA to verify that it has met a requirement, such as that the contractor’s system can detect and alert the crew to critical faults that could result in a catastrophic event, the contractor submits data for NASA to review through a verification closure notice. Table 3 shows the agency’s progress approving verification closure notices for each contractor. Program officials told us that, because the contract solicitation did not require an uncrewed test flight, they had not previously determined the minimum number of Commercial Crew Program requirements that the contractors should meet prior to an uncrewed test flight. Subsequently, both contractors included an uncrewed test flight as part of their schedules. As these test flights approached, NASA determined that it must verify that the contractors met approximately 20 percent of the program’s requirements before the contractors’ uncrewed test flight and the remaining 80 percent before the contractors’ crewed test flights. The program made this determination based on ensuring the contractors met requirements related to the spacecraft safely approaching and docking to the ISS; ensuring the safety of the ISS and its crew; and meeting any mission-specific requirements for cargo.
Both contractors originally planned for the program to verify they had met more than 20 percent of the Commercial Crew Program requirements before the uncrewed test flight but have subsequently changed their plans. For both contractors, the program is allowing the contractors to submit more verification closure notices between the uncrewed and crewed test flight than initially envisioned. Program officials told us that contractors proposed deferring the submission of verification closure notices because they were having difficulties meeting the original targets. Figure 4 includes SpaceX and Boeing’s original and current plans for verification of requirements compared to the Commercial Crew Program’s minimum level of requirements it determined was necessary for the uncrewed test flight. As reflected in the figure, these new plans, which defer submission of work to the crewed test flight, may compound the program’s bow wave of work and create uncertainty about the timing of certification. Further, the Commercial Crew Program will need to reassess a subset of requirements closed for the uncrewed test flight prior to the crewed test flight. For example, of the 78 requirements Boeing plans to close prior to the uncrewed test flight, the program will re-assess 16; for SpaceX’s 49 requirements, the program will re-assess 32. Program officials told us that some of this work is expected based on known changes to the contractors’ systems between the uncrewed and crewed test flight. For example, officials told us that they approved a verification closure notice for SpaceX’s air conditioning system in order to support the uncrewed test flight, but they know that they will need to re-assess it because SpaceX is making changes before its crewed test flight. While these types of changes and those that are identified through testing are not uncommon, they further add to the program’s workload and create uncertainty about the timing of certification. 
Among the requirements that must be closed before the crewed test flight is loss of crew, which is a metric that captures the probability of death or permanent disability to one or more crew members. According to program risk charts, the program’s top safety risk continues to be that neither contractor will meet the contractual requirement of a 1 in 270 probability of incurring loss of crew. We previously found that NASA lacked a consistent approach for how to assess loss of crew and recommended that key parties, including the program manager, collectively determine and document how the agency will determine its risk tolerance level prior to certifying either contractor. NASA partially concurred with that recommendation, stating that, if neither contractor can meet the loss of crew requirement, the program will request a waiver through the human rating certification process to ensure transparency. As of March 2019, NASA officials told us they have not taken steps to address this recommendation. Officials told us that the Commercial Crew Program is currently reviewing Boeing’s loss of crew verification closure notice and SpaceX’s draft verification closure notice in order to verify if the contractors have met the loss of crew requirement. According to program officials, one of the biggest challenges for the program is balancing its workload to support the two contractors, but officials are making an effort to review each contractor’s data products as they are submitted. For example, program officials told us that they were able to review SpaceX submissions during the summer of 2018, while Boeing’s submissions slowed as it focused on addressing the test anomaly with its launch abort engines. However, based on current schedules, the program must complete its reviews of certification paperwork while supporting uncrewed, crewed, and abort system test flights for both contractors before the end of 2019. 
Both contractors said they have concerns about NASA’s ability to maintain its pace of processing paperwork in order to support the contractors’ planned test flights and certification dates. The potential bow wave of work continues to create uncertainty about the timing of certification for either contractor, which could result in delays to the first operational mission to the ISS.

NASA Is Taking Steps to Mitigate Delays to Start of Operational Missions

In February 2019, NASA announced plans to buy two more Soyuz seats from Russia, thereby acknowledging that delays to certification of the Commercial Crew Program contractors could continue. These seats would extend U.S. access to the ISS from November 2019 through September 2020. According to a senior NASA official, NASA is not purchasing a new Soyuz spacecraft, which we have previously found requires a 3-year lead time. Instead, two additional seats became available on existing vehicles after changes to the Soyuz manifest. In 2015, NASA paid approximately $82 million per seat through its contract with the Russian Federal Space Agency (Roscosmos). Program officials stated they could not publicly disclose the price NASA paid for these two additional seats, but noted that the cost was 5 percent higher per seat than the previous contract modification to purchase Soyuz seats and is consistent with inflation. In addition, NASA plans to extend the duration of Boeing’s crewed test flight. In March 2018, NASA modified its contract with Boeing to allow NASA to add a third crew member and extend the length of the crewed test flight. In July 2018, we reported that NASA was considering this option as one way to maintain a U.S. presence on the ISS, but noted it had limited usefulness if Boeing’s crewed test flight slipped past the return date of the last Soyuz flight.
NASA’s actions—purchasing the two additional Soyuz seats and implementing an extended duration crewed test flight for Boeing—do not fully address our July 2018 recommendation to develop and maintain a contingency plan for ensuring a presence on the ISS until a Commercial Crew Program contractor is certified. NASA concurred with this recommendation but, to fully implement it, NASA needs to provide additional support regarding planning efforts to ensure uninterrupted access to the ISS if delays with the Commercial Crew Program contractors continue beyond September 2020. Continued NASA attention to this issue is needed given the uncertainty associated with the final certification dates.

Agency Comments

We provided a draft of this product to NASA for comment. In its response, reproduced in appendix I, NASA generally agreed with our findings and included an update on the progress made by Boeing and SpaceX. NASA also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the NASA Administrator and interested congressional committees. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or chaplainc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II.

Appendix I: Comments from the National Aeronautics and Space Administration

Appendix II: GAO Contact and Staff Acknowledgments

GAO Contact

Cristina T. Chaplain at (202) 512-4841 or chaplainc@gao.gov.

Staff Acknowledgments

In addition to the contact named above, Molly Traci, Assistant Director; Lorraine Ettaro; Laura Greifner; Kurt Gurka; Joy Kim; Christopher Lee; Katherine Pfeiffer; Roxanna T. Sun; Hai Tran; Kristin Van Wychen; and Alyssa Weir made significant contributions to this report.
Why GAO Did This Study

In 2014, NASA awarded two firm-fixed-price contracts to Boeing and SpaceX, worth a combined total of up to $6.8 billion, to develop crew transportation systems and conduct initial missions to the ISS. In July 2018, GAO found that both contractors continued to delay their certification dates and that further delays were likely. NASA must certify the contractors' crew transportation systems before the contractors can begin operational missions to the ISS. The contractors were originally required to provide NASA all the evidence it needed to certify that their systems met its requirements in 2017. The House Committee on Appropriations included a provision in its 2017 report for GAO to continue to review NASA's human space exploration programs. This is the latest in a series of reports addressing the mandate. This report examines the extent to which the Commercial Crew Program and its contractors have made progress towards certification. To do this work, GAO analyzed contracts, schedules, and other documentation and spoke with officials from the Commercial Crew Program, Boeing, and SpaceX.

What GAO Found

Both of the Commercial Crew Program's contractors, Boeing and SpaceX, have made progress on their crew transportation systems. However, neither is ready to begin carrying astronauts into space, as both continue to experience delays to certification. Certification is a process that the National Aeronautics and Space Administration (NASA) will use to ensure that each contractor's spacecraft, launch vehicle, and ground support systems meet its requirements for human spaceflight before any operational missions to the International Space Station (ISS) can occur. Factors contributing to schedule uncertainty include:

Fluctuating schedules. As the contractors continue to build and test hardware—including SpaceX's March 2019 uncrewed test flight—their schedules for certification change frequently.
As of May 2019, both contractors had delayed certification nine times, equating to more than 2 years from their original contracts (see figure). This includes several delays since GAO last reported in July 2018.

Program Workload. NASA's ability to process certification data packages for its two contractors continues to create uncertainty about the timing of certification. The program has made progress conducting these reviews but much work remains. In addition, the program allowed both contractors to delay submitting evidence that they have met some requirements. This deferral has increased the amount of work remaining for the program prior to certification.

In February 2019, NASA acknowledged that delays to certification could continue, and announced plans to extend U.S. access to the ISS through September 2020 by purchasing seats on the Russian Soyuz vehicle. However, this arrangement does not fully address GAO's July 2018 recommendation to develop a contingency plan for ensuring access to the ISS until a Commercial Crew Program contractor is certified. NASA concurred with the recommendation but has not yet implemented it. Continued NASA attention to this issue is needed given the uncertainty associated with the final certification dates.

What GAO Recommends

GAO continues to believe that NASA should develop a contingency plan to ensure uninterrupted access to the ISS if delays persist beyond September 2020. NASA generally agreed with GAO's findings.
Background

Together, Executive Order 12898 and the 2011 MOU include eight areas that agencies’ environmental justice efforts should address, as appropriate, including promoting enforcement of all health and environmental statutes in areas with minority populations and low-income populations and ensuring public participation. Executive Order 12898 did not create new authorities or programs to carry out federal environmental justice efforts. As a result, federal environmental justice efforts seek to use existing federal laws, programs, and funding to address environmental and health problems that disproportionately burden minority and low-income communities, such as exposure to environmental pollutants. Such existing laws include the following:

Environmental laws. Several environmental laws regulate pollutants in the air, water, or soil and generally require a regulated facility to obtain permits from EPA or a state. For example, under the Clean Air Act, EPA, along with state and local government units and other entities, regulates air emissions of various substances that harm human health. These laws also authorize the issuance of administrative orders, among other things, to require cleanup of contamination.

NEPA. Under NEPA, federal agencies must evaluate the environmental impacts of their proposed major federal actions using an environmental assessment or a more detailed environmental impact statement, with some exceptions.

Civil Rights Act of 1964. Title VI of the Civil Rights Act of 1964, as amended, prohibits discrimination based on race, color, or national origin in programs or activities that receive federal financial assistance. To carry out and enforce the provisions of the act, federal agencies have developed programs to receive and investigate allegations of discriminatory actions taken by recipients of federal funding.
Working Group Agencies Reported Taking Some Environmental Justice Actions, with Limited Resources

Most working group member agencies reported planning and implementing some actions to identify and address environmental justice issues. Some examples of key activities include the following:

EPA mapping tool. In 2015, EPA released its Environmental Justice Mapping and Screening Tool (EJSCREEN), a web-based mapping tool that includes environmental and demographic data at a local level. Users can identify potential exposure to environmental pollutants and related health risks across different communities. Officials from the Department of Justice told us they regularly use EJSCREEN to help determine whether cases involve environmental justice issues.

Incorporating environmental justice in NEPA analyses. At least 13 agencies provided examples of efforts to consider environmental justice in their NEPA analyses. At the Department of the Interior (DOI), departmental policy requires all bureaus to include consideration of environmental justice in the NEPA process, and some bureaus have developed their own guidance for doing so. For example, DOI’s 2015 National Park Service NEPA Handbook requires that the agency’s environmental analyses discuss and evaluate the impact of proposals on minority and low-income populations and communities. The Department of Homeland Security also issued an agency-wide directive on NEPA implementation in 2014, and the accompanying 2014 NEPA instruction manual included public involvement requirements for populations with environmental justice issues.

Data initiative and reports on chemical exposure. At the Department of Health and Human Services (HHS), the Centers for Disease Control and Prevention (CDC) built a National Environmental Public Health Tracking Network, which brings together health and environmental data from national, state, and city sources.
The CDC also developed a National Report on Human Exposure to Environmental Chemicals—a series of reports that uses biomonitoring to assess the U.S. population’s exposure to environmental chemicals. As we reported in September 2019, for fiscal years 2015 through 2018, 11 of the 16 member agencies of the working group reported supporting environmental justice efforts through existing related program funding and staffing resources (i.e., resources not specifically dedicated to environmental justice, such as for civil rights or environmental programs). EPA and the Department of Energy (DOE) dedicated resources specifically for environmental justice efforts in their budgets. In fiscal year 2018, EPA provided about $6.7 million and DOE provided about $1.6 million.

Progress toward Environmental Justice Is Difficult to Gauge

Agencies’ progress in identifying and addressing environmental justice issues related to their missions is difficult to gauge because most of the agencies do not have updated strategic plans and have not reported annually on their progress or developed methods to assess progress.

Most Agencies Have Strategic Plans with Goals but Have Not Recently Updated Them

As we reported in September 2019, 14 of the 16 agencies issued environmental justice strategic plans after they signed the 2011 MOU agreeing to develop or update such plans. Of the 14 agencies that issued their plans, 12 established strategic goals in these plans. Six of the 14 agencies further updated their plans in 2016 or 2017, and another agency published updated priority areas on its website. The Department of Defense (DOD), which issued a plan in 1995, has not updated it since, and the Small Business Administration (SBA) has never issued a plan. DOD officials said that the agency has not prioritized environmental justice efforts.
SBA officials said the agency is uncertain whether it has a role in implementing environmental justice, and they were reviewing whether SBA should continue its membership in the working group. The 2011 MOU directs agencies to update their strategic plans periodically, and leading practices for strategic planning suggest that strategic plans should be updated every 4 years. We have previously reported that strategic planning serves as the starting point and foundation for defining what an agency seeks to accomplish, identifying the strategies it will use to achieve desired results, and then determining how well it succeeds in achieving goals and objectives. In our September 2019 report, we recommended that eight agencies update their environmental justice strategic plans. Four agencies agreed, three did not state if they agreed or disagreed, and one disagreed. Education stated that it does not believe this is the most appropriate course of action for the department or an efficient use of resources, but we continue to believe it should implement the recommendation.

Most Agencies Have Not Consistently Issued Progress Reports and Do Not Have Methods to Assess Progress

As we reported in September 2019, 12 of the 16 agencies developed environmental justice strategic plans with strategic goals, but most of the agencies have not shown clear progress toward achieving these goals and the purpose of the executive order. It is difficult to gauge the agencies’ progress for three primary reasons:

1. The agencies have not comprehensively assessed how environmental justice fits with their overall missions. Seven of the 14 agencies that developed environmental justice strategic plans assessed and discussed how their environmental justice efforts aligned with their overall missions after 2011. However, the other seven agencies did not clearly show how their efforts aligned with their missions.
We recommended that EPA, as chair of the working group, develop guidance for the agencies on what they should include in their environmental justice strategic plans. EPA agreed with this recommendation.

2. The agencies have not consistently issued annual progress reports. Fourteen agencies issued at least one progress report after 2011, but most have not issued such reports every year, as they agreed to do in the 2011 MOU. The departments of Homeland Security and Justice issued progress reports every year from 2012 through 2017. The General Services Administration issued progress reports every year through 2015 and then issued one progress report covering fiscal years 2016 through 2018. Several other agencies consistently reported in the first few years after 2011 but then stopped issuing reports. DOD and SBA have not issued any progress reports. We have found that annual program performance reports can provide essential information needed to assess federal agencies’ performance and hold agencies accountable for achieving results. We recommended that 11 agencies report on their progress annually. Five of the agencies agreed with this recommendation, one partially agreed, three did not state if they agreed or disagreed, and two said they did not agree. Education stated that it does not believe this is the most appropriate course of action for the department or an efficient use of resources, and DOD stated that it did not see a tangible benefit to additional reporting. We continue to believe that they should implement the recommendation.

3. Most agencies have not established methods for assessing progress toward goals. The agencies’ progress reports generally describe the environmental justice activities they conducted but do not include any methods to assess progress (e.g., performance measures).
For the 14 agencies that issued at least one progress report since 2011, we reviewed the most recent report and found that each report contained information on activities that agency undertook over the previous year. However, our analysis showed that most of the agencies had not established a method that would allow them to assess their progress toward their environmental justice goals, such as tracking performance measures or milestones. Of the 16 agencies that signed the 2011 MOU, four—the Departments of Agriculture (USDA) and Health and Human Services (HHS), DOI, and EPA—have established performance measures or milestones for their environmental justice efforts. Of these four, HHS and EPA have reported on their progress toward achieving their performance measures or milestones. The other 12 agencies have not established any performance measures or milestones. The executive order directs the working group to provide guidance to agencies in developing their environmental justice strategies. However, the working group has not provided such guidance on methods to assess and report on environmental justice progress, according to EPA officials. According to these officials, EPA is still pursuing its own agency-wide performance measures. We recommended that EPA, as chair of the working group, develop guidance or create a committee of the working group to develop guidance on methods the agencies could use to assess progress toward their environmental justice goals. EPA agreed with this recommendation.

Working Group Has Coordinated to Some Extent but Does Not Have a Strategic Approach or Full Participation

We found that the interagency working group has coordinated to some extent but does not have a strategic focus or full participation by all the federal agencies. Executive Order 12898 directed the working group to coordinate in seven functions, including to assist in coordinating data collection and examine existing data and studies on environmental justice.
In 2016, the working group released its Framework for Collaboration, which describes how it planned to provide guidance, leadership, and support to federal agencies in carrying out environmental justice efforts. The working group has collaborated to develop and issue guidance on several topics, participated in a variety of public meetings to provide information and opportunities for communities to discuss environmental justice issues, and coordinated ways in which the 16 member agencies and the Council on Environmental Quality (CEQ) could assist communities. For example, the working group created nine committees, including on Native American and Indigenous Peoples, Rural Communities, and Climate Change, based on the seven functions in the executive order and on public input, to help carry out its environmental justice responsibilities under the executive order. Officials from 13 member agencies agreed to either chair or become members of one or more committees. Through these committees, among other things, the working group has released a number of documents to help guide federal efforts:

A compendium on publicly available federal resources to assist communities impacted by goods movement activities, released in 2017.

Guidance to help federal agencies incorporate environmental justice during their NEPA reviews, issued in March 2016, and guidance to communities about NEPA methods, issued in March 2019.

A web page, which USDA compiled and launched in fiscal year 2017 with input and vetting from the Rural Communities committee, that provides links to community tools, funding opportunities, educational and training assistance, and case studies to support rural communities, according to USDA officials.
However, we found that the working group’s organizational documents— the 2011 MOU, the working group’s 2011 charter, and the 2016-2018 Framework for Collaboration—do not provide strategic goals with clear direction for the committees to carry out the functions as laid out in the executive order. In September 2012, based on a government-wide study, we reported that collaborative mechanisms such as working groups benefit from clear goals to establish organizational outcomes and accountability. We reported that participants may not have the same overall interests or may even have conflicting interests, but by establishing a goal based on common interests, a collaborative group can shape its own vision and define its purpose. The working group has developed some documents with agreed-upon goals, which is beneficial to collaboration, but none of these documents address all seven functions of the executive order. In our September 2019 report, we compared the functions of the executive order to documented working group roles and responsibilities and found that coordinated data collection and examination of research and studies on environmental justice are not included in these documents or committee purposes and have not been a focus of the interagency working group since at least 2011. EPA officials said some agencies, such as HHS and EPA, have done work in environmental justice data collection and research. EPA officials told us that the 2011 MOU, committee groups, and Framework for Collaboration reflect the current priorities of the working group, based on public input. The officials were unsure whether a coordinated effort in the data collection, research, and studies areas was needed, but they said such an effort could be useful. They said that the most useful role of the working group in research might be as a forum for sharing information and providing training opportunities. 
In our September 2019 report, we recommended that EPA, as chair of the working group and in consultation with the working group, should clearly establish in its organizational documents strategic goals for the federal government’s efforts to carry out the 1994 executive order. EPA disagreed with this recommendation because it believes that the recommendation should be combined with a different recommendation we made about updating the MOU. We believe that EPA misunderstood our recommendation and therefore did not combine it with our other recommendation. We also found that member agencies’ participation in working group activities has been mixed. In the 2011 MOU, the 16 signing agencies and CEQ agreed to participate as members of the working group, such as by chairing, co-chairing, or participating in committees. Eleven of the 16 agencies have not chaired or co-chaired one of the working group’s committees, and four have not participated in any. Our government-wide work has shown that it is important to ensure the relevant participants have been included in a collaborative effort. EPA officials said it is difficult to characterize what specific opportunities are missed because of an agency’s lack of representation. However, they said that nonparticipation limits the working group’s ability to fulfill its mandates in a strategic, methodical way across the entire federal government. EPA officials also said that the limiting factor in the working group’s efforts to address the executive order has always been the will of leadership across the federal government to make clear, measurable commitments to those priorities and ensure adequate resources. We recommended that EPA, as chair of the working group and in consultation with the other working group members, update the 2011 MOU and renew the agencies’ commitments to participate in the interagency collaborative effort and the working group. 
EPA disagreed and said this recommendation could be combined with the recommendation to provide strategic direction for the working group. We continue to believe that EPA needs to update the MOU to address the matter of participation by the members who signed it but do not participate. In conclusion, incorporating environmental justice into federal agencies’ policies, programs, and activities is a long-term and wide-ranging effort. Federal agencies, led by EPA, have made some headway in developing tools and coordinated policies and have identified others that they need to pursue. Strategic planning and reporting, with meaningful measures, and collaboration across all agencies can help them make and track progress. Chairman Tonko, Ranking Member Shimkus, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to respond to any questions that you may have at this time.

GAO Contacts and Staff Acknowledgments

If you or your staff have any questions about this testimony, please contact Alfredo Gómez, Director, Natural Resources and Environment, at (202) 512-3841 or gomezj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. In addition to the contact named above, Susan Iott (Assistant Director), Allen Chan (Analyst in Charge), and Elise Vaughan Winfrey made key contributions to the testimony. Other staff who made contributions to this testimony or the report cited in the testimony were Peter Beck, Tara Congdon, Hannah Dodd, Juan Garay, Cindy Gilbert, Rich Johnson, Matthew Levie, Ben Licht, Cynthia Norris, Amber Sinclair, and Kiki Theodoropoulos. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO.
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study
Environmental justice seeks to address the disproportionately high distribution of health and environmental risks among low-income and minority communities by seeking their fair treatment and meaningful involvement in environmental policy. In 1994, Executive Order 12898 directed 11 federal agencies to incorporate environmental justice into their programs, policies, and activities. The executive order also directed the agencies to each establish an environmental justice strategy and created a working group of federal agencies, chaired by EPA, to coordinate federal environmental justice efforts. In 2011, these 11 agencies and five additional federal agencies signed a MOU agreeing to participate in federal efforts in this area as members of the Interagency Working Group on Environmental Justice and to issue annual progress reports on their efforts. This statement summarizes GAO's findings from its September 2019 report on federal environmental justice efforts (GAO-19-543). Specifically, it focuses on (1) actions the working group agencies have taken to address environmental justice issues related to their missions, (2) the agencies' progress in identifying and addressing environmental justice issues related to their missions, and (3) interagency working group efforts to help agencies coordinate federal environmental justice efforts under the executive order. To perform this work, GAO reviewed agency environmental justice plans, reports, and funding data; interviewed agency officials; and compared working group collaboration to leading collaborative practices.

What GAO Found
As GAO reported in September 2019, most of the 16 member agencies of the Interagency Working Group on Environmental Justice reported planning and implementing some actions to identify and address environmental justice issues, such as creating data tools, developing policies or guidance, and building community capacity through small grants and training. 
For example, the Environmental Protection Agency (EPA) created a mapping tool that can help identify low-income and minority communities exposed to health or environmental risks. Most of the agencies supported their efforts with funds and staff from related programs, but EPA and the Department of Energy provided funds (totaling $8.3 million in fiscal year 2018) and staff specifically for environmental justice. Agencies’ progress in identifying and addressing environmental justice issues related to their missions is difficult to gauge. Most of the agencies do not have updated strategic plans and have not reported annually on their progress or developed methods to assess progress, as they agreed to do by signing a 2011 memorandum of understanding (MOU). Of the 16 agencies that signed the MOU, 14 have issued strategic plans. However, although the MOU directs the agencies to update their strategic plans periodically, only six of these 14 agencies have done so since 2011. Furthermore, most of these 14 agencies have not consistently issued annual progress reports. In September 2019, GAO recommended that nine agencies develop or update their strategic plans and that 11 develop annual progress reports. Eight agencies agreed and one partially agreed, one agreed with one recommendation but disagreed with another, one disagreed, and three did not state if they agreed or disagreed. GAO also found that while four agencies, including EPA, have established performance measures or milestones for assessing progress toward goals, the other 12 have not done so. Agency officials said guidance from the working group on how to do so would be helpful. The 1994 executive order directs the working group to provide guidance to agencies in developing their environmental justice strategies, but the group has not provided specific guidance on what agencies should include in their strategic plans or on methods to assess and report on environmental justice progress. 
In September 2019, GAO recommended EPA develop such guidance or create a working group committee to do so, and EPA agreed. The interagency working group has coordinated to some extent but does not have a strategic approach, and member agencies are not fully participating. Specifically, the group's organizational documents do not provide strategic goals with clear direction for the committees. Furthermore, 11 of the 16 signatory agencies have not chaired or co-chaired one of the committees, and four have not participated in any. In September 2019, GAO recommended EPA update the 2011 MOU and clearly establish strategic goals for federal efforts to carry out the executive order. EPA disagreed, but GAO continues to believe these actions are necessary.

What GAO Recommends
GAO made 24 recommendations in its September 2019 report, including that agencies update their environmental justice strategic plans and report on their progress annually. GAO recommended that EPA, as chair of the working group, develop guidance on assessing progress and what agencies should include in their strategic plans; coordinate with working group members to develop strategic goals for the group; and update the group's memorandum of understanding. Of the 15 agencies with recommendations, eight agreed. Other agencies partially agreed, disagreed, or had no comment. GAO continues to support its recommendations.
Background
In response to the 2014 access crisis, VA launched the MyVA initiative, which was designed to transform the health care experience of veterans. In concert with the MyVA initiative, VA introduced the MyVA Access Declarations in April 2016 with the goal of improving access by providing veterans more control as to how they receive their health care. The MyVA Access Declarations was a list of nine "access declarations" that were intended to serve as the foundational principles for improving and ensuring access to care. Two of these "access declarations" required providing timely primary and mental health care and included same-day services.

VHA Policies on Same-Day Services
VHA had policies in place for same-day services in primary and mental health care clinics for several years prior to the same-day-services initiative. In primary care, the 2014 Patient-Aligned Care Team (PACT) handbook required all primary care providers and registered nurses to ensure they provide same-day access (unless it is too late in the day as determined by the individual facility) for face-to-face encounters, telephone encounters and, when required by VHA guidance or policy, other types of encounters. The PACT handbook was supplemented by a 2015 VHA memo on unscheduled patient walk-ins. The memo states that if an unscheduled patient presents at a PACT clinic with a clinical concern, the patient cannot be turned away without evaluation by a clinical member of the team, regardless of clinic hours, resource availability, or eligibility/enrollment status. VHA also had previously developed policies stating that veterans are entitled to timely access to mental health care. Specifically, a 2007 VHA memo required that all veterans requesting or referred for mental health care or substance abuse treatment receive an initial evaluation within 24 hours. 
VHA’s 2015 Uniform Mental Health Services handbook also noted that all new patients requesting or referred for mental health care services must receive an initial evaluation within 24 hours and a more comprehensive diagnostic and treatment planning evaluation within 30 days. Additionally, since 2008, VHA has required the integration of primary care and certain mental health care services at VA medical centers serving a veteran population greater than 5,000. This care model, known as Primary Care–Mental Health Integration (PC-MHI), integrates mental health staff into each primary care PACT clinic, allowing veterans to receive services for depression, anxiety, post-traumatic stress disorder, and substance use without needing to obtain a separate referral to providers in the mental health care clinic. According to VHA guidance, PC-MHI has been shown to improve access to same-day mental health care and reduce no-show rates to appointments.

Oversight of VHA Access to Care Efforts
VHA’s veterans access to care office was created in 2016 as the national oversight office for VHA access-to-care issues. Additionally, each VISN is responsible for overseeing the VA medical centers within its designated region. This oversight includes oversight of access issues and the implementation of initiatives such as the same-day service initiative. VA medical center directors are responsible for ensuring local policies are in place for the effective operation of their primary and mental health care clinics, including affiliated CBOCs. 
VHA Used a Five-Pronged Approach to Design Its Same-Day Service Initiative; Selected VA Medical Centers Relied on Previous Approaches to Implement It

VHA Used a Five-Pronged Approach to Design and Set Up the Same-Day-Services Initiative
VHA used a five-pronged approach to design its same-day services initiative: VHA (1) defined same-day services, (2) developed guidance, (3) updated its mental health policies, (4) offered training, and (5) assessed VA medical center readiness to implement the initiative. VHA defined same-day services. As an initial step, VHA leadership developed the following definitions of same-day services in primary and mental health care: Same-day services in primary care: “When a veteran requires primary care services right away, during regular business hours, he or she will receive services the same day at a VA medical center. If a veteran calls after normal business hours, he or she will receive care the next business day.” Same-day services in mental health: “If a veteran is in crisis or has another need for mental health care right away, he or she will receive immediate attention from a health care professional at a VA medical center.” VHA also identified a variety of ways in which veterans can receive same-day services, including: (1) providing a face-to-face visit; (2) returning a phone call; (3) arranging a telehealth or video care visit; (4) responding to a secure email; or (5) scheduling a future appointment. VHA developed guidance for the same-day service initiative. To help VA medical centers implement its definition of same-day services, in April 2016, VHA developed written guidance—the MyVA Access Implementation Guidebook. The guidebook provides a variety of solutions to help VA medical centers meet the intent of the same-day service initiative. 
The guidebook includes specific solutions for VA medical centers struggling to provide same-day services in primary or mental health care for veterans with urgent care needs: Implementing open access in primary and mental health care: Open access aims to balance the supply of (for example, available appointments) and demand for (for example, the number of patients assigned to a provider and annual visits per patient) services to increase patient access. Achieving open access requires implementing specific strategies including achieving full staffing, planning for contingencies such as clinical staff absences or vacancies and managing the number of times patients see a provider each year, among other strategies. Implementing primary care-mental health integration: In order to complete the implementation of PC-MHI across the VA system, the guidebook suggests facilities address staffing vacancies, develop a PC-MHI implementation plan, and choose an open access scheduling model (for example, full open access where there are no scheduled appointments and patients are seen on a first come, first served basis), among other things. Utilizing same-day referrals to mental health for suicide prevention: This solution reiterates many of the mental health policy changes that VHA introduced in conjunction with the same-day service initiative such as implementing an initial screening evaluation, developing a process for same-day care for established patients with an urgent need, and deploying open access scheduling, among other things. The guidebook states that all of the solutions were chosen because they were used successfully at other VA medical centers; can be quickly implemented; and have a high impact on veterans’ access to care. The guidebook also notes that flexibility is a key element when choosing solutions and explains that VA medical centers should select and modify solutions as needed. 
The guidebook does not make any of the solutions mandatory; however, several of the mental health solutions were introduced to facilities through separate VHA memos and are required. VHA updated mental health policies. VHA updated certain mental health policies to facilitate the implementation of the same-day services initiative. Specifically, in April 2016 VA issued a memo updating its mental health policy to require that any veteran new to mental health services requesting or referred for care in person be seen the same day by a licensed independent provider to screen for and address immediate care needs. This was a change from the previous timeframe of 24 hours for an initial evaluation. The memo also created new processes for VA medical centers to assess same-day services in mental health care clinics, including a medical chart review and a one-time review of standard operating procedures to ensure that the new guidelines are being followed. VHA also distributed other memos that either sought to clarify existing guidance or expand same-day services into other areas of mental health care, such as substance abuse. Additionally, VHA provided a memo to VA network directors and mental health leads about scheduling models for mental health care that all VA medical centers needed to implement for the same-day service initiative. VHA provided training on the same-day-services initiative. VHA provided voluntary training for same-day services, some of which discussed the solutions from the guidebook and the updated mental health policy. The trainings began in February 2016 for primary care and in May 2016 for mental health. The trainings consisted of national telephone calls (often with slide presentations) that any VA medical center staff member could join, and the presentation materials were posted to VHA’s internal website. The telephone trainings generally occurred twice a month in primary care and every week in mental health care. 
VHA assessed VA medical center same-day service readiness. Beginning in January 2017, VHA provided technical assistance around same-day services to VA medical centers. VHA reviewed several aspects of same-day services, including how VA medical centers were able to provide same-day services and identified any approaches that may have needed improvement. Generally, low-performing VA medical centers received continuous on-site support; moderate-performing VA medical centers received a combination of virtual and on-site support; and high-performing VA medical centers primarily received virtual support. To determine the progress that VA medical centers were making in providing same-day services, VHA conducted surveys that required medical center directors to self-certify—and, in some cases, VISN directors to validate—that their VA medical centers (including affiliated CBOCs) were able to provide same-day services. In the event that a VISN director could not validate medical center survey information, VHA followed up with the medical center and VISN director to create an action plan to mitigate any issues that were delaying validation. These surveys were conducted in 2016 and 2017; focused on either primary care, mental health care, or both; and varied in the information collected to determine how VA medical centers were providing same-day services (See Table 2 for information on the same-day-services readiness assessment surveys used by VHA). According to VHA, all VA medical centers were offering same-day services in primary and mental health care by December 2016. In January 2018, VHA announced that same-day services in primary and mental health care had been achieved in all VA medical centers and CBOCs (more than 1,000 facilities). 
Selected VA Medical Centers Generally Relied on Previous Approaches to Implement the Same-Day-Services Initiative
Officials we spoke with from all six VA medical centers in our review told us they were providing same-day services in primary and mental health care prior to the same-day service initiative, an assertion supported by VHA survey data. For example, in a VHA survey conducted in May 2016, around the same time as the launch of the same-day service initiative, 142 out of 165 officials (86 percent) that responded to the survey said that their medical centers offered same-day appointments “always” or “very frequently” in primary care for urgent concerns. We found that the VA medical centers in our review used a variety of approaches in providing same-day services in primary and mental health care, most of which were in existence before the initiative. As noted earlier, VHA did not require the implementation of any specific solutions in the guidebook and afforded VA medical centers the flexibility to choose appropriate local solutions for the same-day service initiative. Many VA medical centers used this flexibility to continue providing same-day services as they had prior to the initiative often because that is what their resources allowed them to do or, in the case of mental health, because it was built into the foundation of their service line. VHA officials noted that mental health services—particularly PC-MHI—were built around same-day services so VHA’s guidance was familiar to them. The approaches used by the selected VA medical centers included using “float providers” who had not already been assigned specific patients to assist those who requested same-day services; carving out specific appointment times in the schedule for walk-ins; overbooking appointments in providers’ schedules; and offering walk-in clinics. 
VHA suggested that certain solutions should be prioritized if VA medical centers were struggling to provide same-day services and, in particular for mental health, created new requirements around same-day services. However, officials at selected VA medical centers noted that some of the suggested solutions in the guidebook—particularly open access—and requirements in updated mental health policies were difficult to implement because of longstanding challenges with staffing, space, or competing VHA policies. For example, VHA’s guidebook suggests the implementation of open access in primary and mental health care in such situations. However, officials at four of the six VA medical centers we visited noted that open access was difficult to implement because of the long-standing challenges mentioned above. In addition, VHA updated its mental health policy to include that any veteran new to mental health services requesting or referred for care in person be seen the same day by a licensed independent provider to screen for and address immediate care needs. However, one medical center official noted that they had designed their mental health clinic processes around registered nurses, who are responsible for completing the initial assessments of new patients. The official added that the medical center did not have licensed independent providers readily available at certain facilities to help complete the assessments in a timely manner. Officials at all six medical centers we visited noted that implementation was also sometimes challenging as veterans’ expectations shifted with the same-day-services initiative, with veterans expecting more immediate access to care from physicians for a variety of conditions. For example, one medical center official noted that veterans are presenting for care and wanting to see a provider because it is these veterans’ understanding that they could get care immediately for any condition, including chronic, less urgent issues. 
Additional officials at the same facility echoed this concern and noted that they are not certain that this was the policy’s intent. Another medical center official noted that several medical center officials asked VHA to change the name “same-day service” because it gives the impression that veterans would always be able to see their provider immediately. This official added that there is some confusion for both staff and veterans about what same-day services are. Additionally, according to one veterans service organization official we spoke with, a small number of veterans reported that the availability of same-day services varied by facility (VA medical center versus CBOC) and location (urban versus rural). Another medical center official noted that same-day services are not sustainable if the definition is immediate care by a provider for any condition, especially non-urgent issues. VHA officials told us that the same-day service initiative was a response to the 2014 access crisis and they wanted facilities to use the resources available to them rather than waiting on new policies and strategies. They stated that their main concern was that veterans’ needs were met, not necessarily how they were met. As such, VHA officials told us that they found VA medical centers’ implementation of same-day services acceptable. The VHA officials added that the guidebook is still the foundational document for same-day services. VHA officials told us that it is important for VA medical centers to educate patients on the appropriate use of same-day services. They added that in fiscal year 2019 they are (1) developing a more precise definition of same-day services; (2) developing a website to better explain the purpose of the initiative; and (3) requiring on-demand trainings to provide a clearer explanation about what same-day services are available and what staff roles and responsibilities are, among other things. 
The training is expected to be completed no later than the first quarter of fiscal year 2020.

VHA Has Not Documented Objectives or Developed Performance Goals and Related Measures to Assess the Impact of Same-Day Services on Veterans’ Access to Care
VHA is limited in its efforts to assess the impact of same-day services because it has not documented objectives or developed performance goals and related performance measures. Our previous work has shown the benefit of fully connected objectives and performance goals with measurable targets. Objectives state the longer term desired impact or outcome to be achieved, while performance goals communicate the target the agency seeks to achieve within a certain timeframe. Performance measures are indicators of the progress the agency is making towards a goal or target within a particular time frame. VHA officials told us that the overall objectives of same-day services are to improve veterans’ access to care and customer service while having minimal impact on medical centers’ existing workflows. However, VHA has not documented these objectives—for example, in a directive. In addition, VHA has not developed and documented performance goals that, with associated performance measures, would facilitate monitoring of progress towards the desired outcome of the same-day services initiative. VHA officials stated that the same-day-services initiative was developed quickly in response to the 2014 access crisis, and noted that at the time, their focus was “to get something out quickly” instead of taking time to standardize the initiative around specific policies and procedures, which could include documenting objectives and developing performance goals. VHA officials acknowledged that their decision to focus on quickly implementing the initiative without documenting objectives and developing performance goals and associated performance measures makes assessing the impact of the same-day services initiative more challenging. 
VHA has taken some steps to collect data on same-day services. For example, VHA officials stated that they primarily rely on two measures to assess the impact of the same-day services initiative: patient experience scores and the number of same-day appointments. However, without performance goals these measures do not provide VHA with a means to monitor progress and provide limited information on same-day services’ impact. Patient experience score: VHA uses the Survey of Healthcare Experiences for Patients (SHEP) to measure veterans’ perceptions of their experience at VA medical centers. For same-day services, VHA monitors responses to two questions. According to VHA officials, the key measure is based on the survey question that asks “in the last 6 months, when you contacted this provider’s office to get an appointment for care you needed right away, how often did you get an appointment as soon as you needed?” While SHEP scores provide some data related to customer service and access to care, VHA has not developed performance goals that set targets for these or other aspects of the same-day services initiative that would benefit from monitoring. Such goals would better enable VHA to identify gaps in performance and plan any needed improvements; ensure balance between agency priorities, such as customer service and access; and identify unintended effects, such as disruption to clinic workflows. For example, officials at one medical center told us that focusing on customer service creates issues with respect to routine care in that veterans’ definition of customer service is based on what makes them happy, while providers are focused on providing the best treatment. Officials added that these two definitions do not always align. In addition, officials at another medical center stated that implementing same-day services impacted their providers’ schedules and the resulting changes to their processes created chaos. 
Number of same-day appointments: VHA measures the number of same-day appointments, which, according to a VHA official, are identified in VHA data as appointments completed on the same day they are created in VHA’s scheduling system. According to a VHA training document, VA completed 12 million same-day appointments in fiscal year 2018. However, without performance goals with clear targets for same-day appointments, an official from one VISN said she was unclear how many same-day appointments medical centers should be scheduling. Additionally, same-day services performance goals may afford VHA the opportunity to monitor other key measures—such as those that capture services that do not require an appointment—which could provide VHA with important information on the impact of same-day services on access to care. Moreover, performance goals and additional performance measures may help prevent unintended consequences, such as an over-emphasis on same-day appointments as the way to provide same-day services, which VHA officials stated they are working to curb. For example, officials at two selected medical centers also noted that measuring the number or proportion of same-day appointments does not capture all the ways medical centers provide same-day services. Officials at two other selected medical centers noted they can meet veterans’ same-day needs through multiple avenues, such as a registered nurse providing patient education or by renewing a prescription, that do not require an appointment and therefore, would not be counted in the number of same-day appointments. VHA officials stated that the impact of the same-day services on access to care is difficult to measure and additional measures would help properly measure the impact. VHA’s lack of documented objectives and developed performance goals and related measures is inconsistent with our prior work on effective management practices and federal internal control standards. 
Specifically, we have previously reported that performance measures benefit from certain key practices, such as breaking down of broad long-term objectives into specific near-term performance goals with measurable targets and time frames, and key attributes, such as balance to prevent skewed incentives over-emphasizing certain goals. Additionally, Standards for Internal Control in the Federal Government states that documentation provides a means to retain organizational knowledge and mitigate the risk of having that knowledge limited to a few personnel. Without clearly documented objectives, performance goals, and related performance measures, VHA is hindered in its efforts to define success for its same-day service initiative and measure progress achieving it. VHA officials stated they rely on VISN and VA medical center officials to oversee same-day services; however, we found that without performance goals and related performance measures, VISNs and VA medical centers found it challenging to oversee the same-day services initiative. Specifically, officials at five of the six medical centers and two of the four VISNs we visited stated that it is difficult to measure same-day services, which in turn makes assessing the initiative’s impact on veterans’ access to care difficult. Officials at one medical center explained that the challenge stems from the fact that VHA has not defined what outcome it wants to achieve. In addition, officials at another VA medical center stated that they have a number of access measures available to them, but it was unclear to them which measures they should be prioritizing as part of their oversight of the same-day services initiative. Further, absent performance goals, we found that VISNs and medical centers, which operate in a decentralized environment, varied in their oversight strategies. 
For example, one VISN required all medical centers to complete a self-assessment of their access capacity and sustainability, and collected information on a number of key open access elements, including Patient-Aligned Care Team staffing levels and provider panel sizes, among others. However, oversight by other VISNs was reportedly less robust. For example, at one VISN, officials stated it is difficult to audit access broadly and described their oversight of same-day services as “fairly minimal.” At the medical center level, oversight also varied as officials tried to develop their own oversight solutions. Officials at one medical center we visited used a feature within the outpatient appointment scheduling system that allowed them to count the specific services, such as pharmacy refills, that veterans seeking same-day mental health care had requested. According to these officials, the tool provided additional data not found in existing VHA access-related reports and allowed them to better understand veterans’ demand for specific same-day services and utilize resources more efficiently. These officials added that they developed this solution because they had not received guidance from VHA on how they should measure demand, and they had skilled staff with the ability to develop their own measures. However, not all VA medical centers we visited had the skilled staff to develop similar solutions. Developing performance goals and related performance measures would better position VHA to obtain useful, comparable information on the impact of same-day services on access to care across VISNs and medical centers. Moving forward, VHA is planning to conduct a “mystery shopper” evaluation of same-day services to assess the impact of same-day services. The mystery shopper evaluation will consist of various scenarios in which veterans, engaged through a contractor, will attempt to access same-day care at a variety of clinics in VA medical centers. 
As described in a VHA planning document, the evaluation is intended to provide VHA with information on veterans’ experience in obtaining same-day services and will attempt to understand variations in how same-day services are provided. However, VHA officials have not determined if the evaluation will be ongoing. VHA officials stated that in addition to the mystery shopper evaluation, they are considering additional measures to better assess the impact of same-day services beyond their current measures, such as the number of pharmacy refills completed the same day they were requested. However, as of May 2019, VHA had not developed specific performance goals to align these measures to, or set timeframes for their creation. Without overall performance measures that are tied to documented performance goals, VHA will continue to be limited in its ability to assess the impact of same-day services on veterans’ access to care.

Conclusions
VHA’s same-day services initiative for primary and mental health care is one of several efforts by VHA to help improve veterans’ access to care in the 5 years since access issues garnered national attention. VHA’s stated objectives for the same-day-services initiative are to improve veterans’ access to care and customer service while having minimal impact on medical centers’ existing workflows. However, VHA has not documented these objectives or developed performance goals and related measures that provide for monitoring towards the desired outcomes. VHA primarily relies on veteran satisfaction scores and the number of same-day appointments to monitor the same-day-services initiative, but these measures alone do not enable an assessment of the impact of same-day services on access to care. Without documented objectives, and performance goals and related measures tied to these goals, VHA will continue to be limited in its ability to determine how, if at all, the same-day-services initiative has improved veterans’ access to care. 
Recommendation for Executive Action The Under Secretary for Health should document same-day services objectives and develop performance goals and related performance measures to facilitate the periodic assessment of the impact of same-day services on veterans’ access to care. (Recommendation 1) Agency Comments We provided a draft of this report to VA for review and comment. In its written comments, which are reproduced in appendix I, VA concurred in principle with our recommendation. VA stated that its Office of Veterans Access to Care will clarify objectives, develop performance goals, and explore the options for reliable performance measures. VA noted that identifying options for performance measures will take approximately 9 months and that additional time may be needed for development, testing and refinement. VA provided a target completion date of April 2020. We are sending copies of this report to the appropriate congressional committee and the Secretary of Veterans Affairs. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or at DraperD@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. Appendix I: Comments from the Department of Veterans Affairs Appendix II: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Ann Tynan (Assistant Director), Dan Klabunde (Analyst-in-Charge), Jennie F. Apter, and Q. Akbar Husain made key contributions to this report. Also contributing were Muriel Brown, Jacquelyn Hamilton, Ethiene Salgado-Rodriguez, and Merrile Sing.
Why GAO Did This Study In 2014, a series of congressional testimonies highlighted problems with veterans' access to care after significant appointment wait times at VA medical centers reportedly resulted in harm to veterans. In response, VHA implemented several initiatives, including same-day services at its medical centers and outpatient clinics. GAO was asked to review the same-day services initiative and VHA's related oversight activities. This report (1) describes how VHA designed and how selected medical centers implemented the same-day services initiative; and (2) examines VHA's efforts to assess the impact of the same-day services initiative on veterans' access to care. GAO reviewed VHA documents, including policies, guidance, and requirements related to same-day services and interviewed VHA officials regarding implementation and oversight. GAO visited six VA medical centers selected for the complexity of services offered, range of wait times, and geographic variation, among other factors. GAO interviewed officials from (1) the six VA medical centers and affiliated outpatient clinics, (2) VHA's networks with oversight responsibility, and (3) two veterans service organizations. What GAO Found The Department of Veterans Affairs' (VA) Veterans Health Administration (VHA) introduced its same-day services initiative in primary and mental health care in April 2016, and used a five-pronged approach for its design: it defined same-day services, developed guidance, updated its mental health policies, offered training, and assessed VA medical center readiness to implement the initiative. Officials from all six VA medical centers GAO visited said they already were providing same-day services prior to the initiative and generally relied on previous approaches to implement VHA's same-day-services initiative. 
However, these officials told GAO that some of VHA's guidance and updated policies were difficult to implement due to long-standing challenges of staffing and space constraints, among others. For example, one medical center official stated that the medical center did not have the appropriate providers readily available to complete the initial mental health assessments of new patients in a timely manner—a new requirement under VHA's updated policies. VHA officials stated that the objectives of the same-day services initiative are to improve veterans' access to care and customer service. However, VHA has not documented these objectives in a directive or developed and documented performance goals that, with associated performance measures, would monitor progress. Although VHA does monitor patient experience scores and the number of same-day appointments, these measures are not tied to specific performance goals. For example, VHA has not specified targets for the number of same-day appointments medical centers should provide. Furthermore, monitoring the number of same-day appointments does not capture all of the ways VA medical centers provide same-day services, such as renewing prescriptions. VHA officials acknowledged the initiative was quickly developed in response to the 2014 access crisis, and developing new policies or processes, which could include documenting objectives and developing performance goals, was not the priority. Without performance goals and related measures, VHA will continue to be limited in its ability to determine how, if at all, the same-day services initiative has improved veterans' access to care. What GAO Recommends GAO recommends that VA document objectives and develop performance goals and related performance measures to facilitate the periodic assessment of the impact of same-day services on veterans' access to care. VA agreed with GAO's recommendation.
gao_GAO-19-598
DOD’s Efforts to Implement Section 911 Requirements Have Largely Stalled, and Funding Delays Have Slowed DOD’s Newest Cross-Functional Team DOD Has Continued to Delay Full Implementation of Section 911 Requirements DOD is up to 21 months late in fully addressing five remaining requirements of section 911 related to DOD’s organizational strategy and cross-functional teams, as shown in figure 1 and discussed below. Specifically, DOD has not fully addressed the following statutory requirements: 1. Issue an organizational strategy: DOD has not issued its organizational strategy, which as of June 2019 is 21 months past the statutorily required issuance date of September 1, 2017. In January 2019, we reported that OCMO officials had revised the draft organizational strategy, incorporating, among other things, the criteria that distinguish cross-functional teams established under section 911 from other types of cross-functional working groups, committees, integrated product teams, and task forces, as required by section 918(b) of the John S. McCain NDAA for Fiscal Year 2019. The revised draft of the organizational strategy also includes steps DOD plans to take to advance a collaborative culture. As we reported in our June 2018 report, these steps, as outlined in the draft strategy, align with our leading practices for mergers and organizational transformations, which we recommended that DOD incorporate into its strategy. Based on our review of OCMO’s current draft of the organizational strategy, we found that it addresses all required elements laid out in section 911 of the NDAA for Fiscal Year 2017. That January 2019 draft strategy, according to an official from OCMO’s Administration and Organizational Policy Directorate, was provided to OCMO leadership for review as early as August 2018, but has not been approved. 
A senior OCMO official stated that approval of the draft was delayed to ensure it aligned with the National Defense Strategy, issued in January 2018, and the National Defense Business Operations Plan, issued in May 2018, and to incorporate additional requirements of the John S. McCain NDAA for Fiscal Year 2019, which was enacted in August 2018. In addition, according to senior OCMO and Office of the Deputy Secretary of Defense officials, the Acting CMO and the Deputy Secretary of Defense informally discussed the draft organizational strategy, but those conversations did not lead to the Acting CMO formally approving the draft for department-wide coordination. In May 2019, a senior OCMO official told us that the Acting CMO was fully committed to completing department-wide coordination of the draft strategy in June 2019 and advancing it for issuance by the Secretary of Defense in July 2019. After providing a draft of this report to the department for comment, we learned that the organizational strategy was circulated for department-wide coordination on July 12, 2019, with components expected to provide input by August 5, 2019. 2. Issue guidance for cross-functional teams: DOD has not issued guidance for cross-functional teams, which, as of June 2019, is 20 months past the required date of September 30, 2017. In June 2018, we reported that OCMO officials had revised the draft guidance to fully address all section 911 requirements and incorporate leading practices for effective cross-functional teams in the guidance, consistent with our February 2018 recommendation. Based on our review of this draft, we found that it addresses all required elements from section 911 of the NDAA for Fiscal Year 2017, as well as all of the leading practices for effective cross-functional teams. 
That draft guidance, according to an official from OCMO’s Administration and Organizational Policy Directorate, was provided to OCMO leadership for review as early as August 2018, but has not been approved by the CMO. 3. Provide training to cross-functional team members and their supervisors: OCMO officials have provided some of the required training to members and leaders of a recently established cross-functional team described later in this report. The training included several required elements, including information on the characteristics of successful cross-functional teams, conflict resolution, and how to appropriately represent the views and expertise of functional components. However, OCMO officials have not provided training to supervisors in team members’ functional organizations as required. We reported in February 2018 that DOD had developed a draft curriculum for this training that addressed the section 911 requirements. An OCMO official told us the curriculum has not been altered since then, but that the department has still not provided the training to team members’ supervisors because the curriculum has not been approved by the Acting CMO or the Secretary of Defense. Such approval, though not required by statute, would demonstrate senior leadership support for cross-functional teams, a leading practice we have identified. Further, according to an OCMO official, department-wide coordination and approval would serve to strengthen the effectiveness of the training. However, the need for this training is evident. For example, when we observed one of the training sessions, a member of a cross-functional team stated that he did not believe his supervisors knew what cross-functional teams were. 4. Provide training to presidential appointees: OCMO has not provided the required training to individuals filling presidentially appointed, Senate-confirmed positions in the Office of the Secretary of Defense. 
Section 911 requires these individuals to complete the training within 3 months of their appointment or DOD to request waivers. As of June 2019, 24 of 36 such officials had been appointed and in their positions for more than 3 months, and, according to an OCMO official, none had received their training or been granted a training waiver. An OCMO official told us in October 2018 he had revised the draft training curriculum following our February 2018 report to include all the required elements in section 911. However, as of May 2019, OCMO officials had not provided a copy of the revised curriculum for our review. After the curriculum is approved, the officials stated that they plan to recommend to the Secretary of Defense that all presidential appointees in the Office of the Secretary of Defense receive the training, rather than request waivers. 5. Report on successes and failures of cross-functional teams: OCMO has not completed an analysis of the successes and failures of DOD’s cross-functional teams, which, as of June 2019, is 3 months past its required completion date. Section 911 requires that an analysis of the success and failures of the teams and how to apply lessons learned from that analysis is completed 18 months after the establishment of the first cross-functional team. With the establishment of the first cross-functional team on personnel vetting in August 2017, the required completion date for the report was February 25, 2019. An OCMO official stated that OCMO planned to conduct an analysis on the personnel vetting team, but had not yet begun and had not set a time frame for doing so. DOD has not addressed most of these remaining requirements of section 911 because, according to an OCMO official, the Acting CMO has not approved the draft documents prepared by OCMO staff to satisfy the requirements. Moreover, the Acting CMO has not coordinated most of the documents department-wide and provided them to the Secretary of Defense for review and issuance. 
Specifically, according to an OCMO official, the Acting CMO has not reviewed or approved the guidance on cross-functional teams or curricula for cross-functional team members, their supervisors, and presidential appointees. These delays occurred in part because the department has not established and communicated internal deadlines for reviewing, coordinating, and approving these documents. According to OCMO officials, the primary reason they have not met these other outstanding requirements, including the guidance and training for cross-functional teams, is that they would like to have the organizational strategy approved and issued first, so that it can be reflected in the accompanying materials. However, while the OCMO has set an internal time frame for the organizational strategy, it has not set similar time frames for completing the remaining requirements. Standards for Internal Control in the Federal Government emphasize the need to establish time frames to implement actions effectively. In addition, as we reported in June 2018, establishing time frames with key milestones and deliverables to track implementation progress is important for agency reform efforts. By not setting and following clear internal deadlines for meeting the outstanding section 911 requirements, DOD has continued to fall short in meeting statutory requirements and missed opportunities to effectively implement its cross-functional teams and advance a collaborative culture that could bolster broader efforts within the department, such as reforming its business operations. DOD Established a Cross-Functional Team on Electromagnetic Spectrum Operations, but the Team’s Efforts Have Been Slowed by Delayed Funding Decisions Sections 918 and 1053(c) of the John S. 
McCain NDAA for Fiscal Year 2019 required the Secretary of Defense to establish a cross-functional team pursuant to section 911 of the NDAA for Fiscal Year 2017 on electronic warfare to identify gaps in electronic warfare and joint electromagnetic spectrum operations, capabilities, and capacities within the department across personnel, procedural, and equipment areas. In addition, section 1053(d) of the act required the electronic warfare cross-functional team to, among other things, (1) update the department’s Electronic Warfare Strategy in coordination with the Electronic Warfare Executive Committee by February 9, 2019, and (2) provide assessments of the electronic warfare capabilities of the Russian Federation and the People’s Republic of China in consultation with the Director of the Defense Intelligence Agency by May 10, 2019. Section 918 of the John S. McCain NDAA for Fiscal Year 2019 required the team’s establishment by November 11, 2018; however, DOD did not establish an electromagnetic spectrum operations cross-functional team until February 2019, and the team did not begin its work until April 2019. An official from the team told us that the standup of the team was delayed due to the extensive department-wide review of the February 2019 memorandum that established the team. Because of the delayed establishment of the team, DOD officials estimated that the required update to DOD’s Electronic Warfare Strategy would be completed by the end of September 2019—7 months after the statutory deadline—and that the required assessments would be provided by fall 2019. According to the team’s establishment memorandum, the team will continue its work until at least fiscal year 2022. 
In addition to the requirements discussed above, section 911 of the NDAA for Fiscal Year 2017 includes specific requirements for cross-functional teams established under that section, including that each team’s objectives be clearly established in writing and that the team should establish a strategy to achieve those objectives. We found that DOD and the electromagnetic spectrum operations cross-functional team have addressed 10 of 11 of those requirements for cross-functional teams. We also found that the team demonstrates several of the leading practices for cross-functional teams. For example, we found that the team has a well-defined team structure and well-defined team goals. However, as previously discussed, DOD has not fully addressed the section 911 requirement for training for cross-functional team members’ supervisors. We were also told by team officials that DOD was delayed in providing administrative support and funding to support the team’s operations. According to the memorandum establishing the electromagnetic spectrum operations team, the CMO is responsible for providing administrative support to the new team, to include providing the team with office space, information technology equipment, contracting, human resources, security, cross-functional team training, and other services, as appropriate. The memorandum also requires the team to work with the CMO to develop resource requirements for team operations for fiscal years 2019 and 2020 to ensure adequate resources are immediately available. However, according to a team official, funding was not provided to the team until late May 2019—over 3 months after the team was established and over 1 month after most of the team members were provided by their home units to work on the team full time. According to a team official, this funding was to be used for several team requirements, including dedicated office space, computer systems, travel funds, and contractor support. 
This funding was delayed in part because of disagreements over responsibility for funding the team under the terms of the memorandum establishing the team. Specifically, according to a team official, OCMO officials believed that funding should be provided by another organization, such as the Joint Staff. Team and Joint Staff officials told us that they believed the OCMO was responsible for this funding based on the memorandum establishing the team. A team official further stated that funding was provided only when the Deputy Secretary of Defense directed that funding be provided to the team. OCMO officials told us that because the team was not a budgeted activity for fiscal year 2019, the team was added to DOD’s unfunded requirements list. The Under Secretary of Defense (Comptroller) identified funds for the team via the unfunded requirements process at the end of April 2019. However, a team official told us funding for the team for future fiscal years has not been identified and responsibility for providing that funding is still unclear. OCMO officials told us that the team will continue to rely on the unfunded requirements process for funding, since the team is not a budgeted activity for fiscal year 2020, and would need to compete for funding through DOD’s program budget review process for fiscal year 2021 and later fiscal years. Those officials also told us that the team has not yet signed a memorandum of agreement that is required to execute transfer of the funds to the team. A team official told us the team had not yet signed the memorandum because it believed the memorandum would transfer responsibility for funding the team from OCMO to the team. As noted previously, team officials believe the OCMO is responsible for this funding based on the memorandum establishing the team. According to a team official, this delay in funding hampered the team’s ability to achieve full operating capability. 
For example, until late May the team was working from the Pentagon Conference Center and OCMO conference rooms with only one secure laptop. A team official told us in June 2019 that though the team has moved into its own office space, that space does not have the level of security required for the team to work on a third of its initiatives. As a result, the team was also delayed in conducting mission analysis, work plan development, organizational design, and production of executive-level briefings. A team official told us the team expects to be at full operating capability in late July 2019. Leading practices for implementing effective cross-functional teams highlight the importance of senior management providing teams with access to resources. In addition, Standards for Internal Control in the Federal Government state that agencies’ management should assign responsibility to achieve the entity’s objectives. If DOD does not clarify roles and responsibilities for providing funding for the new cross-functional team, the Acting CMO and the electromagnetic spectrum operations team may continue to have delays in funding and those delays may negatively affect the team’s ability to conduct its work and to meet its objectives. Conclusions Section 911 of the NDAA for Fiscal Year 2017 called for organizational and management reforms to assist DOD in addressing challenges that have hindered collaboration and integration across the department. The department has taken some steps to implement the section 911 requirements, but still has not met statutory time frames for implementing key requirements intended to support its cross-functional teams and to advance a more collaborative culture within the department. Setting specific internal deadlines would help ensure action on these outstanding statutory requirements. 
Moreover, DOD has established a new electromagnetic spectrum operations cross-functional team under section 911—one of the only requirements for which the department has made progress since our last report—but has not ensured that the team will have the funding it needs beyond fiscal year 2019 to maintain full operational capability and accomplish its assigned objectives. Senior leadership commitment to fully supporting this team and fulfilling all section 911 requirements could help the department make important advances in the type of collaboration necessary for the department to accomplish some of its most ambitious goals. Recommendations for Executive Action We are making the following six recommendations to DOD: The Secretary of Defense should ensure that the CMO meets DOD’s August 2019 deadline for final submission of the organizational strategy to the Secretary of Defense for review and issuance. (Recommendation 1) The Secretary of Defense should ensure that the CMO meets DOD’s September 2019 deadline for review and approval of DOD’s guidance on cross-functional teams and final submission to the Secretary for review and issuance. (Recommendation 2) The Secretary of Defense should ensure that the CMO meets DOD’s September 2019 deadline for review and approval of DOD’s training curriculum for cross-functional team members and their supervisors. (Recommendation 3) The Secretary of Defense should ensure that the CMO meets DOD’s September 2019 deadline for review and approval of DOD’s training curriculum for presidential appointees. (Recommendation 4) The Secretary of Defense should ensure that the CMO meets DOD’s November 2019 deadline for drafting, review, and approval of DOD’s report on the success and failures of cross-functional teams and final submission to the Secretary for review and approval. 
(Recommendation 5) The Secretary of Defense should ensure that the CMO and the electromagnetic spectrum operations cross-functional team clarify roles and responsibilities for providing administrative support and funding for the team beyond fiscal year 2019 in accordance with the memorandum establishing the team. (Recommendation 6) Agency Comments and Our Evaluation We provided a draft of this report to DOD for review and comment. In written comments that are reproduced in appendix IV, DOD concurred with our recommendations. DOD officials provided separate oral technical comments, which we incorporated as appropriate. In its response, DOD provided new information on a timeline for completing the outstanding section 911 requirements. Specifically, DOD updated its internal deadline for submission of the organizational strategy to the Secretary of Defense from July 2019 to August 2019. DOD also stated that it plans to issue the guidance on cross-functional teams and training for cross-functional team members, their supervisors, and presidential appointees by September 2019 and complete its report on the successes and failures of cross-functional teams by November 2019. We updated our first five recommendations to reflect this information. Establishing these timelines is an important step forward in meeting the statutory requirements under section 911 as well as addressing our recommendations. As part of our next and final audit of DOD’s implementation of section 911 requirements, we will assess the extent to which the department has met these new internal deadlines and fully addressed our recommendations in this report. Fully addressing these outstanding requirements will strengthen DOD’s ability to effectively implement its cross-functional teams and advance a collaborative culture within the department. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, and DOD’s Deputy Chief Management Officer. 
In addition, the report is available at no charge on our website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2775 or fielde1@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. Appendix I: Prior GAO Reports on the Department of Defense’s (DOD) Implementation of Section 911 of the National Defense Authorization Act (NDAA) for Fiscal Year 2017 Section 911 of the NDAA for Fiscal Year 2017 included a provision for us—every 6 months after the date of enactment on December 23, 2016, through December 31, 2019—to submit to the congressional defense committees a report. Each report is to set forth a comprehensive assessment of the actions that DOD has taken pursuant to section 911 during each 6-month period and cumulatively since the NDAA’s enactment. Table 1 identifies our four prior reports on DOD’s implementation of section 911 and the status of the five recommendations from those reports. Appendix II: Summary of Requirements in Section 911 of the National Defense Authorization Act for Fiscal Year 2017 Section 911 of the National Defense Authorization Act for Fiscal Year 2017 requires the Secretary of Defense to take several actions. Table 2 summarizes these requirements, the due date, and the date completed, if applicable, as of June 2019. Appendix III: Leading Practices for Implementing Effective Cross-Functional Teams In February 2018, we reported on eight leading practices for implementing effective cross-functional teams. Table 3 identifies these leading practices and their related key characteristics. 
Appendix IV: Comments from the Department of Defense Appendix V: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Margaret Best (Assistant Director), Tracy Barnes, Arkelga Braxton, Sierra Hicks, Michael Holland, Matthew Kienzle, Amie Lesser, Ned Malone, Judy McCloskey, Sheila Miller, Richard Powelson, Daniel Ramsey, Ron Schwenn, and Sarah Veale made key contributions to this report.
Why GAO Did This Study DOD continues to confront organizational challenges that hinder collaboration. To address these challenges, section 911 of the NDAA for Fiscal Year 2017 directed the Secretary of Defense to, among other things, issue an organizational strategy that identifies critical objectives that span multiple functional boundaries; establish cross-functional teams to support this strategy; and provide related guidance and training. The NDAA for Fiscal Year 2017 also included a provision for GAO to assess DOD's actions in response to section 911. This report assesses the extent to which DOD has made progress in implementing the requirements of section 911, including establishing a new cross-functional team on electromagnetic spectrum operations. GAO reviewed documentation, interviewed cross-functional team members and other DOD officials, and compared DOD's actions to section 911 requirements and leading practices for cross-functional teams. What GAO Found The Department of Defense (DOD) is up to 21 months late in fully addressing five of seven requirements of section 911 of the National Defense Authorization Act (NDAA) for Fiscal Year 2017. These remaining five requirements are designed to strengthen collaboration within the department to foster effective and efficient achievement of objectives and outputs (see figure). DOD has not addressed most of these remaining requirements of section 911 largely because the Chief Management Officer (CMO) has not approved the documents drafted to meet the requirements or coordinated department-wide review of the documents and provided them for Secretary of Defense issuance. According to Office of the CMO (OCMO) officials, some of the draft documents were provided to the CMO for review and approval as early as August 2018. 
After providing a draft of this report to the department for comment, GAO learned that the organizational strategy was circulated for department coordination in July 2019, with components expected to provide input by August 2019. However, while the OCMO has set an internal time frame for the organizational strategy, it has not set similar time frames for completing the other four remaining requirements, such as delivering guidance and training on cross-functional teams. GAO previously reported that establishing internal deadlines with key milestones and deliverables is important for tracking progress and implementing actions effectively. DOD established a cross-functional team pursuant to section 911 on electromagnetic-spectrum operations (EMSO), but according to a team official, funding for the team was delayed. EMSO refers to those activities consisting of electronic warfare and joint electromagnetic-spectrum management operations used to exploit, attack, protect, and manage the electromagnetic operational environment to achieve the commander's objectives. According to the memorandum establishing the team, the CMO is required to provide administrative support to and coordinate with the team to ensure adequate resources are immediately available. However, team officials stated that this funding was delayed in part because of disagreements over responsibility for funding the team under the terms of this memorandum. Moreover, according to a team official, plans for funding in future fiscal years have not been developed. If DOD does not clarify roles and responsibilities for funding the team, the CMO and the EMSO team may face additional delays securing funding, which could negatively affect the team's ability to conduct its work and meet its objectives. 
What GAO Recommends

GAO is making six recommendations, including that DOD set and ensure that it meets specific internal deadlines for review and approval of outstanding requirements of section 911, and that DOD clarify roles and responsibilities for providing funding for the EMSO cross-functional team. DOD concurred with GAO's recommendations and set deadlines for addressing the remaining requirements.
GAO-20-170SP
Background

To help manage its multi-billion dollar acquisition investments, DHS has established policies and processes for acquisition management, requirements development, test and evaluation, and resource allocation. The department uses these policies and processes to deliver systems that are intended to close critical capability gaps, helping enable DHS to execute its missions and achieve its goals.

Acquisition Management Policy

DHS policies and processes for managing its major acquisition programs are primarily set forth in its Acquisition Management Directive 102-01 and Acquisition Management Instruction 102-01-001. DHS issued the initial version of this directive in November 2008 in an effort to establish an acquisition management system that effectively provides required capability to operators in support of the department’s missions. DHS has issued multiple updates to its acquisition management directive and instruction, in part to be responsive to GAO’s recommendations. DHS issued the current version of the directive in February 2019 and the current version of the instruction in May 2019; however, we did not assess programs against these updates because the programs in our review established initial baselines prior to the approval of the directive and instruction. DHS’s Under Secretary for Management is currently designated as the department’s Chief Acquisition Officer and, as such, is responsible for managing the implementation of the department’s acquisition policies. DHS’s Under Secretary for Management serves as the acquisition decision authority for the department’s largest acquisition programs, those with LCCEs of $1 billion or greater. Component Acquisition Executives—the most senior acquisition management officials within each of DHS’s components—may be delegated acquisition decision authority for programs with cost estimates between $300 million and less than $1 billion.
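The decision-authority thresholds described above can be sketched as a small lookup. This is an illustrative sketch of the dollar thresholds stated in the report, not DHS's actual tooling; the function name and the example cost figures are hypothetical.

```python
def decision_authority(lcce_millions: float) -> str:
    """Map a program's life-cycle cost estimate (LCCE), in millions of
    dollars, to the likely acquisition decision authority described in
    DHS policy: $1 billion or more -> Under Secretary for Management;
    $300 million to less than $1 billion -> may be delegated to a
    Component Acquisition Executive."""
    if lcce_millions >= 1000:
        return "Under Secretary for Management"
    elif lcce_millions >= 300:
        return "Component Acquisition Executive (if delegated)"
    else:
        return "Component-level oversight"

# Hypothetical programs: a multi-billion dollar program and a mid-size one.
print(decision_authority(15000))  # → Under Secretary for Management
print(decision_authority(450))   # → Component Acquisition Executive (if delegated)
```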
Table 1 identifies how DHS has categorized the 29 major acquisition programs we reviewed in this report, and table 8 in appendix II specifically identifies the programs within each level. DHS acquisition management policy establishes that a major acquisition program’s decision authority shall review the program at a series of predetermined acquisition decision events (ADE) to assess whether the major program is ready to proceed through the acquisition life cycle phases. Depending on the program, these events can occur within months of each other or be spread over several years. Figure 1 depicts the acquisition life cycle in the March 2016 version of DHS acquisition management policy. An important aspect of an ADE is the decision authority’s review and approval of key acquisition documents. See table 2 for a description of the type of key acquisition documents identified in the March 2016 acquisition management directive and instruction that required department-level approval before a program moves to the next acquisition phase. DHS acquisition management policy establishes that the APB is the agreement between program, component, and department-level officials establishing how systems being acquired will perform, when they will be delivered, and what they will cost. Specifically, the APB establishes a program’s schedule, costs, and key performance parameters. DHS defines key performance parameters as a program’s most important and non-negotiable requirements that a system must meet to fulfill its fundamental purpose. For example, a key performance parameter for an aircraft may be airspeed and a key performance parameter for a surveillance system may be detection range. The APB establishes objective (target) and threshold (maximum acceptable for cost, latest acceptable for schedule, and minimum acceptable for performance) baselines. 
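The objective/threshold baseline structure described above can be illustrated with a minimal check. All values here are hypothetical; this is a sketch of the rule that a threshold is the maximum acceptable cost, the latest acceptable date, and the minimum acceptable performance, not an implementation of any DHS system.

```python
from datetime import date

# Hypothetical APB thresholds for a notional surveillance program.
apb = {
    "cost_threshold_millions": 1200.0,        # maximum acceptable cost
    "schedule_threshold": date(2021, 3, 31),  # latest acceptable delivery
    "performance_threshold_km": 10.0,         # minimum acceptable detection range
}

def exceeds_thresholds(cost_millions, delivery, detection_range_km, apb):
    """Return the list of APB thresholds the program fails to meet.
    A non-empty list would indicate a breach under the policy described."""
    failures = []
    if cost_millions > apb["cost_threshold_millions"]:
        failures.append("cost")
    if delivery > apb["schedule_threshold"]:
        failures.append("schedule")
    if detection_range_km < apb["performance_threshold_km"]:
        failures.append("performance")
    return failures

print(exceeds_thresholds(1250.0, date(2021, 1, 15), 12.0, apb))  # → ['cost']
```

Meeting a threshold exactly is acceptable; only exceeding the cost, slipping past the date, or falling below the performance floor counts as a failure.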
According to DHS policy, if a program fails to meet any schedule, cost, or performance threshold approved in the APB, it is considered to be in breach. Programs in breach are required to notify their acquisition decision authority and develop a remediation plan that outlines a timeframe for the program to return to its APB parameters, re-baseline—that is, establish new schedule, cost, or performance goals—or have a DHS-led program review that results in recommendations for a revised baseline. In addition to the acquisition decision authority, other bodies and senior officials support DHS’s acquisition management function:

The Acquisition Review Board reviews major acquisition programs for proper management, oversight, accountability, and alignment with the department’s strategic functions at ADEs and other meetings as needed. The board is chaired by the acquisition decision authority or a designee and consists of individuals who manage DHS’s mission objectives, resources, and contracts.

The Office of Program Accountability and Risk Management (PARM) is responsible for DHS’s overall acquisition governance process, supports the Acquisition Review Board, and reports directly to the Under Secretary for Management. PARM develops and updates program management policies and practices, reviews major programs, provides guidance for workforce planning activities, provides support to program managers, and collects program performance data.

Components, such as U.S. Customs and Border Protection, the Transportation Security Administration, and the U.S. Coast Guard, sponsor specific acquisition programs. The head of each component is responsible for oversight of major acquisition programs once the programs complete delivery of all planned capabilities to end users. Component Acquisition Executives within the components are responsible for overseeing the execution of their respective portfolios.
Program management offices, also within the components, are responsible for planning and executing DHS’s individual programs. They are expected to do so within the cost, schedule, and performance parameters established in their APBs. If they cannot do so, programs are considered to be in breach and must take specific steps, as noted above. Figure 2 depicts the relationship between acquisition managers at the department, component, and program level.

Requirements Development Process

In 2016, we found that DHS had not effectively implemented or adhered to its review process for major acquisitions and recommended that DHS reinstate the Joint Requirements Council (JRC) to review and approve acquisition requirements and assess potential duplication of effort across the department. DHS established a JRC to develop and lead a component-driven joint requirements process for the department. In March 2016, DHS revised its policy instruction to reflect the addition of the JRC as an acquisition oversight body. Among other responsibilities, the JRC is to provide requirements-related advice and validate key acquisition documentation to prioritize requirements and inform DHS investment decisions among its components. The JRC chair is a member of the Acquisition Review Board and advises the board on capability gaps, needs, and requirements at key milestones in the acquisition life cycle. In March 2019, we reported that the JRC could better fulfill its mission by identifying overlapping or common requirements, and by making recommendations to senior leadership to inform budget decisions and help ensure that DHS uses its finite investment resources wisely. We will continue to monitor the JRC’s efforts through GAO’s high risk work.

Test and Evaluation Policy

In May 2009, DHS established policies that describe processes for testing the capabilities delivered by the department’s major acquisition programs.
The primary purpose of test and evaluation is to provide timely, accurate information to managers, decision makers, and other stakeholders to reduce programmatic, financial, schedule, and performance risks. We provide an overview of each of the 29 programs’ test activities in the individual program assessments presented in appendix I. DHS testing policy assigns specific responsibilities to particular individuals and entities throughout the department:

Program managers have overall responsibility for planning and executing their programs’ testing strategies, including scheduling and funding test activities and delivering systems for testing. They are also responsible for controlling developmental testing, which is used to assist in the development and maturation of products, manufacturing, or support processes. Developmental testing includes engineering-type tests used to verify that design risks are minimized, substantiate achievement of contract technical performance, and certify readiness for operational testing.

Operational test agents are responsible for planning, conducting, and reporting on operational test and evaluation to identify whether a system can meet its key performance parameters and provide an evaluation of the operational effectiveness, suitability, and cybersecurity of a system in a realistic environment. Operational effectiveness refers to the overall ability of a system to provide a desired capability when used by representative personnel. Operational suitability refers to the degree to which a system can be placed into field use and sustained satisfactorily. Operational cybersecurity refers to the degree to which a system is able to accomplish its mission in a cyber-contested environment. The operational test agents may be organic to the component, another government agency, or a contractor, but must be independent of the developer to present credible, objective, and unbiased conclusions.
The Director, Office of Test and Evaluation is responsible for approving major acquisition programs’ operational test agent and test and evaluation master plans, among other things. A program’s test and evaluation master plan must describe the developmental and operational testing needed to determine technical performance and operational effectiveness, suitability, and cybersecurity. As appropriate, the Director is also responsible for observing operational tests, reviewing operational test agents’ reports, and assessing the reports. Prior to a program’s ADE 3, the Director provides the program’s acquisition decision authority a letter of assessment that includes an appraisal of the program’s operational test, a concurrence or non-concurrence with the operational test agent’s evaluation, and any further independent analysis. As an acquisition program proceeds through its life cycle, the testing emphasis moves gradually from developmental testing to operational testing. See figure 3.

Resource Allocation Process

DHS has established a planning, programming, budgeting, and execution process to allocate resources to acquisition programs and other entities throughout the department. DHS uses this process to produce the department’s annual budget request and the multi-year funding plans presented in the FYHSP report; the FYHSP is a database that contains, among other things, 5-year funding plans for DHS’s major acquisition programs. According to DHS guidance, the 5-year plans should allow the department to achieve its goals more efficiently than an incremental approach based on 1-year plans. DHS guidance also states that the FYHSP articulates how the department will achieve its strategic goals within fiscal constraints. At the outset of the annual resource allocation process, the department’s Office of Strategy, Policy, and Plans and Office of the Chief Financial Officer provide planning and fiscal guidance, respectively, to the department’s components.
In accordance with this guidance, the components should submit 5-year funding plans to the Chief Financial Officer. These plans are subsequently reviewed by DHS’s senior leaders, including the DHS Secretary and Deputy Secretary. DHS’s senior leaders are expected to modify the plans in accordance with their priorities and assessments, and they document their decisions in formal resource allocation decision memorandums. DHS submits the revised funding plans to the Office of Management and Budget, which uses them to inform the President’s annual budget request—a document sent to Congress requesting new budget authority for federal programs, among other things. In some cases, the funding appropriated to certain accounts in a given fiscal year remains available for obligation and can be carried over to subsequent fiscal years. Figure 4 depicts DHS’s annual resource allocation process. Federal law requires DHS to submit an annual FYHSP report to Congress at or about the same time as the President’s budget request. Two offices within DHS’s Office of the Chief Financial Officer support the annual resource allocation process:

The Office of Program Analysis and Evaluation (PA&E) is responsible for establishing policies for the annual resource allocation process and overseeing the development of the FYHSP. In this role, PA&E develops the Chief Financial Officer’s planning and fiscal guidance, reviews the components’ 5-year funding plans, advises DHS’s senior leaders on resource allocation issues, maintains the FYHSP database, and submits the annual FYHSP report to Congress.

The Cost Analysis Division is responsible for reviewing, analyzing, and evaluating acquisition programs’ LCCEs to ensure the costs of DHS programs are presented accurately and completely, in support of resource requests.
This division also supports affordability assessments of the department’s budget, in coordination with PA&E, and develops independent cost analyses for major acquisition programs and independent cost estimates upon request by DHS’s Under Secretary for Management or Chief Financial Officer.

Reflecting Improvements Since 2018, 25 of 27 Programs Are on Track to Meet Current Schedule and Cost Goals, with Two Programs Breaching Goals

Of the 27 programs we assessed with approved APBs, 25 are on track to meet their current schedule and cost goals as of August 2019. Of these 25 programs, 11 programs revised their schedule and cost goals in response to a prior breach of their APBs or to incorporate program changes. Of the 27 programs, two programs breached their schedule or cost goals between January 2018 and August 2019, and as of August 2019 had not yet re-baselined. This shows improvement from our prior review where seven programs were in breach. In addition, some programs, although currently on track to meet their goals, are nonetheless facing risks of breaching schedule or cost goals, or have plans to revise their baseline in the future. Further, as a result of the fiscal year 2019 partial government shutdown, five programs received approval for schedule adjustments, and other programs reported difficulty obligating funds before the end of the fiscal year. Finally, our analysis showed that seven programs are projected to experience an acquisition funding gap in fiscal year 2020, but, according to program officials, these gaps will be mitigated. We also reviewed two programs that were early in the acquisition process and planned to establish department-approved schedule and cost goals during our review. However, these programs were delayed in getting department approval for their initial APBs for various reasons; therefore, we excluded them from our assessment of whether programs were on track to meet schedule and cost goals.
We plan to assess these programs in our future reviews; however, we provide more details on these two programs in the individual assessments in appendix I. Table 3 summarizes our findings regarding the status of major acquisition programs meeting their schedule and cost goals, and we present more detailed information after the table.

Twenty-five of 27 Programs on Track to Meet Schedule and Cost Goals as of August 2019

We found that 25 of 27 programs we reviewed with department-approved APBs were on track to meet their current baseline schedule and cost goals as of August 2019. Of these, 11 programs met schedule and cost goals established prior to December 2017. Six of these programs are in the process of revising their baselines or plan to revise their baselines in the near future to account for program changes or to add capabilities. For example, the U.S. Coast Guard’s Fast Response Cutter and National Security Cutter programs plan to revise their baselines because they received additional funding to procure more cutters than reflected in their current baselines. Program officials said these programs are planning to update their APBs in fiscal year 2020 to reflect these changes. In addition, as shown in table 3, five of the 25 programs that met schedule and cost goals had only recently established initial APBs (between January 2018 and August 2019). Three of these five—Customs and Border Protection’s Biometric Entry-Exit program and Border Wall System Program, and the U.S. Coast Guard’s Polar Security Cutter—are new Level 1 major acquisition programs and as of August 2019 their combined life cycle costs were approximately $15 billion. In addition, DHS recently approved baselines for two Transportation Security Administration programs—Advanced Technology and Credential Authentication Technology.
These programs were previously projects under the Passenger Screening Program, but according to Transportation Security Administration officials, transitioned into standalone programs to better align program office staffing to capabilities and focus on mitigating capability gaps, among other things.

Eleven of the 25 Programs on Track Had Revised Their Schedule and Cost Goals

Eleven of the 25 programs that we found to be on track to meet current schedule and cost goals revised schedule and cost goals between January 2018 and August 2019. DHS leadership approved revised baselines for these programs for two primary reasons: to remove the program from breach status or to account for program changes, or both. Five of the 11 programs that revised their baselines did so in response to a breach of their cost or schedule goals and were subsequently removed from breach status. See table 4. DHS leadership approved revised baselines for these five programs following various actions by the program offices such as:

Customs and Border Protection’s Automated Commercial Environment breached its cost and schedule goals in April 2017, which Customs and Border Protection officials attribute to an underestimation of the level of effort needed to complete development. The program revised its approach to developing remaining functionality by removing some capability from the program’s baseline and delaying development until funding is provided. As shown in table 4, the full operational capability date was delayed. The program’s total life-cycle cost increase is primarily attributed to a change in how threshold cost goals were calculated.

Customs and Border Protection’s Medium Lift Helicopter re-baselined following a schedule breach of its ADE 3, among other things. As part of the re-baselining efforts, the program revised its cost goals to remove personnel costs and update the aircraft operational hours, among other things, which then resulted in a cost decrease of $515 million.
Officials reported that the effect of the breach on the program’s schedule was minimal because the program was able to make adjustments to its testing schedule to assess multiple aircraft concurrently.

DHS Management Directorate’s Homeland Advanced Recognition Technology re-baselined following multiple delays in awarding contracts and issues stemming from a subsequent bid protest. The re-baseline included a cost goal decrease resulting from an enhanced solution for biometric data storage.

U.S. Coast Guard’s H-65 Conversion - Sustainment Program re-baselined to address delays which USCG officials primarily attributed to underestimating the technical effort necessary to meet requirements. As part of the re-baseline, the program also added a service life extension program to extend aircraft service life by replacing obsolete components. The program’s total life-cycle cost threshold decreased by approximately $200 million from its prior APB. Coast Guard officials attribute the decrease to the program’s ability to reduce labor costs, among other things, by synchronizing the service life extension program with other aircraft upgrades.

U.S. Citizenship and Immigration Services’ Transformation program re-baselined in June 2018—lifting a strategic pause that limited new program development for 18 months. The program’s revised APB reflects a re-organization of the Transformation program as well as a new development strategy. The program breached its schedule in September 2016 when it failed to upgrade U.S. Citizenship and Immigration Services’ application processing information system to include applications for naturalization.

In addition, between January 2018 and August 2019, DHS leadership approved revisions to six programs’ baselines that were not prompted by a breach. These programs either planned to revise their baselines to incorporate changes in technology, among other things, or to make changes to their scope.
Customs and Border Protection’s Biometric Entry-Exit program revised its schedule goals in March 2019—after establishing an initial baseline in May 2018—to remove ADE 2C, the decision event when low-rate initial production is typically approved.

Customs and Border Protection’s Border Wall System Program revised its baseline in August 2018 to replace sections of the border wall system in the San Diego sector. In addition, in May 2019 the program received approval for an additional baseline to extend the border wall system in the Rio Grande Valley sector.

Customs and Border Protection’s Multi-role Enforcement Aircraft revised its baseline to increase the program’s quantity from 16 to 29 aircraft. The 16 aircraft from the prior APB provided maritime interdiction capabilities. The additional 13 aircraft are for air interdiction capabilities.

Cybersecurity and Infrastructure Security Agency’s National Cybersecurity Protection System Program revised its baseline in January 2018 to inform ADEs for the program’s information sharing and intrusion-prevention capabilities and to account for schedule and cost changes after bid protests. However, the program updated its APB again in October 2018 to address an error found in the LCCE. Specifically, the LCCE that provided the basis for the program’s APB cost goals did not accurately account for the program’s sunk costs. In addition, the program added an additional 2 years of costs to its LCCE and revised its approach to estimating threshold costs. Once revised, the program’s total life-cycle cost threshold increased by more than $1.7 billion (41 percent) from the program’s January 2018 APB. The program’s full operational capability date was extended by two years to March 2021.

Cybersecurity and Infrastructure Security Agency’s Next Generation Networks Priority Services revised its baseline in April 2018 to add capability to provide priority access for landline telephone calls to select government officials during emergencies.
As a result, the program’s full operational capability date was extended by 3 years—to December 2025—and total acquisition costs increased by $68 million (10 percent).

Transportation Security Administration’s Technology Infrastructure Modernization program revised its baseline in July 2019 to de-scope the program and narrow the definition of full operational capability. DHS leadership reported that by the time the program had delivered functions needed to meet the needs of end users, the Transportation Security Administration had updated and improved its legacy systems. As a result, costs decreased by $15 million (1 percent) and the program achieved full operational capability 3 years earlier than previously planned.

Two Programs Breached Schedule or Cost Goals and Some Programs Are at Risk of Breaching Goals in the Future

Between January 2018 and August 2019, two programs breached their schedule or cost goals—down from seven programs in our previous assessment. As of August 2019, neither of these programs had revised their baselines.

Customs and Border Protection’s Integrated Fixed Towers program declared a schedule breach of the program’s baseline in February 2019 as a result of delays in negotiations with the Tohono O’odham Nation—a sovereign Native American Nation—regarding access to tribal lands to construct towers and deploy systems. Customs and Border Protection subsequently reached an agreement with the Nation in March 2019. As of September 2019, the program was in the process of revising its APB to adjust deployments within the Nation’s land. Program officials anticipate the program’s full operational capability date will slip from September 2020 to March 2021 as a result of these actions.

Transportation Security Administration’s Electronic Baggage Screening Program updated its LCCE in August 2019, which exceeds its baseline operations and maintenance (O&M) cost threshold.
Transportation Security Administration officials attribute the program’s cost breach to an increase in maintenance costs related to sustaining screening technologies longer than initially planned. As of September 2019, the program’s revised APB, which TSA officials said will address the O&M cost increase, had not yet been approved. In addition, some of the programs on track as of August 2019 are facing risks that might lead to schedule slips or cost growth in the future.

For example, U.S. Coast Guard’s Offshore Patrol Cutter may experience cost increases and schedule slips in the future. Specifically, the program’s shipbuilder reported damages from Hurricane Michael in October 2018 that have resulted in a long-term degradation of its ability to produce the Offshore Patrol Cutters at the previously estimated cost and schedule. As of August 2019, the Coast Guard was still assessing the shipbuilder’s report on the damage sustained and the potential effect on the Offshore Patrol Cutter program.

U.S. Coast Guard’s Polar Security Cutter met established cost and schedule milestones between January 2018 and August 2019, but program officials stated that they anticipate a schedule slip because delivery of the lead ship in the awarded contract is two months after the program’s APB threshold date. We previously found that the program is at risk of experiencing future schedule delays and cost growth. The program’s schedule is driven by the need to address a potential gap in icebreaking capabilities once the Coast Guard’s only operational heavy polar icebreaker reaches the end of its service life as early as 2023. As a result, planned delivery dates are not informed by a realistic assessment of shipbuilding activities. We also found that the program is at risk of costing more than estimated because its LCCE—while adhering to most cost estimating best practices—is not fully reliable as it did not quantify the range of possible costs over the entire life of the program.
Customs and Border Protection’s Biometric Entry-Exit program plans to re-baseline and achieve ADE 3—which will authorize full-rate production—in September 2019. However, program officials stated that not all testing will be completed to inform the ADE 3. As a result, DHS leadership will not have data related to the Biometric Entry-Exit system’s resiliency to cyberattacks before making this decision. We provide more information in the individual program assessments in appendix I, and we will continue to monitor these programs in future assessments.

Effects from 2019 Partial Government Shutdown Include Schedule Milestone Adjustments and Difficulty Obligating Funds

Due to a lapse in appropriations for fiscal year 2019, the federal government partially shut down from December 22, 2018, to January 25, 2019. Most Level 1 and Level 2 acquisition program staff were furloughed during the partial government shutdown, which affected the execution of these programs. As a result, in March 2019, DHS’s Under Secretary for Management, in coordination with PARM, authorized Component Acquisition Executives to request up to a 3-month extension for any program schedule milestone date, and inform PARM of any proposed changes in writing. PARM officials stated that they developed this process to mitigate program schedule risks since the government shutdown was beyond the control of program officials. Five programs requested and received approval from DHS leadership to extend schedule milestones by 3 months. Of these, three programs reported that the 3-month extension will allow the programs to stay on track to meet their adjusted milestones—Federal Emergency Management Agency’s Logistics Supply Chain Management System, Customs and Border Protection’s Biometric Entry-Exit, and U.S. Coast Guard’s Medium Range Surveillance Aircraft programs.
However, Coast Guard officials stated that the Offshore Patrol Cutter program requested approval to extend the program’s ADE 2C milestone to give Coast Guard officials time to assess the shipbuilder’s report on damage caused by Hurricane Michael before determining the next steps for the program. The Cybersecurity and Infrastructure Security Agency’s Continuous Diagnostics and Mitigation program received approval to extend two schedule milestones—initial operational capability for two segments of the program—because the program experienced delays as a result of the partial government shutdown. In addition, DHS leadership previously directed the program to conduct an ADE 2B for a new segment by March 2019. The ADE 2B has been delayed 9 months to December 2019 to allow the program additional time to complete required acquisition documentation to inform the ADE. Programs also reported experiencing other effects of the partial government shutdown. Specifically, officials from several programs identified challenges in obligating funds by the end of the fiscal year due to the truncated timeframe. For example, Transportation Security Administration’s Electronic Baggage Screening Program officials reported that as a result of the partial government shutdown, contract awards had been delayed. These officials explained that contracting obligation activities from the component were compressed into the last two quarters of fiscal year 2019 and the program had to compete for contracting officer resources within the limited timeframe.

Affordability Gaps Reported in DHS’s 2020-2024 Funding Plan Are Generally Mitigated by Funding from Other Sources

Based on the information presented in the 2020-2024 FYHSP report to Congress, DHS’s acquisition portfolio is not affordable over the next 5 years, meaning that the anticipated funding will not be adequate to support the programs.
But our analysis found the reported acquisition funding gaps may be overstated when additional information is taken into account. For example, the fiscal year 2020-2024 FYHSP report contained acquisition affordability tables for 21 of the 27 programs we assessed that have approved APBs. Of the 21 programs included in the FYHSP report, 11 were projected to have an acquisition affordability gap in fiscal year 2020. However, some of the cost information used to develop these projections was outdated since the FYHSP report—which was issued in August 2019—relied on cost estimates developed in April 2018. Therefore, we updated the analysis using the programs’ current LCCEs based on the approved scope of the program, as of August 2019 (as presented in the individual assessments in appendix I). In addition, we discussed funding gaps with program officials to determine additional funding sources, such as fees collected, funding from previous fiscal years that remained available for obligation—known as carryover funding, funds provided by components, or funding received above what was originally requested. Based on our analysis, we found that seven programs may have acquisition funding gaps in fiscal year 2020 rather than the 11 identified in the FYHSP report. However, the affordability gap for all seven programs we identified may be overstated because program officials reported that these programs either had carryover funding, received funding above what was requested, or anticipate receiving funding from the component to mitigate the affordability gap, as shown in table 5. Further, officials from several programs in our review told us that the programs were projected to experience a funding gap that could cause future program execution challenges, such as cost growth, or that programs were taking steps to mitigate funding gaps. 
For example, Customs and Border Protection’s Biometric Entry-Exit program—which is primarily fee-funded—conducted an affordability analysis that showed projected fees had declined. To mitigate risks of a potential affordability gap, program officials stated the number of officers to conduct enforcement activities at airport departure gates was reduced and the program is working with the component to identify other sources of funding. In addition, DHS Management Directorate’s Homeland Advanced Recognition Technology program reported that the program will use carryover funding to address the program’s affordability gap in fiscal year 2020. However, the program will also need to defer development of some additional capabilities to 2021 to remain affordable. In addition, officials from Customs and Border Protection’s Border Wall System Program stated the program is mitigating future acquisition funding gaps, in part by not developing its baseline until after funding amounts are determined. According to officials, this was necessary to mitigate program risks due to uncertainty in funding; however, through DHS’s resource allocation process, the program has requested $5 billion each year from fiscal year 2020 to fiscal year 2024. We elaborate on programs’ affordability over the next 5 years in the individual program assessments in appendix I.

Cost and Performance Goals Generally Trace to Required Documents, but Schedule Goals Do Not

Traceability, which DHS policy and acquisition best practices call for, helps ensure that program goals are aligned with program execution plans, and that a program’s various stakeholders have an accurate and consistent understanding of those plans and goals. We found that the cost and performance goals in the acquisition programs’ approved APBs generally traced to the estimated costs identified in LCCEs and key performance parameters identified in operational requirements documents.
That is, information in the APB matched the document required to be used as the basis for the baselines. In contrast, the schedule goals in the approved APBs generally did not trace to the Integrated Master Schedule (IMS), as required by the DHS acquisition management instruction and as a best practice identified in GAO’s Schedule Assessment Guide. Similarly, we found the required basis for the cost and performance goals is consistently identified in DHS acquisition management policy and guidance, whereas the basis for the schedule goals is not.

Acquisition Program Baselines Generally Trace to Required Cost and Performance Documents, but Not to Schedule Documents

We found that cost and performance goals in approved APBs generally traced to estimated costs in LCCEs and key performance parameters in operational requirements documents. However, schedule goals were generally not traceable to the IMSs, as required by DHS acquisition management instruction and as identified as a best practice in GAO’s Schedule Assessment Guide. Of the 27 programs we assessed with established baselines, 21 established or revised their APBs after DHS updated its acquisition management instruction in March 2016, which was the most current version of the guidance when we initiated our review. Table 6 shows the results of our analysis for the traceability of baselines to cost, schedule, and performance documents for those 21 programs. As shown in table 6, the APB goals traced to the key performance parameters in the operational requirements documents for all 21 programs that we reviewed. Generally, the APB goals traced to the costs in the LCCEs, though we found that three programs’ cost goals were not traceable. For example: The APB total life-cycle cost goals for Customs and Border Protection’s Tactical Communications Modernization program traced to the program’s LCCE, but the separate acquisition and O&M costs were not traceable.
The Transportation Security Administration’s Electronic Baggage Screening Program did not include sunk costs in the LCCE, and as a result the APB cost goals did not trace. In contrast, we could trace all schedule events and dates in the approved APBs to the programs’ IMSs for only six of 21 programs. There was variation in how the programs’ APBs lacked traceability to the IMS. For example: The IMS for Customs and Border Protection’s Border Wall System Program estimates the full operational capability dates to be between October 2021 and December 2021, whereas the approved APB includes an objective date of October 2022 and a threshold date of December 2022. The APB for the U.S. Citizenship and Immigration Services’ Transformation program does not identify a source for the schedule baseline. Program officials told us that they do not have an IMS and instead they use the schedule in the program’s release roadmap, a document that information technology programs use to communicate how they will iteratively deliver features. However, schedule events identified in the APB, such as full operational capability, were not identified in the release roadmap. Similarly, we found programs that developed an IMS but did not include all future APB milestones, such as Cybersecurity and Infrastructure Security Agency’s Continuous Diagnostics and Mitigation and Transportation Security Administration’s Credential Authentication Technology. According to GAO’s Schedule Assessment Guide, schedules should be verified to ensure that they are vertically traceable—that is, verified to ensure the consistency of dates, status, and scope requirements between different levels of the schedule and management documents. Further, this guide states that a schedule baseline signifies a consensus of stakeholders on the required sequence of events, resources, and key dates. The IMS is more accurate when stakeholders agree on the underlying assumptions.
These stakeholders would include, for example, program offices, end users, and component and DHS leadership. Further, DHS acquisition policy requires programs to obtain review and approval of LCCEs and operational requirements documents from various stakeholders within components and DHS headquarters. However, DHS acquisition policy states that approval of IMSs is based on DHS guidance and component policy and that program managers will provide the IMS to DHS in support of the acquisition review process. Officials from PARM and the Office of the Chief Financial Officer told us that the components vary in their capacity to develop schedules and assess schedule risks and there is a lack of expertise within the department to review program schedules. The lack of traceability between IMSs and schedule goals in the APB indicates that DHS does not have an appropriate oversight process in place to ensure APBs trace to schedule goals in the IMSs, in accordance with DHS policy and GAO’s best practices. Without this traceability, DHS cannot ensure that the understanding of program schedules among different stakeholders is consistent and accurate. As a result, DHS leadership may be approving program schedule goals that do not align with program execution plans. DHS Acquisition Policy and Guidance Consistently Identifies the Source of Cost and Performance Goals but Not of Schedule Goals We found that LCCEs and operational requirements documents are consistently identified as the basis of cost and performance goals in DHS’s acquisition management policy and guidance. However, we also found that the documents do not consistently require that an IMS be used as the basis of schedule goals. Specifically, DHS’s acquisition management instruction and DHS’s Systems Engineering Life Cycle Guidebook—which outlines the technical framework for DHS’s acquisition management system—differ regarding the source of APB schedule milestone dates. 
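The vertical-traceability check described above—verifying that every APB schedule milestone appears in the IMS with a date no later than the APB threshold—can be sketched programmatically. The function name and milestone data below are hypothetical illustrations of the check, not an actual DHS tool or any program's real dates.

```python
# Illustrative sketch (hypothetical data): checking vertical traceability of
# APB schedule goals against an integrated master schedule (IMS). An APB
# milestone fails the check if it is absent from the IMS or if the IMS
# schedules it later than the APB threshold date.

from datetime import date

def untraceable_milestones(apb_thresholds, ims_dates):
    """Return APB milestones missing from the IMS or scheduled past threshold."""
    problems = []
    for milestone, threshold in apb_thresholds.items():
        ims_date = ims_dates.get(milestone)
        if ims_date is None or ims_date > threshold:
            problems.append(milestone)
    return problems

# Hypothetical baseline: the IMS slips one APB event and omits another
apb = {"IOC": date(2021, 6, 30), "FOC": date(2022, 12, 31)}
ims = {"IOC": date(2021, 9, 30)}  # "IOC" past threshold; "FOC" never scheduled

print(untraceable_milestones(apb, ims))  # ['IOC', 'FOC']
```

Both failure modes in the hypothetical data correspond to patterns the review found: APB events absent from the IMS entirely, and IMS dates inconsistent with approved APB thresholds.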
Table 7 summarizes our findings on DHS’s acquisition policy and guidance related to developing APB cost, schedule, and performance goals. DHS’s acquisition management instruction states that the APB should trace to the IMS, which is consistent with GAO’s Schedule Assessment Guide. This instruction differs from the guidance in the Systems Engineering Life Cycle Guidebook, which, in contrast, directs programs to use the APB as an input when developing the IMS. PARM officials said they were unaware of the inconsistency and confirmed that the IMS should provide the basis of APB schedule goals, as identified in DHS’s acquisition management instruction. PARM officials also acknowledged that the information related to schedule development should be consistent across all of DHS’s policies, instructions, and guidebooks. Conflicting agency-wide policy and guidance can lead to a lack of clarity and consistency on how programs develop their schedules. In addition, the lack of a well-developed schedule can contribute to poor acquisition outcomes, such as increased costs and delayed delivery of capabilities needed by end users. As previously noted, DHS’s 2019 update to its acquisition management directive and associated instruction addressed a GAO recommendation related to better defining requirements before establishing acquisition program baselines. PARM officials told us they plan to update the Systems Engineering Life Cycle Guidebook by the end of calendar year 2019 to account for the revisions in the acquisition management directive and associated instruction. At that time, they also plan to correct the inconsistency related to the documents used to develop APB schedule goals.

Conclusions

Since we began reviewing DHS’s portfolio of major acquisitions in 2015, the department has strengthened implementation of its policies to improve acquisition oversight.
These efforts have begun to yield better results as the performance of DHS’s major acquisition portfolio has improved compared to our last review. As DHS major acquisition policy has evolved over time, the department has put in place oversight and approval processes that help ensure cost and performance goals are clear, consistent, and trace to key acquisition documents serving as the basis for those goals. However, opportunities remain for DHS to provide better oversight of major acquisition programs’ schedule goals, as we found that these goals generally did not trace to the integrated master schedules per DHS policy. When schedule goals are not traceable, DHS decision makers cannot be sure that the schedule presented is consistent and accurate. Until DHS develops an oversight process to ensure schedules are developed and updated appropriately, the department cannot ensure that its most expensive acquisition programs are able to deliver capabilities needed by end users when promised. In addition, we found inconsistencies within DHS’s major acquisition policy and system engineering guidance in identifying the basis of schedule goals. Without consistent schedule development guidance, DHS has no way of knowing that programs establish schedules in a consistent manner and in accordance with GAO’s scheduling best practices.

Recommendations for Executive Action

We are making the following two recommendations to DHS. The Secretary of Homeland Security should ensure that the Under Secretary for Management develops an oversight process to confirm that programs’ schedule goals are developed and updated in accordance with GAO’s Schedule Assessment Guide, to include ensuring traceability between APB schedule goals and IMSs.
(Recommendation 1) The Secretary of Homeland Security should ensure that the Under Secretary for Management revises the schedule development guidance in the Systems Engineering Life Cycle Guidebook to state clearly that an IMS should be used as the basis for APB schedule goals. (Recommendation 2)

Agency Comments

We provided a draft of this report to DHS for review and comment. DHS’s comments are reproduced in appendix III. DHS also provided technical comments, which we incorporated as appropriate. In its comments, DHS concurred with both of our recommendations and identified actions it planned to take to address them. We are sending copies of this report to the appropriate congressional committees and the Acting Secretary of Homeland Security. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or makm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.

Appendix I: Program Assessments

This appendix presents individual assessments for each of the 29 programs we reviewed. Each assessment presents information current as of August 2019. They include standard elements, such as an image, a program description, and summaries of the program’s progress in meeting cost and schedule goals, performance and testing activities, and program management-related issues, such as staffing. The information presented in these assessments was obtained from Department of Homeland Security (DHS) documentation, answers to our questionnaire by DHS officials, and interviews with program officials, and includes our analysis of program information. Each assessment also includes the following figures: Fiscal Years 2020–2024 Affordability.
This figure compares the funding plan presented in the Future Years Homeland Security Program report to Congress for fiscal years 2020-2024 to the program’s current cost estimate. We use this funding plan because the data were approved by DHS and the Office of Management and Budget, and were submitted to Congress to inform the fiscal year 2020 budget process. The data do not account for other potential funding sources, such as carryover funding. Acquisition Program Baseline (APB) Thresholds vs. Current Estimate. This figure compares the program’s cost thresholds from the initial APB approved after DHS’s acquisition management policy went into effect in November 2008 and the program’s current DHS-approved APB to the program’s expected costs as of August 2019. The source for the current estimate is the most recent cost data we obtained (i.e., a department-approved life-cycle cost estimate, updated life-cycle cost estimates submitted during the resource allocation process to inform the fiscal year 2020 budget request, or a fiscal year 2019 annual life-cycle cost estimate update). Schedule Changes. This figure consists of two timelines that identify key milestones for the program. The first timeline is based on the initial APB DHS leadership approved after the department’s current acquisition management policy went into effect. The second timeline identifies when the program expected to reach its major milestones as of August 2019 and includes milestones introduced after the program’s initial APB. Dates shown are based on the program’s APB threshold dates or updates provided by the program office. Test Status. This table identifies key recent and upcoming test events. It also includes DHS’s Director, Office of Test and Evaluation’s assessment of programs’ test results, if an assessment was conducted. Staffing Profile.
This figure identifies the total number of staff a program needs (measured in full-time equivalents), including how many are considered critical and how many staff the program actually has. Lastly, each program assessment summarizes comments provided by the program office and identifies whether the program provided technical comments.

AUTOMATED COMMERCIAL ENVIRONMENT (ACE) CUSTOMS AND BORDER PROTECTION (CBP)

The ACE program is developing software that will electronically collect and process information submitted by the international trade community. ACE is intended to provide private and public sector stakeholders access to information, enhance the government’s ability to determine whether cargo should be admitted into the United States, increase the efficiency of operations at U.S. ports by eliminating manual and duplicative trade processes, and enable faster decision making. Program completed operational testing in June 2018, but cybersecurity was not tested. Collections functionality will remain in the legacy system until additional funding is provided for development. GAO last reported on this program in May 2018 and March 2018 (GAO-18-339SP, GAO-18-271). Following a cost and schedule breach in April 2017, CBP separated the ACE program’s Collections functionality—which collects and processes duties owed on imported goods—from its Core functionality to permit deployment of the other post-release capabilities, such as Liquidations and Reconciliation. CBP previously reported that officials were not versed in the complexities of collection in the legacy system and underestimated the level of effort required to integrate Collections capabilities into ACE. In August 2018, the program received Department of Homeland Security (DHS) approval to defer Collections functionality as an unfunded requirement. CBP officials said the Collections functionality will remain in the legacy system until funding for development is provided.
ACE continued deployment of the Core functionality and updated acquisition documents including the program’s acquisition program baseline (APB) and life-cycle cost estimate (LCCE) to reflect the program changes. DHS leadership approved the program’s updated APB in November 2018—removing the program from breach status. The program achieved full operational capability (FOC) for Core functionality and received acquisition decision event (ADE) 3 approval in November 2018—approximately 2 years later than initially planned. Although the program removed costs associated with Collections functionality, the program’s total APB cost threshold increased by more than $500 million from its prior APB. This cost increase is primarily the result of a change in the way the program’s threshold costs were calculated. CBP officials estimated the total cost of decoupling Collections from ACE’s remaining functionality to be $30 million. In March 2019, the program received funding and approval for ADE 2B for the first of four planned releases of Collections functionality, but did not receive funding for the remaining releases. CBP officials applied for Technology Modernization Funds (TMF). However, in September 2019, CBP officials stated that a decision on TMF funding had not yet been made. CBP officials estimated that it would take 18 months to move Collections into ACE. In June 2019, the program updated its LCCE to inform the budget process—the LCCE includes some costs for Collections functionality, but the total cost is not yet known.
• ACE Core functionality met all four of its key performance parameters.
• ACE Core functionality is operationally suitable and operationally effective with limitations, primarily because the lack of a mature mass system update function for ESAR decreased the day-to-day operational efficiency.
• Cybersecurity was not evaluated.
DOT&E recommended that the program continue the development of the ESAR capabilities to improve operational effectiveness and conduct follow-on OT&E to ensure the issues are corrected. DOT&E also recommended that the program conduct cybersecurity testing after submitting the test plan for DOT&E approval. In June 2019, CBP officials told GAO that the program plans to conduct follow-on OT&E by March 2020 and to begin cybersecurity testing in late fiscal year 2020, following the migration to cloud services. When DHS leadership re-baselined the ACE program in 2013, the program adopted an agile software development methodology to accelerate software creation and increase flexibility in the development process. The ACE program office oversees agile teams that conduct development and O&M activities. Staffing needs for ACE have decreased in the last year, which CBP officials attribute to the program completing most development efforts. These officials explained that staff from prior agile development teams were shifted to sustainment teams. In June 2019, CBP officials told GAO that, while ACE has some critical staffing gaps, these gaps have not affected program execution. CBP officials also stated that they plan to use existing contracts to address staffing needs for the Collections functionality, once funding for development is received. CBP officials provided technical comments on a draft of this assessment, which GAO incorporated as appropriate.

BIOMETRIC ENTRY-EXIT (BEE) CUSTOMS AND BORDER PROTECTION (CBP)

The BEE program is intended to verify the identities of travelers leaving the United States at air, land, and sea ports of entry using biometric data, such as facial recognition. The program has developed a capability to match photos of departing travelers to their passport photos or photos obtained upon a traveler’s arrival into the United States to identify foreign nationals who stay in the United States beyond their authorized periods of admission.
CBP is currently focused on the air segment. Program deploying capabilities beyond approved quantity without approval from leadership. CBP pursuing public/private partnerships to reduce costs. GAO last reported on this program in May 2018 and February 2017 (GAO-18-339SP, GAO-17-170). In May 2018, the Department of Homeland Security (DHS) leadership approved BEE’s initial acquisition program baseline (APB) which established the cost, schedule, and performance parameters for the air segment. DHS leadership subsequently granted the BEE program acquisition decision event (ADE) 2A approval for this segment and directed the program to return for a combined ADE 2B/C. DHS leadership delayed the program’s ADE 2B decision—which will authorize the program to initiate development of the air segment—from October 2018 to December 2018 to allow for the completion of the test and evaluation master plan (TEMP). However, in October 2018, CBP officials told GAO that the facial matching service was ready to support nationwide deployment, and the program was on track to reach its initial operational capability (IOC) of supporting 30 international flights per day by December 2018. DHS leadership approved the program’s request to remove ADE 2C—which would authorize low-rate production—from its APB and granted the program ADE 2B in December 2018. In March 2019, DHS leadership approved the program’s updated APB, which reflected schedule changes related to the TEMP, schedule slips related to the fiscal year 2019 partial government shutdown, and removal of ADE 2C. The program’s APB costs goals remained the same. CBP officials said the program plans to re-baseline and achieve ADE 3—which will authorize full-rate production—in September 2019. However, in June 2019, CBP officials told GAO the program has continued to deploy capabilities to airports and airlines—beyond those needed to achieve IOC. The BEE program is primarily funded by fees. 
Congress provided that half the amount collected from fee increases for certain visa applications from fiscal years 2016 through 2025—up to $1 billion—would be available to DHS until expended for the development and implementation of the BEE system. In February 2018, Congress extended this period through fiscal year 2027. CBP officials said the current funding structure poses challenges because fees fluctuate based on immigration rates. The program conducted an affordability analysis in 2018 that showed projected fees had fallen from $115 million per year to $56 million per year. To address the funding gap, the program reduced the number of officers conducting enforcement activities at airport departure gates and is working with CBP to identify other sources of funding.

Prior to initial OT&E, CBP had conducted a number of tests. For example, from 2013 to 2015, CBP completed a pilot of the air segment solution, among other technologies, to inform the acquisition of a BEE system. In March 2018, CBP completed developmental testing on the cloud-based facial matching service for the air segment, which demonstrated that functional requirements were met. Since 1996, several federal statutes have required development of an entry and exit system for foreign nationals. DHS has been exploring biometric exit capabilities since 2009 and an Executive Order issued in March 2017 directed DHS to expedite the implementation of the BEE system. CBP is pursuing public/private partnerships in which airlines and airports invest in the equipment to collect biometric data to reduce program costs and improve the passenger boarding process. In September 2019, CBP officials told GAO they have received commitment letters from 28 airports and airlines since March 2018 and officials expect to operate within the airports with the highest volume of international flights by October 2021.
CBP officials also told GAO that the program works independently with airlines and airports and does not seek any component or department approvals before proceeding to deploy technologies. These officials stated they proceed in this manner because program stakeholders have been highly engaged since the program’s ADE 1, internal testing results have been positive, and the congressional mandate necessitates expediency. CBP officials said the program’s current staffing level is manageable, but they will need more staff in the future to help manage planned partnerships with airlines and airports. CBP provided technical comments on a draft of this assessment, which GAO incorporated as appropriate.

BORDER WALL SYSTEM PROGRAM CUSTOMS AND BORDER PROTECTION (CBP)

The border wall system is intended to prevent the illegal entry of people, drugs, and other contraband by enhancing and adding to the 654 miles of existing barriers along the U.S. southern border. CBP plans to create a border enforcement zone between a primary barrier—such as a fence—and a secondary barrier. To establish the enforcement zone, the wall system may also include detection technology, surveillance cameras, lighting, and roads for maintenance and patrolling. Program establishes baselines as funding is received, but does not have a cost estimate to support funding plan. Current baselines do not account for all DHS and DOD border wall system construction efforts. GAO last reported on this program in July 2018 and May 2018 (GAO-18-614, GAO-18-339SP). The Department of Homeland Security (DHS) plans to establish cost, schedule, and performance goals for each individual segment of the border wall system in separate acquisition program baselines (APB) as funding becomes available. The program’s current APBs were approved in May 2019 and account for segments funded in fiscal years 2018 and 2019, totaling nearly 123 miles of border wall system. DHS leadership approved a revised APB for the two segments funded in fiscal year 2018.
This included cost and schedule goals for the replacement of an existing 14 miles of primary and secondary barriers in San Diego. It also refined the cost goals for an initial 60 mile segment in the Rio Grande Valley (RGV), because in the 2018 and 2019 Consolidated Appropriations Acts, Congress prohibited use of funds for construction in areas constituting about 4 miles. The program’s total cost for these efforts is nearly $2.2 billion. DHS leadership approved an initial APB for a second segment of nearly 53 miles in RGV in response to funding received in fiscal year 2019. The program’s total cost for this segment is approximately $2.6 billion. However, the design for this segment has not yet been approved, which could affect APB costs or schedule or both. In June 2019, to inform the budget process, the program developed a cost estimate that appears much greater than its APB goals because it reflects DHS’s funding request to Congress—not the current plans of the program. DHS officials reported that they did not have a cost estimate to support the requested amounts because the program develops acquisition documentation after funding becomes available. The current APBs do not account for related construction efforts that may limit oversight of the development of the entire border wall system. For example, in November 2018, CBP leadership was granted approval to oversee a segment replacing about 48 miles of primary pedestrian wall. Further, in February 2019, DHS requested that the Department of Defense (DOD) assist with the construction of infrastructure along the southern border. DOD agreed to provide support and is using $2.5 billion of DOD’s fiscal year 2019 funds to support these efforts. In September 2019 DOD officials identified an additional $3.6 billion, if needed. CBP officials told GAO that they provided a prioritized list of segments and construction standards to DOD, but said that they have limited insight into DOD’s planned efforts. 
[Schedule milestones: 05/19, FY 2018 APB revised / FY 2019 initial APB approved; 03/23, FY 2018 segments full operational capability (FOC)]

In November 2017, the Science and Technology Directorate’s Office of Systems Engineering completed a technical assessment on the program and identified risks related to the integration and operation of enforcement zone technologies—such as cameras and sensors—which had not been clearly defined or planned for within the wall system. It made several recommendations, including that the program coordinate with an ongoing CBP study of land domain awareness capabilities, which DHS leadership directed CBP to conduct in October 2016 to inform a comprehensive border plan. CBP previously completed testing of eight barrier prototypes to help refine the requirements and identify new design standards for barriers. However, use of CBP funding appropriated for construction of fencing in the RGV for fiscal year 2018 and 2019 is restricted to operationally effective designs deployed as of May 5, 2017. The Border Wall System Program was initiated in response to an Executive Order issued in January 2017 stating that the executive branch is to secure the southern border through the immediate construction of a physical wall on the southern border of the United States. To expedite the acquisition planning process, CBP officials said they leveraged expertise from staff that worked on previous border fencing programs and were familiar with implementation challenges, such as land access. CBP intends to prioritize segments based on threat levels, land ownership, and geography, among other things. CBP plans to continue coordinating with the U.S. Army Corps of Engineers for engineering support and for awarding and overseeing the construction contracts. CBP officials stated that land access and acquisition issues are significant challenges and could affect the program’s ability to meet its schedule goals.
CBP officials reported that the program has sufficient staff to manage the program’s work based on the funding received to date. The program’s unfilled staffing gaps are not yet funded positions. CBP officials stated that they will hire additional staff to fill the vacant positions once funding becomes available. CBP officials reviewed a draft of this assessment and provided no comments.

CROSS BORDER TUNNEL THREAT (CBTT) CUSTOMS AND BORDER PROTECTION (CBP)

The CBTT program is intended to help CBP identify, acquire, and implement operational services and technologies necessary to obtain subterranean domain awareness along the United States land border. These technologies will help CBP address existing gaps in the prediction, detection, confirmation, investigation, and remediation of cross border tunnels. CBP’s analysis of alternatives for detection capabilities identified a solution and CBP will conduct future analysis. Program performed two technology demonstrations, and CBP officials determined technologies were sufficient. GAO last reported on the program in August 2018 and May 2017 (GAO-18-550, GAO-17-474). In August 2015, the Department of Homeland Security’s (DHS) Under Secretary for Management (USM) granted the CBTT program acquisition decision event (ADE) 1 approval. The program initiated work on an analysis of alternatives (AoA) in March 2016, which considered technologies to detect four CBP classifications of illicit tunnels—rudimentary, sophisticated, mechanically bored, and interconnecting tunnels—but yielded no results. Program leadership and stakeholders subsequently determined that the AoA should be refocused to address tunnel detection threats in seven high-risk operational areas and broadened to incorporate newer tunnel detection technologies, among other things.
In May 2018, the AoA was completed and, based on its results, CBP identified a preferred system—a variation of a legacy tunnel detection system used by the Department of Defense (DOD). In June 2018, DHS leadership directed the program to continue technology demonstrations of upgrades to the legacy tunnel detection system in order to mitigate technical and operational risks and refine program requirements, including identification of the areas where the capability will be deployed. At that time, DHS leadership directed the program to return to the acquisition review board for a combined ADE 2A and 2B to establish an initial acquisition program baseline (APB) for tunnel detection capability. CBP officials said the program now plans to pursue only ADE 2A when it returns to the acquisition review board, per DHS's revised acquisition policy.

As of September 2019, the program had not yet completed key acquisition documents that will support the program's APB. CBP officials told GAO that the program experienced delays in updating the acquisition documents—including the operational requirements document—for the detection capability as a result of continued work with stakeholders. The program continues to work with stakeholders to refine end-user requirements, determine testing needs, and complete a technical assessment.

CBP officials told GAO that the program plans to use an incremental acquisition approach to address the other capability gaps. They added that the incremental approach is necessary because the capability gaps the program intends to address are broader than one system can cover.

PERFORMANCE AND TESTING
OPERATIONAL TEST AGENT (OTA): NOT APPLICABLE

The AoA results indicated the preferred detection system solution outperformed alternative systems in detection of key tunnel types and activities at operationally significant depths in high-risk areas.
The preferred detection system solution supports the program's priorities of persistent surveillance and actionable information. The AoA scope focused on the capability to detect the presence of tunneling activities and project the trajectory of discovered tunnels. Other capabilities, such as predicting tunnel location, will be addressed in future AoAs and technology demonstrations.

In June 2019, CBP officials told GAO that, in response to direction from DHS leadership, the program successfully performed two limited technology demonstrations in high-risk operational areas. The first limited technology demonstration evaluated how the preferred tunnel detection system used by DOD operated in CBP's border enforcement zone. The second limited technology demonstration, conducted by a contractor, evaluated a different system and software. Based on these technology demonstrations, CBP officials told GAO they determined the technologies were sufficient. CBP officials also told GAO the program plans to continue evaluating technologies in coordination with Border Patrol's Requirements Division.

In 2008, CBP began collaborating with the DHS Science and Technology Directorate, other federal partners, and private industry to develop and acquire tunnel detection technology. In September 2012, the DHS Inspector General found that CBP did not have the technological capability to detect illicit cross-border tunnels routinely and accurately. DHS leadership subsequently approved the CBTT Mission Needs Statement, which identified six capabilities: predict the location of illicit tunnels; detect the presence of suspected tunnels and tunneling activity and project the trajectory of a discovered tunnel; confirm a tunnel's existence and map its location and measurements; investigate and exploit tunnels and tunnel activity; remediate discovered tunnels; and coordinate information sharing on tunnel threats.
CBP officials stated that the CBTT Concept of Operations (CONOPS) was approved in June 2019. CBP officials also stated that the development of the CONOPS was informed by market research and AoA activities. CBP officials provided technical comments on a draft of this assessment, which GAO incorporated as appropriate.

INTEGRATED FIXED TOWERS (IFT)
CUSTOMS AND BORDER PROTECTION (CBP)

The IFT program helps the Border Patrol detect, track, identify, and classify illegal entries in remote areas. IFT consists of fixed surveillance tower systems equipped with ground surveillance radar, daylight and infrared cameras, and communications systems linking the towers to command and control centers. CBP plans to deliver or upgrade approximately 48 IFT systems across six areas of responsibility (AoR) in Arizona: Nogales, Douglas, Sonoita, Ajo, Tucson, and Casa Grande.

System acceptance testing was completed in the Sonoita AoR, and all systems were accepted by the program. Border Patrol requested CBP add camera suites to address tower reductions in the Ajo and Casa Grande AoRs. GAO last reported on this program in May 2018 and November 2017 (GAO-18-339SP, GAO-18-119).

The program declared a potential schedule breach in December 2017 because the program did not receive funding from the Department of Homeland Security (DHS) to address new IFT requirements, including camera upgrades and replacement of existing tower systems deployed in Tucson and Ajo under a legacy program. In January 2018, CBP officials updated the program's affordability analysis to reflect a reduction of IFT tower deployments, which mitigated the potential schedule breach. Specifically, a resolution passed within the Tohono O'odham Nation—a sovereign Native American Nation—reduced the number of IFT tower systems CBP can deploy on the Nation's land from 15 to 10. This reduction mitigated the funding shortfall that had put the program at risk of not achieving full operational capability (FOC) in September 2020.
In February 2019, CBP declared a schedule breach of the program's current acquisition program baseline (APB) as a result of delays in negotiations with the Tohono O'odham Nation regarding access to tribal lands to construct towers and deploy IFT systems in the Ajo and Casa Grande AoRs. CBP subsequently reached an agreement with the Nation in March 2019. DHS leadership directed the program to revise its APB to reflect changes in tower deployments. CBP officials told GAO they submitted a revised APB to DHS leadership in June 2019, but as of September 2019 it had not yet been approved. CBP officials anticipate the program's FOC date will slip to March 2021 as a result of these actions.

In June 2019, the program updated its life-cycle cost estimate (LCCE) to inform the budget process. The updated LCCE includes estimated costs for camera upgrades and accounts for the reduction in IFT systems.

CBP completed deployments in the Sonoita AoR in October 2017 and replaced legacy systems in the Tucson and Ajo AoRs in September 2018 and December 2018, respectively. In January 2015, Border Patrol requested the program prioritize replacing these legacy systems because the technology was obsolete and more expensive to maintain than the IFT technology planned for deployment in other AoRs.

Schedule milestone: 10/15, initial operational capability (Nogales).

Previously, the OTA found that the program met only 2 of its 3 key performance parameters (KPP) and experienced five operational deficiencies during a limited user test conducted in the Nogales AoR in November 2015. However, program and Border Patrol officials did not concur with several of the test results and reported deficiencies with the testing. DHS's Director, Office of Test and Evaluation did not conduct a formal assessment of the test results because full deployment of the IFT program had already been authorized.
Program officials do not plan to conduct additional testing at this time because the program does not have any new requirements. Program officials also stated that if requirements were added, the program would need to conduct additional testing.

When CBP initiated the IFT program, it decided to procure a non-developmental system, and it required that prospective contractors demonstrate their systems prior to CBP awarding the contract. The program awarded the contract to EFW, Inc. in February 2014, but the award was protested. GAO sustained the protest, and CBP reevaluated the offerors' proposals before it decided to re-award the contract to EFW, Inc. As a result, EFW, Inc. could not initiate work at the deployment sites until fiscal year 2015.

According to CBP officials, the number of IFT towers deployed to a single AoR is subject to change based on Border Patrol assessments. Border Patrol was briefed on and approved the reduction of towers within tribal lands. To mitigate capability gaps resulting from the tower reduction, Border Patrol requested the program deploy two additional IFT camera suites in Ajo.

DHS leadership directed CBP to develop a border technology plan that includes IFT capabilities. According to CBP officials, the plan calls for an additional 11 AoRs and 35 IFTs. Although the program has not yet received funding for expansion to the 11 AoRs, in September 2018, CBP officials stated they began updating acquisition documents. CBP officials also stated the program does not have a staffing gap, but will require additional staff if funding for the expansion to the 11 AoRs is received. CBP officials provided technical comments on a draft of this assessment, which GAO incorporated as appropriate.

MEDIUM LIFT HELICOPTER (UH-60)
CUSTOMS AND BORDER PROTECTION (CBP)

UH-60 is a medium-lift helicopter that CBP uses for law enforcement and border security operations, air and mobility support and transport, search and rescue, and other missions.
CBP’s UH-60 fleet consists of 20 aircraft acquired from the U.S. Army in three different models. CBP previously acquired 4 modern UH-60M aircraft and converted 6 of its older 16 UH-60A aircraft into more capable UH-60L models. CBP is replacing the remaining 10 UH-60A with reconfigured Army HH-60L aircraft. Flight acceptance testing for the first reconfigured aircraft completed in February 2018. Program is assessing additional medium lift capability requirements. GAO last reported on this program in May 2018 (GAO-18-339SP). In July 2018, Department of Homeland Security (DHS) leadership granted the program acquisition decision event (ADE) 3 approval and approved the replacement of CBP’s remaining UH-60A aircraft for reconfigured Army HH-60L aircraft. CBP will begin replacing its UH-60A model aircraft on a one-to-one basis as the reconfigured Army HH-60Ls are delivered. DHS leadership previously approved the transfer of three reconfigured HH-60Ls. According to CBP officials, the ADE 3 approval to replace the remaining seven aircraft was based on the evaluation of an initial reconfigured prototype, which was delivered in 2018. CBP officials anticipate that the second and third reconfigured HH-60Ls will be delivered in fiscal year 2020. The program re-baselined as part of the ADE 3 approval process, removing it from breach status. The program previously experienced cost increases after accommodating a change in DHS’s appropriations structure and schedule slips because of a directive from DHS to develop a comprehensive border plan, which contributed to delays in getting approvals for some of the documents required for ADE 3. The program also anticipated delays in delivery for the second reconfigured HH-60L because of a redesign to be compliant with federal aviation regulations. 
DHS leadership and CBP officials determined that the effect of the schedule breach was minimal because the program was able to adjust its schedule so that the second and third reconfigured HH-60Ls can be accepted concurrently. The program still plans to achieve full operational capability (FOC) in September 2022, once all 10 of the reconfigured HH-60L aircraft are accepted and deployed.

The program updated its life-cycle cost estimate (LCCE) to inform the program's revised acquisition program baseline (APB). The program's acquisition cost thresholds increased by nearly $100 million, and the operations and maintenance (O&M) cost thresholds decreased by approximately $15 million. These changes reflect updates to aircraft operational hours and the results of the Army's annual obsolescence study, among other things. The updated LCCE also removes personnel costs included in the program's initial APB, which CBP officials previously told GAO are funded through a separate, central funding account for all of CBP's air and marine assets.

CBP determined that the converted UH-60L and UH-60M aircraft met all five of the program's key performance parameters (KPP) through operational test and evaluation (OT&E) conducted in fiscal years 2012 and 2014. However, DHS's Director, Office of Test and Evaluation (DOT&E) did not validate these results because UH-60 was not considered a major acquisition when the tests were conducted.

In January 2016, DHS leadership directed the program to conduct acceptance functional flight checks on a reconfigured HH-60L prototype prior to receiving approval to proceed with the remaining replacements. This testing concluded in February 2018. Testers rated the aircraft's performance, handling, and systems integration as excellent, but found a deficiency in the intercom system.
The Army designed a fix that is being incorporated into the second and third reconfigured HH-60L aircraft and will be retrofitted into the prototype. CBP does not plan to conduct formal OT&E on the reconfigured HH-60L because, according to CBP officials, the aircraft has minimal differences from the converted UH-60L aircraft that was previously tested. CBP officials also stated that the program has been able to leverage Army test data, which reduced the risk and testing costs associated with the program. These officials noted that CBP plans to conduct additional testing on the second reconfigured HH-60L to verify design changes and that CBP pilots will perform additional inspections prior to accepting all future aircraft.

In July 2018, DHS leadership directed CBP to address requirements for additional medium-lift capability, including coordinating with Department of Defense and DHS stakeholders, such as the U.S. Coast Guard, that also maintain a fleet of H-60 aircraft. CBP officials stated a desire to replace its other medium lift helicopters, as they are retired from the fleet, with additional reconfigured HH-60L aircraft. This would not increase the overall number of medium lift helicopters, but would increase the number of UH-60 aircraft. If the number of UH-60 aircraft increases, the program will need to seek approval from DHS and extend its FOC date. In April 2019, CBP updated its interagency agreement with the Army to support completing the program's currently approved quantity. According to CBP officials, this agreement could support acquiring additional reconfigured HH-60Ls if approved by DHS.

CBP previously acquired UH-60 as a part of its Strategic Air and Marine Program (StAMP). In July 2016, DHS leadership designated UH-60 as a separate and distinct major acquisition program. In October 2018, CBP officials told GAO they continue to maintain a consolidated program office where the same staff from StAMP support all remaining acquisitions, including UH-60.
CBP officials said they have refined the program's staffing profile and taken steps to mitigate the staffing gap. For example, in June 2019, CBP officials said they had hired four new employees and established a memorandum of agreement with CBP's Office of Acquisition for matrixed support to assist with developing acquisition documents, as needed. CBP officials stated that as of August 2019, DHS's Joint Requirements Council had validated a requirement for 35 total medium lift helicopters, and the program office is working on a strategy to achieve that inventory target. CBP officials also provided technical comments on a draft of this assessment, which GAO incorporated as appropriate.

MULTI-ROLE ENFORCEMENT AIRCRAFT (MEA)
CUSTOMS AND BORDER PROTECTION (CBP)

MEA are fixed-wing, multi-engine aircraft that can be configured to perform multiple missions, including maritime, air, and land interdiction, as well as signals detection to support law enforcement. The maritime and air interdiction MEA are equipped with search radar and an electro-optical/infrared sensor to support maritime surveillance and airborne tracking missions. MEA will replace CBP's fleet of aging C-12, PA-42, and BE-20 aircraft.

The air interdiction configuration is operationally effective and suitable with limitations; cyber testing is not complete. The program is developing requirements for the next configuration and pursuing a total of 38 MEA. GAO last reported on this program in May 2018 (GAO-18-339SP).

In February 2019, Department of Homeland Security (DHS) leadership approved a revised acquisition program baseline (APB), which increased the program's quantity to 29 MEA: 16 previously approved maritime interdiction MEA and 13 additional air interdiction MEA. CBP officials told GAO they also requested approval to acquire all remaining air interdiction MEA.
However, in April 2019, DHS leadership directed CBP to complete follow-on operational test and evaluation (OT&E) of the air interdiction configuration and undergo an acquisition decision event (ADE) 3 review before the program could receive full-rate production approval.

DHS leadership previously approved CBP's request to procure additional aircraft in the air interdiction configuration that exceeded the program's initial baseline of 16 MEA. Specifically, DHS leadership approved procurement of MEA 17 in September 2017 after congressional conferees agreed to an additional aircraft beyond DHS's budget request. In addition, DHS leadership approved MEA 18-20 in August 2018. CBP officials told GAO it was necessary to procure additional MEA to maintain the production schedule for already ordered aircraft.

CBP officials accepted delivery of MEA 16 in February 2019—completing delivery of all maritime interdiction configured MEA. CBP officials said the program experienced a delay of a few months in the delivery of MEA 13-16 because the contractor began laying off staff prior to the program receiving DHS leadership approval to acquire MEA 18-20. According to CBP officials, the program will need to receive ADE 3 approval to procure the remaining air interdiction MEA before the end of September 2019 to avoid future production issues.

The program's revised APB extends the program's full operational capability (FOC) date by nearly 7 years to account for the production and delivery of the air interdiction aircraft. The program updated its life-cycle cost estimate (LCCE) in September 2018 to inform its revised baseline. This estimate decreased by approximately $1.4 billion from the program's previous LCCE due to a reduction in planned flight hours and in the number of total aircraft—from the program's proposed end state of 38 MEA to the 29 included in its revised APB.
The program previously met all five of its key performance parameters (KPP) for the maritime interdiction configuration. The program established two additional KPPs for the air interdiction configuration related to radar detection. According to CBP officials, the only difference between the maritime and air interdiction configurations is the radar software. The MEA's new mission system processor was tested in July 2015 on the maritime interdiction configuration.

During the first phase of follow-on OT&E, the program met the two air interdiction KPPs. In August 2019, DHS's Director, Office of Test and Evaluation (DOT&E) assessed the results and found the air interdiction radar software to be operationally effective but operationally suitable with limitations, primarily because of a lack of spare parts, which affects the mission readiness of the MEA fleet. DOT&E recommended that the program develop a maintenance program to better track failure rates and project spare requirements, purchase spares at the level necessary to support the fleet, and complete OT&E of cyber resilience, among other things.

In April 2016, CBP identified capability needs in three additional mission areas and proposed increasing the program's total to 38 MEA by adding 13 air interdiction MEA (reflected in the February 2019 APB), six land interdiction MEA, and three signals detection MEA. The Joint Requirements Council endorsed CBP's findings, but recommended CBP develop a number of requirements documents—including an operational requirements document (ORD)—to fully validate the findings. In June 2019, CBP officials said they had begun developing requirements for the land interdiction MEA—the next configuration the program plans to pursue.

CBP previously acquired MEA as a part of its Strategic Air and Marine Program (StAMP). In July 2016, DHS leadership designated MEA as a separate and distinct major acquisition program.
In October 2018, CBP officials told GAO they continue to maintain a consolidated program office where the same staff from StAMP support all remaining acquisitions, including MEA. CBP officials said they have refined the program's staffing profile and taken steps to mitigate the staffing gap. For example, in June 2019, CBP officials said they had hired four new employees and established a memorandum of agreement with CBP's Office of Acquisition for matrixed support to assist with developing acquisition documents, as needed. CBP officials previously told GAO that the staffing gap contributed to delays in developing acquisition documentation for the air interdiction MEA. CBP officials provided technical comments on a draft of this assessment, which GAO incorporated as appropriate.

NON-INTRUSIVE INSPECTION (NII) SYSTEMS
CUSTOMS AND BORDER PROTECTION (CBP)

The NII Systems program supports CBP's interdiction of weapons of mass destruction, contraband such as narcotics, and illegal aliens being smuggled into the United States, while facilitating the flow of legitimate commerce. CBP officers use large- and small-scale NII systems at air, sea, and land ports of entry; border checkpoints; and international mail facilities to examine the contents of containers, railcars, vehicles, baggage, and mail.

CBP is evaluating technologies to increase efficiencies and address capability gaps. Staffing challenges pose risk to current program execution and to planning for the follow-on to the NII program. GAO last reported on this program in May 2018 (GAO-18-339SP).

The NII Systems program is on track to meet its approved cost and schedule goals. The Consolidated Appropriations Act, 2019, included $570 million of acquisition funding for the NII program—$520 million above the President's budget level.
CBP officials told GAO they plan to use the additional acquisition funding primarily to increase scanning capability at land ports of entry along the southwest border by recapitalizing some large-scale capabilities and deploying additional small-scale capabilities.

The program updated its life-cycle cost estimate (LCCE) in June 2018. The program's acquisition costs remain within its acquisition program baseline (APB) cost thresholds and continue to decrease. Compared to the prior year's estimate, the program's acquisition costs decreased by $81 million and operations and maintenance costs increased by $33 million. However, the LCCE update only estimated costs through fiscal year 2026—9 years short of the program's final year. In June 2019, CBP officials told GAO that they were in the process of updating the program's LCCE. These officials stated that they plan to extend the LCCE through the program's final year and adjust program costs based on program changes made in response to the additional funding received.

CBP plans to deploy full operational capability (FOC) quantities of 342 large- and 5,455 small-scale NII systems in fiscal year 2020—4 years earlier than the program's current APB threshold date. In November 2018, Department of Homeland Security (DHS) leadership decided that once FOC quantities for large- and small-scale systems are deployed, CBP will initiate a transfer of the NII program to the operational activity for sustainment efforts. In addition, once FOC quantities are deployed, DHS leadership determined that CBP may adjust large- and small-scale NII deployment quantities in excess of FOC with similarly capable systems to address changing capacity needs and emerging threats.

CBP is assessing requirements to address capability gaps, such as increased throughput. In June 2019, CBP officials reported that some technologies being assessed can be procured through the current NII program because CBP considers them to be similarly capable systems.
However, these officials also told GAO that CBP is developing acquisition documents to inform a follow-on NII program for other technologies. CBP officials are coordinating with DHS's Science and Technology Directorate to evaluate technologies and concepts of operation to increase efficiencies and address capability gaps. CBP officials said that they will incorporate these solutions in a new acquisition program as a follow-on to NII. The NII Systems program is developing a technology demonstration plan to detail how pilot project demonstrations will inform decisions regarding future acquisitions of NII systems technology.

CBP is in the process of assessing requirements to inform the follow-on NII program. In March 2017, the Joint Requirements Council (JRC) validated a capability analysis report (CAR) that assessed capability gaps in NII operations to assist with identifying potential upgrades to existing systems and developing requirements for future systems. DHS leadership approved a new NII Mission Needs Statement (MNS) in August 2018, which updated the capability gaps identified in the CAR and described mission needs and capabilities to address the gaps. The JRC endorsed the MNS, but recommended that CBP address cybersecurity threats and vulnerabilities as requirements and solutions evolve, and also include the Transportation Security Administration—which leverages some of the same equipment to perform its mission—in defining requirements, among other things.

CBP officials told GAO that they are developing acquisition documentation to inform acquisition decision event 1 for the follow-on NII program planned for September 2019, including a concept of operations and an initial cost estimate.

CBP's ability to successfully execute the existing NII Systems program and plan for future efforts may be at risk because of understaffing. As of September 2019, the program continued to face a staffing gap of approximately 21 percent.
CBP officials said that they plan to mitigate the gap with government personnel from other offices within the component and with contractor support. CBP officials provided technical comments on a draft of this assessment, which GAO incorporated as appropriate.

REMOTE VIDEO SURVEILLANCE SYSTEM (RVSS)
CUSTOMS AND BORDER PROTECTION (CBP)

RVSS helps the Border Patrol detect, track, identify, and classify illegal entries across U.S. borders. RVSS consists of daylight and infrared video cameras mounted on towers and buildings, with communications systems that link to command and control centers. From 1995 to 2005, CBP deployed approximately 310 RVSS towers along the U.S. northern and southern borders, and it initiated efforts to upgrade legacy RVSS towers in Arizona in 2011.

Diesel generators that power relocatable towers cause vibrations that could impact mission operations. Once funded, the program plans to award a contract for additional deployments along the southwest border. GAO last reported on this program in May 2018 and November 2017 (GAO-18-339SP, GAO-18-119).

In April 2016, Department of Homeland Security (DHS) leadership elevated RVSS from a level 3 program—which focused on upgrading legacy RVSS in Arizona—to a level 1 program after approving CBP's plan to expand deployments to the Rio Grande Valley (RGV) sector and add six additional sectors along the southwest border: Laredo, Del Rio, Big Bend, El Paso, El Centro, and San Diego. DHS leadership approved the program to move forward with deployments at two Border Patrol stations within the RGV, which can be completed as options under the program's existing contract, if exercised. However, DHS leadership also directed the program to re-baseline to account for its expanded scope and conduct an acquisition decision event (ADE) 2A to obtain approval for additional deployments.
CBP officials previously told GAO the program anticipated conducting its ADE 2A and obtaining DHS leadership approval for an acquisition program baseline (APB) establishing cost, schedule, and performance goals for the expanded program by December 2018. As of September 2019, the program had not yet received approval for the key acquisition documents needed to conduct ADE 2A, including the APB, but CBP officials anticipate approval of these documents by March 2020. CBP officials primarily attribute these delays to a lack of funding for the additional deployments. CBP officials said the upcoming APB will include only deployments to Arizona and the RGV sector, to align with funding received. Future deployments will require additional APB updates, which CBP officials said would be developed as funding becomes available.

In June 2019, the program updated its life-cycle cost estimate (LCCE) to inform the budget process. The updated LCCE included the expansion to the six sectors along the southwest border, relocatable RVSS towers, and operations and maintenance costs for previously fielded systems. However, CBP officials told GAO the LCCE is in the process of another update, which will inform the upcoming APB and include the expansion to additional sectors along the southwest border and upgrades to legacy RVSS towers.

CBP completed a pilot of five relocatable RVSS towers in June 2018, which included a comparison of vibration data measured on camera mounts for relocatable towers and fixed towers. The assessment showed that diesel generators used to recharge batteries in the relocatable towers caused significant vibrations, which caused cameras to shake and can affect operators' ability to execute the mission.
To address the issues stemming from the vibrations, CBP officials said they have connected the five relocatable towers to grid power when they are in use and plan to require solar power sources for future relocatable towers.

In July 2013, CBP awarded a firm fixed-price contract for a commercially available, non-developmental system. This contract covered the program's initial scope to deploy upgraded RVSS in Arizona and included options for some initial work within the RGV sector. According to CBP officials, the program will need to award a new contract to cover expansion to the remaining six sectors along the southwest border. CBP officials drafted the request for proposals for the new contract, but it cannot be released until funding is received.

CBP officials said the program is experiencing challenges in the RGV sector related to land acquisition. The U.S. Army Corps of Engineers is leading efforts to acquire land for RVSS and other border security programs, including the Border Wall System Program (BWSP). CBP officials told GAO that the RVSS program is coordinating with BWSP on its planned deployments within the RGV sector. Program officials anticipate that some RVSS towers will be co-located within the border wall. In the interim, CBP officials said the program is using short-term agreements with landowners to place relocatable towers in areas where border wall construction is planned. These officials reported that the short-term agreements provide flexibility for the placement of towers and can be completed more quickly than permanent agreements.

CBP officials stated that the program's current staffing plan was based on receiving funding for the expansion to the RGV. Program officials said they will address staffing needs once additional funding is received, but current operations have not been affected by the staffing gap. CBP officials provided technical comments on a draft of this assessment, which GAO incorporated as appropriate.
CUSTOMS AND BORDER PROTECTION (CBP)

The TACCOM program is intended to upgrade land mobile radio infrastructure and equipment to support approximately 95,000 users at CBP and other federal agencies. It is replacing obsolete radio systems with modern digital systems across various sectors located in 19 different service areas, linking these service areas to one another through a nationwide network, and building new communications sites to expand coverage in five of the 19 service areas. CBP officials reported that prior software issues have been addressed. The program continues to face staffing challenges due to competition from the private sector, among other things. GAO last reported on this program in May 2018 (GAO-18-339SP). In September 2018, the TACCOM program achieved full operational capability (FOC)—nine months later than initially planned. However, in July 2018, the program's operational test authority (OTA) conducted a survey of end users and concluded that there were still large gaps in the coverage that TACCOM capabilities were intended to address. CBP officials stated that limited funding has affected the program's ability to address the remaining gaps in coverage. Department of Homeland Security (DHS) leadership previously approved a re-baseline of the TACCOM program in November 2017 after it experienced a schedule slip and cost growth. In July 2017, CBP officials notified DHS leadership that the program would not achieve FOC as planned due to issues related to federal information security requirements. In addition, the program experienced cost growth as a result of increased contractor labor costs and support for facilities and infrastructure. In November 2017, DHS's Chief Financial Officer (CFO) approved the program's revised life-cycle cost estimate (LCCE). At that time, DHS's CFO noted that the program's estimate exceeded its available funding and requested that the program address the affordability gap before it was re-baselined.
Nevertheless, DHS leadership approved the program's revised acquisition program baseline (APB). CBP officials subsequently identified errors in the approved APB cost threshold tables and provided revised amounts, which are presented here. In September 2018, program officials told GAO that they completed an affordability analysis and submitted it to CBP and DHS leadership. CBP officials reported that the funding the program received in 2018 and carryover funds from prior years decreased the program's affordability gap. However, CBP reported that in future years, funding gaps will require the program to reduce operations and maintenance requirements to match the appropriated funding and will continue to limit the program's ability to address coverage gaps. In May 2014, DHS's Director, Office of Test and Evaluation determined that the TACCOM systems were operationally effective, but test data were insufficient to determine operational suitability. The program's OTA subsequently found that the TACCOM systems were operationally effective and suitable based on the results of an operational assessment (OA) completed in June 2016. CBP officials told GAO that in January 2018, the program moved from a mission support office to a joint program office under Border Patrol as part of CBP's reorganization. The goal of this move was to make CBP land mobile radio capabilities seamless by combining the mission-critical voice functions within Air and Marine Operations, the Border Patrol, and the Office of Field Operations—the TACCOM program's primary customers—under one organizational leader: the Border Patrol Chief. In September 2018, CBP officials told GAO that the program reorganized its staff as it transitioned to an office under Border Patrol.
CBP officials reported that hiring and retaining qualified land mobile radio engineers and information technology technical staff is a challenge because of lengthy hiring timeframes and competition with the private sector. CBP officials stated that the TACCOM upgrades improved interoperability, coverage, capacity, reliability, and encryption to provide critical communications support to the agents and officers who secure the nation's borders. The program continues to provide land mobile radio system maintenance, including operation, sustainment, and performance monitoring, to ensure reliable and consistent border protection communications. CBP officials also provided technical comments on a draft of this assessment, which GAO incorporated as appropriate.

CUSTOMS AND BORDER PROTECTION (CBP)

TECS (not an acronym) is a law-enforcement information system that has been in place since the 1980s and that helps CBP officials determine the admissibility of persons entering the United States at border crossings, ports of entry, and prescreening sites located abroad. CBP initiated efforts to modernize TECS to provide users with enhanced capabilities for accessing and managing data. Costs increased by $400 million in the revised cost estimate due to an extended sustainment timeframe. CBP is working to address and prevent major system outages. GAO last reported on this program in May 2018 (GAO-18-339SP). Department of Homeland Security (DHS) leadership approved the fourth version of the program's acquisition program baseline (APB) in July 2016. In this APB, CBP split full operational capability (FOC) into two separate operational capability milestones to better reflect the program's activities at its primary and secondary data centers. CBP delivered operational capability at the primary data center and transitioned all remaining TECS users to the modernized system in December 2016. CBP delivered operational capability at the secondary data center in June 2017—as scheduled.
This data center provides redundant TECS access to minimize downtime during system maintenance or unscheduled outages. However, not all test results were available in time for the program's acquisition decision event (ADE) 3 decision. In August 2017, DHS leadership directed CBP to conduct follow-on operational test and evaluation (OT&E) activities to address known issues and conduct cybersecurity OT&E. The program completed follow-on OT&E in October 2018. DHS's Director, Office of Test and Evaluation (DOT&E) completed an assessment of the test results in June 2019—which is intended to inform acquisition decisions. In June 2019, the program's annual life-cycle cost estimate (LCCE) was updated in accordance with DHS's guidance to include operations and maintenance (O&M) costs for 10 years past the program's planned FOC date. The updated LCCE includes program costs through fiscal year 2028—7 years longer than the prior LCCE and the program's current APB cost goals. However, the LCCE update does not include estimated costs for all program plans, such as migrating the data centers to a cloud infrastructure. CBP officials plan to incorporate these costs into future LCCE updates when requirements are better defined. The program's O&M costs increased and exceeded the program's APB O&M cost threshold by approximately $400 million. DHS officials stated that the additional O&M costs do not constitute a cost breach because the program is considered to be in the O&M phase of the acquisition life cycle. DOT&E found similar results for operational effectiveness and operational suitability during OT&E in July 2017, but tests were not adequate to assess operational cybersecurity. The test results validated that the program had met all eight of its key performance parameters (KPP), but the test team identified several deficiencies related to mission support.
In response, DOT&E recommended that CBP conduct a threat assessment, threat-based cybersecurity operational testing, and follow-on OT&E. DHS leadership directed the program to complete these actions by February 2018, but this testing was not completed until October 2018. CBP officials attributed the delays to a lack of understanding of the level of effort required to draft the OT&E plan and supporting documents. Since the program has completed development, CBP is focused on ensuring that the modernized TECS system works as intended by addressing operational issues as they are identified. For example, in January 2017, TECS Modernization experienced a major outage that resulted in airport delays. CBP officials previously said that they continually monitor system health through a 24/7 operations center and have established a group dedicated to addressing system issues. In November 2017, DHS's Office of Inspector General (OIG) found that CBP took sufficient steps to resolve the January 2017 outage, but underlying issues could result in future outages, including inadequate software capacity testing and deficient software maintenance. The OIG made five recommendations for CBP to implement improvements. CBP concurred with four of the recommendations but did not concur with a recommendation regarding CBP's need to ensure staff make timely notifications of critical vulnerabilities to operating systems. CBP reported that the program's notification activities were within DHS's vulnerability management policy windows for testing and deploying software patches that were not deemed critical. Further, in September 2017, the DHS OIG found that nearly 100 outages, periods of latency, or instances of degraded service were reported for TECS Modernization applications between June 2016 and March 2017, and recommended that CBP develop a plan to address factors that contributed to these challenges. CBP concurred with the recommendation.
CBP officials provided technical comments on a draft of this assessment, which GAO incorporated as appropriate.

CONTINUOUS DIAGNOSTICS AND MITIGATION (CDM) CYBERSECURITY AND INFRASTRUCTURE SECURITY AGENCY (CISA)

The CDM program aims to strengthen cybersecurity of the federal government's networks by continually monitoring and reporting vulnerabilities at more than 65 civilian agencies. CDM provides four capabilities: Asset Management reports vulnerabilities in hardware and software; Identity and Access Management focuses on user access controls; Network Security Management will report on efforts to prevent attacks; and Data Protection Management will provide encryption to protect network data. The program revised its key performance parameters to better align with cybersecurity standards. The program began using a new contract vehicle and is hiring additional staff to support new capabilities. GAO last reported on this program in May 2018 (GAO-18-339SP). According to CISA officials, as a result of the 2019 partial government shutdown, the program experienced delays that impacted its ability to achieve initial operational capability (IOC) for the Identity and Access Management and Network Security Management capabilities as planned. In response, Department of Homeland Security (DHS) leadership approved a 3-month extension to both milestones. As a result, the IOC threshold date for the Identity and Access Management capabilities was extended to June 2019, and the program achieved IOC that month. The IOC threshold date for Network Security Management was extended to December 2019. The program updated its life-cycle cost estimate (LCCE) in April 2019 to inform the budget process. This estimate exceeds the program's current operations and maintenance (O&M) and total life-cycle cost thresholds by approximately $300 million and $100 million, respectively.
The program’s cost increase is primarily attributed to evolving requirements described in the explanatory statements accompanying recent Appropriations Acts and the Office of Management and Budget (OMB). Specifically, CISA officials said the program received $110 million above the Presidential Budget Request and noted this was to accelerate procurement of CDM capabilities for additional agencies not in the original program scope and accelerate mobile and cloud computing visibility across the .gov domain, among other things. In addition, the program received funding in 2018 and 2019 after OMB directed that the CDM program cover certain costs of sustaining licenses for supported agencies, which CISA officials estimate will cost the program an additional $62 million. The program also estimates that O&M costs for these additional requirements will require a total of an additional $79 million in future years. In May 2019, CISA officials said the program is updating key acquisition documentation, such as its acquisition program baseline (APB) and LCCE, to inform acquisition decision event (ADE) 2B for Data Management Protection capabilities. They noted that the updated acquisition documents will account for the increased demand for CDM services. The program previously planned to achieve this ADE 2B by March 2019. However, due in part to the partial government shutdown, the program now plans to achieve the ADE 2B in 2020. Cybersecurity and Infrastructure Security Agency (CISA) CONTINUOUS DIAGNOSTICS AND MITIGATION (CDM) The CDM program is only authorized to conduct testing on DHS networks, which means the other departments and agencies are responsible for testing the CDM tools on their own networks. CISA officials reported that four other agencies have either conducted or plan to conduct operational studies, which provided the program with informal observations on implementation and was used to support IOC for the Identity and Access Management capability. 
Under the program’s revised test and evaluation master plan, the OTA plans to perform operational assessments (OA) on DHS’s network to incrementally demonstrate each capability as it is deployed and to reduce risk prior to conducting formal program-level operational test and evaluation. Specifically, the program completed an OA for the Identity and Access Management capability and expected the letter of assessment from DOT&E by June 2019. In addition, the program expects to begin a technology assessment for the Data Protection Management capability by September 2019. The CDM program updated its acquisition plan to reflect a change in strategy for procuring CDM tools and services. Previously, the program used blanket purchase agreements established by the General Services Administration (GSA) Federal Supply Schedule. CISA officials told GAO that in February 2018 the program began using an existing GSA government-wide acquisition contract and as of August 2019, the program has awarded 5 of 6 planned task orders to obtain CDM tools and services on behalf of participating agencies. According to CISA officials, the new acquisition strategy is intended to provide greater flexibility in contracting for current capabilities and to support future capabilities. Participating agencies will also be able to order additional CDM-approved products or services from GSA’s schedule for information technology equipment, software, and services. The program previously used the term “phases” and renamed the phases in the fall of 2018 to align with the associated capabilities it deploys. CISA officials explained that a phased deployment implied a serial implementation; however, CDM capabilities can be deployed in parallel. The program is not currently experiencing workforce challenges. The program received approval for 29 new positions to address staffing needs for the Network Security Management and Data Protection Management capabilities. 
Officials plan to fill those positions in fiscal years 2019 and 2020. CISA officials stated that in addition to efforts identified in this assessment, the program continues to manage its budget to ensure program costs match available funding and is leveraging the collective buying power of federal agencies and strategic sourcing to continue achieving government cost savings on CDM products. CISA officials also provided technical comments on a draft of this assessment, which GAO incorporated as appropriate.

NATIONAL CYBERSECURITY PROTECTION SYSTEM (NCPS) CYBERSECURITY AND INFRASTRUCTURE SECURITY AGENCY (CISA)

NCPS is intended to defend the federal civilian government from cyber threats. NCPS develops and delivers capabilities through a series of "blocks." Blocks 1.0, 2.0, and 2.1 are fully deployed and provide intrusion-detection and analytic capabilities across the government. The NCPS program is currently deploying EINSTEIN 3 Accelerated (E3A) to provide intrusion-prevention capabilities and plans to deliver block 2.2 to improve information sharing across agencies. Program capabilities were determined to be operationally suitable, effective, and cyber resilient with limitations. Staffing challenges may impact program execution. GAO last reported on this program in May 2018 (GAO-18-339SP). In February 2018, the Department of Homeland Security's (DHS) Under Secretary for Management (USM) granted NCPS acquisition decision event (ADE) 3 approval for E3A to transition to sustainment and ADE 2C approval for block 2.2 to deploy additional capabilities. DHS's USM also directed NCPS to address several issues identified during test events that informed the ADEs, including the following: For E3A—conduct follow-on operational test and evaluation (OT&E) by March 2019 to assess cybersecurity, among other things. For block 2.2—review the operational requirements document (ORD) and concept
of operations (CONOPS) to ensure they accurately reflect the mission environment and processes, review current and planned capabilities to ensure they will adequately address the ORD and CONOPS, and conduct another operational assessment (OA) prior to initial OT&E. The program revised its acquisition program baseline (APB) in January 2018 in preparation for the ADEs. However, the program updated its APB again in October 2018 to address an error found in the life-cycle cost estimate (LCCE), to add an additional 2 years of program costs, and to revise the approach to estimating threshold costs. Specifically, the LCCE that provided the basis for the program's APB cost goals did not accurately account for the program's sunk costs. Once corrected, the program's total life-cycle cost threshold was $5.9 billion—more than $1.7 billion more than in the program's January 2018 APB. CISA officials reported that while correcting the sunk costs increased the APB cost goals, the change did not affect estimating future costs and, therefore, will not impact program affordability. In March 2019, to inform the budget process, the program updated its corrected LCCE—which is within its current APB cost goals. In the program's January 2018 APB, the ADE 3 date for block 2.2 slipped by 2 years—from March 2019 to March 2021—compared to its prior APB. According to CISA officials, this milestone was revised due to bid-protest-related delays involving the award of the program's development, operations, and maintenance contract. CISA officials said that due to several protests, the award was delayed until June 2018—nearly 3 years later than planned. The program completed follow-on OT&E for E3A, which included an assessment of cyber resilience for only one of the program's three internet service providers.
In June 2019, DOT&E determined E3A to be operationally effective, suitable, and cyber resilient with limitations. In June 2019, CISA officials stated they were working on enhancements to address E3A effectiveness by integrating automated information sharing solutions and data analysis tools, among other things. In January 2018, DOT&E determined that it was too soon to assess block 2.2 based on the OA results from October 2017, but noted block 2.2 was at risk of not meeting user needs and made a number of recommendations, including reviewing the ORD and CONOPS and repeating the OA before conducting initial OT&E. CISA officials told GAO that the operator's processes had changed since the initial ORD and CONOPS were approved. These officials said they plan to revise these documents before conducting another OA in fiscal year 2020. E3A intrusion-prevention capabilities have been primarily provided through sole-source contracts with internet service providers and a contract to provide basic intrusion-prevention services. In December 2015, Congress required DHS to make available for use by federal civilian agencies certain capabilities, such as those provided by NCPS's E3A. E3A had been deployed at approximately 93 percent of federal civilian agencies and departments and, in October 2018, CISA officials reported that NCPS adoption was up to 95 percent, with mainly small and micro organizations remaining. CISA officials said they are working with the various agencies to migrate agency email to a cloud environment, but each department and agency requires a unique solution and coordination can be a challenge. In April 2019, CISA officials reported that if the program's staffing gap is not addressed, the program may experience a delay in meeting mission requirements. CISA officials told GAO that the federal hiring process and DHS's lengthy suitability screening process have made recruitment efforts challenging because qualified candidates often find other employment while waiting for these processes to be completed.
In addition, CISA officials anticipate workforce challenges if, in the future, they are not able to use compensation flexibility for cybersecurity specialists. CISA officials reviewed a draft of this assessment and provided no comments.

NEXT GENERATION NETWORKS PRIORITY SERVICES (NGN-PS) CYBERSECURITY AND INFRASTRUCTURE SECURITY AGENCY (CISA)

NGN-PS is intended to address an emerging capability gap in the government's emergency telecommunications service, which prioritizes phone calls for select officials when networks are overwhelmed. CISA executes NGN-PS through commercial telecommunications service providers, which address the government's requirements as they modernize their own networks. Full operational capability for wireless capabilities was delayed by 3 years to incorporate design changes in a commercial service provider's network. A new program for acquisition of data and video capabilities will begin in fiscal year 2020. GAO last reported on this program in May 2018 (GAO-18-339SP). The NGN-PS program is developing and delivering prioritized voice capability in three increments: increment 1 maintains current priority service on long distance calls as commercial service providers update their networks; increment 2 delivers wireless capabilities; and increment 3 is intended to address landline capabilities. In October 2018, Department of Homeland Security (DHS) leadership granted the NGN-PS program acquisition decision event (ADE) 3 approval for increment 1. At that time, the program also declared full operational capability (FOC) for increment 1. Once operational, capabilities acquired by NGN-PS are transferred to CISA's Priority Telecommunications Service program. In April 2018, DHS leadership approved a revised acquisition program baseline (APB) for NGN-PS and subsequently authorized the program to initiate development of increment 3. The previous APB included only costs and schedule milestones associated with increments 1 and 2.
The revised APB modified the program's cost and schedule goals to include goals for increment 3 and updates to cost goals previously established for increments 1 and 2. Specifically, the program's total acquisition cost threshold increased by $68 million. This change reflects $144 million in additional costs to develop landline capabilities, partially offset by a cost savings of approximately $100 million on previous increments, among other changes. Program officials primarily attributed the cost savings on increment 1 to design changes implemented by a commercial service provider within its network. In addition, according to program officials, the increment 2 FOC goal was revised in the updated APB to allow additional time for a commercial service provider to incorporate design changes into its network. As a result, the FOC date for increment 2 slipped 3 years to December 2022. The program plans to achieve FOC for increment 3 in December 2025. The program updated its life-cycle cost estimate (LCCE) in February 2019. The updated LCCE includes operations and maintenance (O&M) costs, although the APB does not. Officials said this is not considered a breach because the O&M costs include staffing outside of O&M phase activities. NGN-PS capabilities are evaluated through developmental testing and operational assessments conducted by service providers on their own networks. CISA officials review the service providers' test plans, oversee tests to verify testing procedures are followed, and approve test results to determine when testing is complete. The OTA then leverages the service providers' test data and actual operational data to assess program performance. In addition, CISA officials said that they continuously review actual NGN-PS performance and that service providers undergo annual network service verification testing under the Priority Telecommunications Service program.
In October 2018, DHS leadership approved the separation of the development of capabilities for data and video priority services into a new acquisition program. DHS leadership approved the decision because data and video capabilities are different from landline priority services, and the addition of these capabilities would significantly extend the expected end date of the NGN-PS program. CISA officials anticipate establishing a preliminary baseline for the data and video capabilities in early fiscal year 2020. NGN-PS was established in response to an Executive Order requiring the federal government to have the ability to communicate at all times and during all circumstances to address national security issues and manage emergencies. A Presidential Policy Directive issued in July 2016 superseded previous directives requiring continuous communication services for select government officials. According to CISA officials, the new directive validates requirements for the voice phase and was used to develop requirements for the data and video phase. In May 2019, the program reported four critical staffing vacancies, including two new positions. The program reported that it continues to have difficulty filling a systems engineer billet, which program officials attribute to the lengthy federal hiring process, DHS's suitability screening process, and the fiscal year 2019 partial government shutdown. To mitigate the impact of the staffing gap on program execution, the program leverages contract support and staff from the Priority Telecommunications Service program. In addition to activities identified in this assessment, CISA officials stated that the program will continue planning for data and video priority services in future budget years. CISA officials also said that service providers undergo annual network service verification testing and that the program is currently making progress in hiring for numerous positions.
CISA officials also provided technical comments on a draft of this assessment, which GAO incorporated as appropriate.

HOMELAND ADVANCED RECOGNITION TECHNOLOGY (HART)

HART will replace and modernize DHS's legacy biometric identification system—known as IDENT—which shares information on foreign nationals with U.S. government and foreign partners to facilitate legitimate travel, trade, and immigration. The program plans to develop capabilities in four increments: increments 1 and 2 will replace and enhance IDENT functionality; increments 3 and 4 will provide additional biometric services, as well as a web portal and new tools for analysis and reporting. The program updated its operational requirements document and revised its key performance parameters. The program is taking steps to address challenges resulting from a shortfall in staff with technical skillsets. GAO last reported on this program in May 2018 (GAO-18-339SP). In May 2019, DHS leadership approved a revised acquisition program baseline (APB) for the HART program, removing it from breach status after the program experienced a schedule slip in June 2017. Specifically, the HART program declared a schedule breach when officials determined the program would not be able to meet its initial APB milestones. HART officials attributed the schedule slip to multiple delays in awarding the contract for increments 1 and 2 and a subsequent bid protest—which GAO denied. The program initiated work with the contractor in March 2018 and revised key acquisition documents, including its APB and life-cycle cost estimate (LCCE), to reflect program changes. For example, officials revised these documents to account for schedule delays and the contractor's solution for enhanced biometric data storage. Specifically, the contractor plans to deliver services using a cloud-based solution rather than through DHS's data centers.
The HART performance work statement shows that delivering services through the cloud provides greater flexibility to scale the infrastructure supporting services at a lower cost. The program's initial operational capability (IOC) date—when all customers will transition from using IDENT to HART—slipped 2 years to December 2020. This is a significant challenge because IDENT is at risk of failure and additional investments are necessary to keep the system operational. HART's full operational capability (FOC) date—when the program plans to deploy enhancements of biometric services and new tools for analysis and reporting—slipped nearly 3 years to June 2024. HART's total APB cost thresholds decreased by approximately $2 billion, which officials primarily attribute to the less expensive cloud-based solution and the removal of IDENT upgrade costs, among other things. However, officials identified a risk that costs associated with the cloud-based solution could increase because technical requirements were not fully developed when the LCCE informing the revised APB was developed. As a result, HART is at risk of a future cost breach once these technical requirements are better defined. The affordability surplus from fiscal years 2020 through 2024 may be overstated because, according to officials, projected funding covers both IDENT and HART. The program updated its operational requirements document in May 2019 to support the program's re-baseline and revised its eight key performance parameters (KPP) to address evolving DHS biometric requirements. Specifically, the KPPs for increment 1 establish requirements for system availability and a fingerprint biometric identification service. The program added a KPP for increment 1 to address fingerprint search accuracy. Increment 2 KPPs establish requirements for multimodal biometric verification services and interoperability with a Department of Justice system.
The program adjusted a KPP for multimodal biometric verification to address iris search accuracy. Increments 3 and 4 KPPs establish requirements for web portal response time and reporting capabilities. DHS's Science and Technology Directorate's (S&T) Office of Systems Engineering completed a technical assessment on HART in February 2016 and concluded that the program had a moderate overall level of technical risk. In October 2016, DHS leadership directed HART to work with S&T to conduct further analysis. In March 2019, S&T updated risks identified in the technical assessment and evaluated the program's scalability, availability, cybersecurity, and performance modeling risks for the HART system. S&T made several recommendations for the program to consider as it addresses identified risks. S&T will continue to work with the program to address technical and operational challenges. In April 2019, following the passage of the Cybersecurity and Infrastructure Security Agency (CISA) Act of 2018, CISA's Office of Biometric Identity Management (OBIM)—which includes the HART program—was transferred to DHS's Management Directorate. The transfer was informed by a working group including OBIM, DHS's Management Directorate, and CISA subject matter experts. In June 2019, HART officials told GAO they are currently planning for increments 3 and 4, which will provide new and enhanced capabilities, analytics, and reporting, and additional biometric modalities and services, among other things. In June 2019, HART officials released a request for information for increments 3 and 4, which will inform the program's acquisition plan and statement of work for a request for proposals. At the direction of DHS leadership, HART program officials coordinated with DHS's Chief Technology Officer to assess the skills and functions of staff necessary to execute the program.
In its August 2019 staffing plan, the program reported workforce risks, including a potential shortfall in staff with technical skillsets; however, officials stated that they are mitigating the shortfall, in part, by providing training activities for current staff. In June 2019, HART officials noted that the federal hiring process and DHS's lengthy security clearance process have made recruitment efforts challenging. HART officials provided technical comments on a draft of this assessment, which GAO incorporated as appropriate. LOGISTICS SUPPLY CHAIN MANAGEMENT SYSTEM (LSCMS) FEDERAL EMERGENCY MANAGEMENT AGENCY (FEMA) LSCMS is a computer-based tracking system that FEMA officials use to track shipments during disaster-response efforts. It is largely based on commercial off-the-shelf software. FEMA initially deployed LSCMS in 2005, and initiated efforts to enhance the system in 2009. According to FEMA officials, LSCMS can identify when a shipment leaves a warehouse and the location of a shipment after it reaches a FEMA staging area near a disaster location. LSCMS found operationally effective and operationally suitable with limitations, but not cyber secure. Program transitioned to cloud data storage and plans to conduct annual cybersecurity testing. GAO last reported on this program in May 2018 (GAO-18-339SP). In September 2019, Department of Homeland Security (DHS) leadership granted the program approval of acquisition decision event (ADE) 3 and acknowledged the program's achievement of full operational capability (FOC). DHS leadership previously denied the program's request for ADE 3 and FOC approval until issues with the system's backup server were resolved. Program officials reported that the program addressed these issues in August 2019. In November 2017, DHS leadership approved a revised acquisition program baseline (APB) after the LSCMS program experienced a schedule slip because of the 2017 hurricane season. 
FEMA officials said the need to deploy LSCMS personnel in support of response and recovery efforts during multiple hurricanes—Harvey, Irma, and Maria—jeopardized the program's ability to complete all required activities as planned. Specifically, the program was unable to complete follow-on operational test and evaluation (OT&E) to achieve ADE 3 and FOC by its initially planned APB dates of September 2018 and December 2018, respectively. The program was able to retain most of its initial schedule by working with its operational test agent (OTA) to adjust the follow-on OT&E plan, which significantly reduced the scope of dedicated testing needed to complete follow-on OT&E. Specifically, the OTA collected operational data during the 2017 hurricane response efforts, which allowed it to assess approximately two-thirds of the performance measures required for follow-on OT&E. In December 2018, the program updated its life-cycle cost estimate (LCCE), which is within the program's APB cost thresholds. The program's operations and maintenance (O&M) costs decreased in part because the program plans to transition LSCMS data storage from a physical facility to a cloud environment. The updated LCCE also estimates costs for conducting technology refreshes annually instead of every 5 years, which FEMA officials said will make the program's future funding needs more stable as the program moves into sustainment. Officials reported that in August 2019 the program migrated to the cloud—resolving a majority of the program's cybersecurity issues. Officials reported that remaining system and enterprise issues will be resolved in September 2020, when the program plans to conduct annual cybersecurity testing. The LSCMS program previously experienced significant execution challenges because of poor governance. 
FEMA initially deployed the enhanced LSCMS in 2013 without DHS leadership approval, a letter of assessment from DHS's Director, Office of Test and Evaluation (DOT&E), or a DHS-approved APB documenting the program's costs, schedule, and performance parameters, as required by DHS's acquisition policy. DHS's Office of Inspector General also found that neither DHS nor FEMA leadership ensured the program office identified all mission needs before selecting a solution. In response, DHS leadership paused all LSCMS development efforts in April 2014 until the program addressed these issues, among others. FEMA subsequently completed an analysis of alternatives and developed an APB based on this assessment. DHS leadership approved the program's initial APB in December 2015 and authorized FEMA to resume all LSCMS development and acquisition efforts in March 2016. In July 2019, FEMA reported that the program had initiated the hiring process for its vacant positions, and FEMA officials told GAO that one of the positions had already been filled. According to FEMA officials, the program revised its methodology for completing its most recent staffing profile to reflect the current and future staffing needs of the program. FEMA officials said that the current staffing levels will not change significantly after the program achieves FOC, as there will be a continued need for regular updates to the system. FEMA officials provided technical comments on a draft of this assessment, which GAO incorporated as appropriate. NATIONAL BIO AND AGRO-DEFENSE FACILITY (NBAF) SCIENCE AND TECHNOLOGY DIRECTORATE (S&T) The NBAF program is constructing a state-of-the-art laboratory in Manhattan, Kansas to replace the Plum Island Animal Disease Center. 
The facility will enable the Department of Homeland Security (DHS) and the Department of Agriculture (USDA) to conduct research, develop vaccines, and provide enhanced diagnostic capabilities to protect against foreign animal, emerging, and zoonotic diseases that threaten the nation's food supply, agricultural economy, and public health. Program is on track to meet May 2021 initial operational capability date. DHS and USDA have developed a transition plan and are coordinating on commissioning efforts. GAO last reported on this program in May 2018 (GAO-18-339SP). The NBAF program was originally planned to be a joint operation between DHS and USDA, with DHS taking the lead on construction and operation of the facility. However, the President's budget request for fiscal year 2019 proposed transferring operational responsibility for NBAF, which includes operational planning and future facility operations, to USDA. In the Joint Explanatory Statement for the Consolidated Appropriations Act of 2018, congressional conferees specified that DHS would retain responsibility for completing construction of NBAF. As a result, DHS will continue to oversee and manage activities required to complete construction and achieve initial operational capability (IOC), which is facility commissioning. USDA will then be responsible for achieving full operational capability (FOC), including operational stand-up of the facility and all subsequent operations. The program's acquisition program baseline (APB) has not yet been updated to reflect the change in responsibility for achieving FOC and to remove operational costs, which will now be budgeted for by USDA. NBAF officials said the transition introduces cost and schedule risks to the program because highly integrated activities—such as commissioning and operational stand-up—are now being managed by two different agencies, but DHS and USDA will continue to coordinate through the transition process. 
NBAF officials told GAO that construction activities thus far—such as pouring concrete for the main laboratory—have proceeded as anticipated and the program is on track to meet its APB cost and schedule goals through IOC, planned for May 2021. According to NBAF officials, the program has already received full acquisition funding for the facility construction efforts through federal appropriations and gift funds from the state of Kansas. The program previously planned to use operations and maintenance funding to support operational stand-up activities and awarded a contract for operational planning. However, beginning in fiscal year 2019, DHS will no longer request operations and maintenance funding for NBAF, as all such funding and activities will be the responsibility of USDA. Congressional conferees noted that $42 million in funding provided to USDA is to address operational stand-up activities and other initial costs to operate and maintain the facility. The Consolidated Appropriations Act of 2019 also authorized DHS to transfer personnel and up to $15 million in certain funds to USDA for contracts and associated support of the operations of NBAF. According to NBAF officials, the program has implemented a commissioning process for the facility to determine whether it can meet its sole key performance parameter (KPP) for laboratory spaces that meet various biosafety standards. NBAF officials said that DHS and USDA have been in coordination throughout the commissioning process. A third-party commissioning agent has been retained as a subcontractor to the prime construction management contractor, and NBAF officials said that the commissioning plan has been in place since 2012. 
According to NBAF officials, the commissioning agent worked with the facility design and construction teams to develop the commissioning plan, and detailed procedures are in place to install and commission equipment in the facility. The commissioning agent will monitor and test the facility's equipment and building systems while construction is ongoing to ensure they are properly installed and functioning according to appropriate biosafety specifications. NBAF officials reported that they are coordinating with USDA officials, the commissioning agent, and federal regulators responsible for awarding the registrations needed for NBAF to conduct laboratory operations to determine how the final commissioning report will be structured to support FOC and federal certification to begin laboratory operations. In June 2019, DHS and USDA signed a memorandum of agreement that established plans to transfer NBAF operational responsibility from DHS to USDA. The memorandum establishes responsibilities related to costs and funding, requirements for establishing NBAF, and considerations for interagency coordination once NBAF is operational, among other things. For example, some USDA staff will participate in the NBAF commissioning process, but they will be integrated with DHS's onsite construction oversight team to maintain the integrity of DHS's existing oversight approach for the NBAF construction/commissioning contract. The memorandum of agreement also states that DHS, in consultation with USDA, will plan for the appropriate timing and necessary mechanism to transfer identified DHS employees to USDA for NBAF activities. According to NBAF officials, DHS plans to transfer staff from both the Plum Island Animal Disease Center and the program's on-site construction oversight team to USDA to preserve institutional knowledge. 
USDA was appropriated $3 million in the Consolidated Appropriations Act of 2018 to begin hiring NBAF operational staff and the memorandum of agreement notes that USDA will work with DHS to increase staffing in fiscal year 2019 as required by the construction commissioning schedule. In April 2019, the program's staffing assessment was updated to reflect program needs from fiscal year 2019 through IOC. At that time, NBAF officials reported that the program was fully staffed. NBAF officials reviewed a draft of this assessment and provided no comments. ADVANCED TECHNOLOGY (AT) TRANSPORTATION SECURITY ADMINISTRATION (TSA) The AT program supports the checkpoint screening capability by providing the capability to detect threats in passengers' carry-on baggage, including explosives, weapons, and other prohibited items. The AT-1 and AT-2 X-ray systems screen carry-on baggage, providing threat detection capabilities for a wide range of threats. AT-2 Tier I and Tier II systems provide enhanced detection capabilities and improved image resolution. Computed tomography (CT)—which offers enhanced three-dimensional imaging and detection capabilities over the currently deployed AT system—is also being procured through the AT program. Both AT and CT units have experienced challenges achieving performance goals. Procurement and deployment of CT units will transfer to the Checkpoint Property Screening System program. GAO last reported on AT as part of the Passenger Screening Program in May 2018 (GAO-18-339SP). In February 2018, Department of Homeland Security (DHS) leadership approved transitioning existing Passenger Screening Program (PSP) projects—including AT—into stand-alone programs to better align program office staffing to capabilities and focus on mitigating capability gaps, among other things. In fiscal year 2018, TSA determined that CT is the best technology available to address rapidly evolving threats in the transportation sector. 
As a result, TSA determined it would leverage the AT program to initiate the acquisition of CT systems. In December 2018, DHS leadership approved an acquisition program baseline (APB) for AT as a standalone program, which included cost and schedule goals for AT and CT that were presented separately. For AT, fiscal year 2018 and prior year costs were not included in the APB cost goals because those costs are considered sunk costs for PSP. AT does not have any acquisition costs because full operational capability for AT was achieved in 2016 under PSP. AT's operations and maintenance (O&M) costs—which total $590 million—are related to maintaining AT-1 and AT-2 X-ray systems and incorporating upgrades to enhance detection capability and increase passenger volume through AT-2 Tier I and Tier II systems. When DHS leadership approved the APB, they also approved acquisition decision event (ADE) 3—authorizing the procurement of CT units in fiscal year 2019 only. The APB includes acquisition costs for the fiscal year 2019 procurements but it does not identify any O&M costs for CT. In March 2019, DHS leadership acknowledged the AT program's ADE 3 for AT-2 Tier II. The program previously achieved full operational capability (FOC) for AT-2, but ADE 3 was not achieved primarily because one of the program's key performance parameters (KPP) needed to be refined. The AT program's surplus from fiscal years 2020-2024 may be overstated in DHS's funding plan to Congress because costs associated with CT were not previously included in the AT cost estimate. However, the AT and CT costs in the affordability assessment are combined here. The purchase of CT units will become a separate acquisition for the fiscal year 2021 programming and budget cycle with an updated cost estimate. 
In September 2018, the program's operational test agent (OTA) completed certification, qualification, and operational test and evaluation (OT&E) on CT systems from four different vendors. DHS's Director, Office of Test and Evaluation (DOT&E) assessed the results in November 2018 and found that the systems from all four vendors did not meet the KPP related to throughput and the systems from two vendors also did not meet the KPP related to availability. Further, DOT&E rated the systems from the four vendors as operationally effective and operationally suitable with limitations. Cyber resiliency was not assessed. DOT&E recommended that TSA validate requirements, refine KPPs specific to the CT systems, and develop a plan to address cyber resilience issues prior to future deployment of networked systems, among other things. In August 2019, TSA officials said AT systems meet all four of the program's KPPs. In September 2018, DOT&E reassessed the August 2016 follow-on OT&E results from AT-2 Tier II based on the program's revised KPP for throughput—which contributed to DOT&E's prior effectiveness rating. DOT&E confirmed that the system now meets the revised requirement based on a re-assessment of the test data against the new definition, but did not change the rating. TSA intends to transition the procurement and deployment of CT units, among other things, to the Checkpoint Property Screening System (CPSS), which, as of August 2019, had not yet been established. CPSS is a separate acquisition program that is intended to address capability gaps in passenger screening technologies. Through CPSS, TSA plans to eventually deploy CT to all checkpoints and replace AT X-ray technology. According to TSA officials, Automated Screening Lane (ASL) technologies have been managed by the AT program since March 2019. TSA is not incurring acquisition costs for ASLs, but the source of funding for O&M costs is unclear. 
DHS leadership directed TSA to begin tracking ASL maintenance and repairs to inform future budget requests, among other things. TSA officials stated that one of the program's vacant positions has not yet been funded. To mitigate the staffing gap, TSA officials stated they are distributing tasks among existing staff until the position is filled. TSA officials provided technical comments on a draft of this assessment, which GAO incorporated as appropriate. CREDENTIAL AUTHENTICATION TECHNOLOGY (CAT) TRANSPORTATION SECURITY ADMINISTRATION (TSA) The CAT system is used to verify and validate passenger travel and identification documents prior to entering secure areas in airports. CAT reads data and security features embedded in identification documentation (ID), verifies that the security features are correct, and displays authentication results to the operator. The CAT system also verifies that the passenger has the appropriate flight reservation to progress through security screening and enter the secure area, among other things. Program met its key performance parameters, but needs to address cyber resiliency and other issues. CAT system will require regular updates to address changes to state identification documentation. GAO last reported on CAT as part of the Passenger Screening Program in May 2018 (GAO-18-339SP). In February 2018, the Department of Homeland Security (DHS) approved transitioning existing Passenger Screening Program (PSP) projects, including CAT, into stand-alone programs to better align program office staffing to capabilities and focus on mitigating capability gaps, among other things. In December 2018, DHS leadership approved an acquisition program baseline (APB) for CAT as a stand-alone program. The APB reflected a revised testing and deployment strategy. Specifically, TSA no longer intends to pursue separate deployments of CAT for TSA Pre® and standard lanes. 
TSA concluded that the separate approach would extend the overall schedule to deploy CAT units to the field and was an inefficient use of resources. In February 2019, DHS leadership granted the program acquisition decision event (ADE) 3 for procurement and deployment of CAT units and acknowledged the program's initial operational capability (IOC) based on the fielded units. TSA now plans to achieve full operational capability (FOC) in September 2022—more than 1 year earlier than previously planned for standard lanes, but 8 years later than initially planned under PSP. According to TSA officials, the program recently accelerated its deployment schedule to meet existing and emerging threats. The program developed an initial life-cycle cost estimate (LCCE) to inform the APB and ADE 3 and updated the estimate in June 2019 to inform the budget process. The program's June 2019 LCCE reflects an O&M cost decrease of over $80 million, which TSA officials attribute to a reduction in enhancements needed to accelerate deployments. The program was not included in DHS's funding plan to Congress for fiscal years 2020-2024 because the program is no longer expected to receive acquisition funding. TSA officials stated that they are working with TSA's Chief Financial Officer and the CAT vendor to identify and mitigate any funding issues that may arise as the program moves into production. DHS's Director, Office of Test and Evaluation (DOT&E) recommended that the program work with the vendor to improve the authentication rate of IDs, revise its KPP related to availability, conduct a study to understand passenger throughput and update throughput requirements accordingly, and conduct follow-on OT&E, among other things. In July 2019, TSA officials told GAO the program plans to conduct additional cyber resiliency testing and follow-on OT&E once requirements are refined. 
TSA officials stated that CAT is expected to be TSA’s primary identification verification method by the end of fiscal year 2019. However, TSA officials said the CAT system will require regular updates to address changes to state IDs. In November 2018, TSA officials reported that states are in the process of adopting new requirements identified in the REAL ID Act of 2005. Among other things, the Act establishes minimum security standards for ID issuance and production, and prohibits federal agencies from accepting IDs from states not meeting these standards unless the Secretary of Homeland Security has granted the issuing state an extension of time to meet the requirements. TSA officials said that the current manual process of verifying a passenger’s ID against their boarding pass will be used if CAT units are unavailable and between system updates. In May 2019, the program reported two critical staffing vacancies. TSA officials reported that these positions have been filled. TSA officials reviewed a draft of this assessment and provided no comments. ELECTRONIC BAGGAGE SCREENING PROGRAM (EBSP) TRANSPORTATION SECURITY ADMINISTRATION (TSA) Established in response to the terrorist attacks of September 11, 2001, EBSP tests, procures, and deploys transportation security equipment, such as explosives trace detectors and explosives detection systems, across approximately 440 U.S. airports to ensure 100 percent of checked baggage is screened for explosives. EBSP is primarily focused on delivering new systems with enhanced screening capabilities and developing software upgrades for existing systems. Follow-on testing completed in January 2019; initial results show improvement in effectiveness. EBSP is pursuing a new procurement strategy for two types of detection systems. GAO last reported on this program in May 2018 (GAO-18-339SP). In August 2019, TSA declared a cost breach of EBSP’s current acquisition program baseline (APB) due to increased maintenance costs. 
The program previously revised its APB in May 2016 to account for budget reductions and to implement the program's strategy to prioritize funding to extend the life of screening technologies, among other things. TSA has implemented these changes through ongoing maintenance and system upgrades, including detection algorithm updates. DHS officials reported that this strategy has improved security effectiveness and operational efficiencies at a lower cost than replacing legacy systems with new systems. However, this approach increased the number of systems that are out of warranty and increased the maintenance needed to sustain these systems. This new strategy, coupled with increased maintenance activities, resulted in an operations and maintenance (O&M) cost increase exceeding the program's APB O&M cost threshold. As of September 2019, the program's revised APB, which TSA officials said will address the O&M cost increase, had not yet been approved. In January 2018, DHS leadership approved the program's request to deploy an explosives detection system with an advanced threat detection algorithm. TSA officials reported that they achieved initial operational capability (IOC) of these systems in February 2018; this is the program's final APB milestone. TSA leadership subsequently approved the program to deploy detection algorithm updates to fielded systems. Based on the program's July 2019 life-cycle cost estimate (LCCE), the program is projected to face an acquisition funding gap of $29 million over the 5-year period. However, the program's total projected funding gap, including O&M, is expected to be approximately $223 million. TSA officials told GAO that one of their primary challenges is funding, and that to mitigate anticipated funding gaps, the program may shift other projects from one fiscal year to another or cancel them altogether. 
Since March 2011, DHS's Director, Office of Test and Evaluation (DOT&E) has assessed the operational test and evaluation results of 11 EBSP systems and determined that six are effective and suitable. Most recently, DOT&E found that a medium speed explosives detection system with an advanced threat detection algorithm tested in May 2017 was effective with limitations and not suitable, primarily because of the increase in manpower needed to operate the system on a long-term, continuous basis. TSA officials reported that they have taken steps to mitigate the increase in manpower needed to operate these systems, such as enabling the use of different algorithms as appropriate. DOT&E previously found that a reduced-size stand-alone explosives detection system tested in March 2017 was suitable with limitations, but not effective because of multiple factors resulting in the inability of operators to maintain control of baggage. The program's operational test agent (OTA) completed follow-on OT&E on these systems in January 2019 and initial test results showed improvement in the system's effectiveness rating. As of July 2019, EBSP has 1,678 explosives detection systems and 2,477 explosives trace detectors deployed nationwide. In February 2018, DHS leadership approved the program's updated acquisition plan, which reflects a new procurement strategy. Under the new procurement strategy, the program will transition from procuring systems with different sizes and speeds to two types: (1) inline systems that integrate with a baggage handling system and are linked through a network, and (2) stand-alone systems that may be integrated with a baggage handling system, but not linked to a network. In addition, TSA officials reported that the new strategy reflects updates to EBSP's vendor qualification process, which is intended to improve collaboration with vendors so they can develop more technically mature systems. 
In March 2018, DHS leadership approved a pilot effort in which TSA's Chief Acquisition Executive (CAE) provides oversight of changes to deployed systems, including algorithm updates. According to TSA officials, this process is intended to limit some steps in the formal oversight process so capabilities can be deployed more rapidly. DHS leadership plans to assess this pilot process to determine its effectiveness. In May 2019, the program reported that the five vacant positions at times impact the program's performance and execution schedules. To mitigate the staffing gap, program officials said that current staff are temporarily assuming additional duties. TSA officials stated that issues identified in DOT&E assessments were corrected, and that follow-on test activities were conducted and resulted in favorable evaluations and capability deployment. TSA officials also provided technical comments on a draft of this assessment, which GAO incorporated as appropriate. TECHNOLOGY INFRASTRUCTURE MODERNIZATION (TIM) TRANSPORTATION SECURITY ADMINISTRATION (TSA) The TIM program was initiated to address shortfalls in TSA's threat assessment screening and vetting functions by providing a modern end-to-end credentialing system. The TIM system will manage credential applications and the review process for millions of transportation workers and travelers by supporting screening and vetting for Transportation Worker Identification Credential (TWIC) and TSA Pre®. Program achieved full operational capability for TWIC and TSA Pre® capabilities. Program met its four key performance parameters. GAO last reported on this program in May 2018 and October 2017 (GAO-18-339SP, GAO-18-46). In November 2018, Department of Homeland Security (DHS) leadership approved the TIM program's request to descope and change its definition of full operational capability (FOC) to include only the TWIC and TSA Pre® capabilities. 
By the time TIM had fully delivered capabilities for TWIC and TSA Pre®, TSA had made ongoing updates and improvements to the remaining legacy vetting and credentialing systems to meet security and mission demands, and these updates had also sufficiently met end user needs. According to TSA officials, any additional system development would produce redundant functionality. Going forward, the program plans to continue to modernize the legacy systems and to achieve additional efficiencies. The program updated its key acquisition documents, including its acquisition program baseline (APB) and life-cycle cost estimate (LCCE), to reflect the change in scope. In July 2019, DHS leadership approved the program's revised APB. DHS leadership granted the program acquisition decision event (ADE) 3 and acknowledged the program's achievement of FOC—fulfilling TSA Pre® and TWIC mission needs for vetting and credentialing—in August 2019. DHS leadership previously approved a revised APB for the TIM program in September 2016. Prior to the approval of the program's 2016 APB, DHS leadership paused new development for 22 months after the program breached its APB goals for various reasons, including technical challenges. In July 2019, DHS headquarters conducted an independent cost assessment to inform ADE 3, which TSA adopted as the program's LCCE. The revised LCCE reflected the program's reduced scope. The program's APB acquisition cost goal decreased by nearly $220 million from the program's 2016 APB. The reduction in costs is primarily attributed to the reduction in the program's scope. However, the program's operations and maintenance APB cost goals increased by $205 million, primarily due to maintenance of legacy systems to address user needs. DOT&E recommended that the program address issues related to system usability by assessing the need for training materials and job aids to assist users. 
In addition, DOT&E recommended that the program update its cybersecurity threat assessment and continue to conduct periodic cyber resilience testing. In October 2017, GAO found that TSA had not fully implemented several leading practices to ensure successful agile adoption. GAO also found that TSA and DHS needed to conduct more effective oversight of the TIM program to reduce the risk of repeating past mistakes. DHS concurred with all 14 GAO recommendations to improve program execution and oversight, and identified actions DHS and TSA can take to address them. As of September 2019, TSA addressed all but one recommendation—to ensure DHS leadership reached consensus on, documented, and implemented oversight and governance changes for agile program reviews. TSA reported a critical staffing gap of four full-time equivalents (FTE) in 2019, including a manager position to adapt initiatives to agile business and development processes. TSA officials stated that the staffing gap has had minimal impact on program execution. To mitigate the gap, the program is leveraging support from contractors and matrixed staff. TSA officials provided technical comments on a draft of this assessment, which GAO incorporated as appropriate. FAST RESPONSE CUTTER (FRC) UNITED STATES COAST GUARD (USCG) The USCG uses the FRC to conduct search and rescue, migrant and drug interdiction, and other law enforcement missions. The FRC carries one cutter boat on board and is able to conduct operations in moderate sea conditions. The FRC replaces the USCG's Island Class patrol boat and provides improved fuel capacity, surveillance, and communications interoperability with other Department of Homeland Security (DHS) and Department of Defense assets. FRC found operationally effective and suitable, and all key performance parameters validated. Defect in ship structure found, requiring changes in production and retrofits to cutters already delivered. 
GAO last reported on this program in May 2018 and March 2017 (GAO-18-339SP, GAO-17-218). The FRC program is on track to meet its current cost and schedule goals. USCG officials told GAO the program is revising its acquisition program baseline (APB) in 2019 to reflect an increase in FRCs. The USCG previously planned to acquire 58 FRCs and, as of August 2019, 35 had been delivered and another 21 were on contract. However, in fiscal years 2018 and 2019, congressional conferees supported funds for the acquisition of 4 additional FRCs to begin replacing 6 cutters currently operating in the Middle East. To account for the increase of up to 6 additional FRCs, USCG officials stated that they are revising the program’s acquisition documents and anticipate completing these updates by the end of calendar year 2019. To inform the budget process, the program updated its life-cycle cost estimate in June 2019 to reflect the additional 4 cutters that have been funded. The updated estimate remains within the program’s current APB cost thresholds. USCG officials stated that the contractor—Bollinger Shipyards LLC—is meeting the program’s current delivery schedule and the program is on track to achieve full operational capability (FOC) for the original 58 cutters by March 2027, as planned. However, the program’s FOC date will likely be extended to account for the delivery of the additional cutters in the revised APB. The program’s initial operational capability (IOC) date previously slipped due to a bid protest related to the program’s initial contract award—now known as the phase 1 contract—and the need for structural modifications. USCG officials attributed a subsequent 5-year slip in the program’s FOC date to a decrease in annual procurement quantities under the phase 1 contract. 
In May 2014, the USCG determined that it would procure only 32 of the 58 FRCs through this contract and initiated efforts to conduct full and open competition for the remaining 26 vessels—known as phase 2. In May 2016, the USCG awarded the phase 2 contract to Bollinger Shipyards LLC for the remaining 26 FRCs. Under the phase 2 contract, the USCG can procure 4 to 6 FRCs per option period. For fiscal year 2019, the USCG reported that it exercised an option for 6 FRCs. According to USCG officials, the phase 2 contract will need to be modified to increase the total quantity allowed under the current contract and account for the additional FRCs, but as of July 2019 the modifications had not been made.

USCG officials stated that they are on track to resolve the remaining deficiencies by the end of fiscal year 2020. They added that these deficiencies will be resolved either through corrective action or a determination that the deficiency is not a hindrance to operations, requiring no further action. For example, USCG officials reported taking corrective action in response to the FRC's periodic inability to send communications due to antenna placement. USCG officials stated this was resolved by adding a second antenna. The USCG continues to work with Bollinger Shipyards LLC to address issues covered by the warranty and acceptance clauses for each ship. For example, in the fall of 2017, USCG officials reported identifying a latent defect that would affect the FRC's ability to achieve its intended 25-year structural fatigue life. USCG officials said cracks were found in the interior steel structure of two FRCs, prompting a class-wide inspection. Upon further analysis, the USCG determined that the fatigue issues were due to faulty design assumptions and identified 12 areas of structural weakness that will require reinforcements to the ship's interior steel structure.
In response, USCG officials stated that the contractor developed corrective actions—ranging in complexity from adding bracket supports to removing and replacing large sections of steel—that have been approved by the USCG. USCG officials further stated that corrections are being incorporated during production, but FRCs that have already been delivered will need to be retrofitted during regular maintenance periods, scheduled through 2025. These officials added that these defects do not affect current operations. In addition, the contractor is undertaking retrofits for nine of the 10 engine issues covered by the warranty that are affecting the fleet—such as leaking exhaust pipes—and a prototype solution for the remaining issue is being assessed. As of June 2019, USCG officials reported the FRC's warranty has resulted in $123 million in cost avoidance. In July 2019, USCG officials stated they had filled the one critical staffing gap and were in the process of hiring staff to address the remaining staffing gaps. USCG officials provided technical comments on a draft of this assessment, which GAO incorporated as appropriate.

H-65 CONVERSION/SUSTAINMENT PROGRAM (H-65) UNITED STATES COAST GUARD (USCG)

The H-65 aircraft is a short-range helicopter that the USCG uses to fulfill its missions, including search and rescue, ports and waterways security, marine safety, and defense readiness. The H-65 acquisition program consists of eight discrete segments that incrementally modernize the H-65 aircraft fleet. The program is currently focused on the service life extension program (SLEP) and upgrades to the automatic flight control system (AFCS) and avionics. The H-65 aircraft failed to meet two key performance parameters in testing, and its cyber resiliency has not yet been tested. The program plans to synchronize upgrades into scheduled maintenance periods. GAO last reported on this program in May 2018 (GAO-18-339SP).
In March 2018, Department of Homeland Security (DHS) leadership approved the program's revised acquisition program baseline (APB), removing it from breach status; USCG officials primarily attributed the breach to underestimating the technical effort necessary to meet requirements. DHS leadership also granted the program approval for acquisition decision event (ADE) 2C for low-rate initial production of the avionics and AFCS upgrades and ADE 2B for the addition of a SLEP. The SLEP is expected to extend the service life of each aircraft from 20,000 to 30,000 flight hours by replacing obsolete aircraft components. USCG officials stated the USCG plans to operate the H-65 aircraft until 2039 so that the USCG can prioritize funding for the Offshore Patrol Cutter. The USCG also plans to align its next helicopter acquisition effort with the Department of Defense's future vertical lift acquisition plans. The program's current APB reflects the restructured program schedule, which synchronizes the SLEP with the avionics and AFCS upgrades. Specifically, the new program structure calls for completing the SLEP and upgrades to AFCS and avionics during the same scheduled maintenance period. This structure allows the USCG to leverage accessibility of components the program intends to replace as part of the SLEP while the aircraft is disassembled to accommodate the avionics and AFCS upgrades. As a result, USCG officials reported that the program will avoid some labor costs and will reduce the risk of damaging AFCS and avionics components, which would need to be removed during the SLEP. In its current APB, the program's full operational capability (FOC) date was extended by nearly 2 years to September 2024, primarily to incorporate the SLEP. The program's total life-cycle cost threshold decreased by approximately $200 million from its March 2014 APB, which USCG officials attributed to decreased labor costs, among other things.
USCG officials told GAO they were in the process of updating the program's key acquisition documents to inform the program's ADE 3 decisions for full-rate production of the avionics and AFCS upgrades and the SLEP. In July 2019, USCG officials said they do not plan to update the program's APB for the upcoming ADEs because the program is on track and does not require changes to its cost, schedule, or performance goals.

The USCG conducted a cybersecurity threat assessment for the H-65 in September 2016, but USCG officials stated cyber resilience was not included in initial operational test and evaluation (OT&E) because it was not a consideration at the time the testing was planned and the operational test agent (OTA) needed more time to adequately plan for the testing. In May 2019, the program completed a cyber tabletop exercise to inform potential testing. However, it is unclear if this testing will be completed in time to inform ADE 3. The USCG awarded contracts to Rockwell Collins—the original equipment manufacturer of the legacy AFCS and avionics—for continued development of the AFCS and avionics upgrades in July 2016 and March 2017, respectively. USCG officials said they expect delivery of the upgrades to the fleet in May 2020. USCG officials said there is risk involved with extending the aircraft's service life beyond 20,000 flight hours since this has never been done by other agencies that operate the H-65. USCG officials stated that the aircraft manufacturer, Airbus, assisted the USCG's chief aeronautical engineer in identifying parts that need replacement. As part of the program's revised acquisition strategy, the USCG plans to synchronize the SLEP with the avionics and AFCS upgrades and conduct this work during the programmed depot maintenance cycles in fiscal years 2020 through 2024.
USCG officials reported that this strategy allows the program to leverage the engineering and program management contractors already in place and ensured that SLEP components were available before production support from Airbus ended in 2018. In April 2019, the USCG reported the program had one critical staffing gap—a deputy program manager. USCG officials reported the program filled the position in August 2019. USCG officials provided technical comments on a draft of this assessment, which GAO incorporated as appropriate.

LONG RANGE SURVEILLANCE AIRCRAFT (HC-130H/J) UNITED STATES COAST GUARD (USCG)

The USCG uses HC-130H and HC-130J aircraft to conduct search and rescue missions, transport cargo and personnel, support law enforcement, and execute other operations. Both aircraft are quad-engine propeller-driven platforms. The HC-130J is a modernized version of the HC-130H, with advanced engines, propellers, and equipment that provide enhanced speed, altitude, range, and surveillance capabilities. The design of the new mission system processor is complete, and USCG officials reported all key performance parameters have been met. The transfer of surplus HC-130H aircraft to other agencies has been delayed. GAO last reported on this program in May 2018 (GAO-18-339SP). As of July 2019, the USCG had yet to complete a more than 4-year effort to revise the acquisition program baseline (APB) to account for significant program changes. Specifically, the USCG decided to pursue an all-HC-130J fleet and, in fiscal year 2014, Congress directed the transfer of 7 HC-130H aircraft to the U.S. Air Force. The USCG was in the process of upgrading these aircraft but canceled further HC-130H upgrades. In September 2017, Department of Homeland Security (DHS) leadership directed the USCG to submit the revised APB by January 2018.
As of July 2019, USCG officials had revised key acquisition documents such as the program's life-cycle cost estimate (LCCE) and operational requirements document (ORD)—which will inform the program's revised APB—but USCG officials told GAO the APB is not expected to be approved until August 2019. USCG officials said the re-baseline has been delayed, in part, because Congress directed the USCG to conduct a multi-phased analysis of its mission needs. In November 2016, the USCG submitted the results of its analysis for fixed-wing aircraft, which confirmed the planned total quantity of 22 HC-130J aircraft and an annual flight-hour goal of 800 hours per aircraft. The results of the analysis are reflected in the program's revised LCCE, which DHS approved in June 2019. However, the USCG plans to decommission the HC-130H fleet by the end of fiscal year 2022, which may result in a capability gap since the program's revised LCCE indicates that the fleet will consist of only 14 HC-130J aircraft in fiscal year 2022. In addition, the program's revised ORD includes a full operational capability (FOC) date—when all 22 aircraft are operational and assigned to USCG air stations—of September 2033. The revised FOC date is more than 6 years beyond the program's current threshold date of March 2027. GAO previously reported that the program was at risk of not meeting its previously planned FOC date because the USCG had not requested adequate funding. The program's revised LCCE acquisition costs decreased in part because costs associated with the initially planned HC-130H improvements were removed. However, the program's operations and maintenance costs increased by over $800 million from the program's previous estimate, which is primarily attributed to a 13-year increase in the life expectancy of the HC-130J aircraft.
According to USCG officials, the HC-130J has now met all seven of its key performance parameters (KPP). Previously, the program was unable to meet its KPPs related to the detection of targets and the aircraft's ability to communicate with other assets. However, the USCG is replacing the mission system processor on its fixed-wing aircraft—including the HC-130J—with a system used by the U.S. Navy and DHS's Customs and Border Protection. The new mission system processor is intended to enhance operator interface and sensor management and replace obsolete equipment. USCG officials said the design of the new mission system processor was approved in March 2018. The USCG does not plan to operationally test the new processor on the HC-130J, in part because the aircraft has already been tested. In 2009, DHS's Director, Office of Test and Evaluation, and the USCG determined the HC-130J airframe did not need to be operationally tested because the U.S. Air Force conducted operational testing on the base C-130J airframe in 2005. Instead, the USCG plans to operationally test the new mission system processor in fiscal year 2021 during operational testing on the C-27J, which is new to the USCG's fixed-wing fleet. In addition, USCG officials stated that systems acceptance and delivery testing are conducted on each aircraft. In July 2019, USCG officials told GAO that all HC-130Js in the fleet are being outfitted with the new mission system processor. In December 2013, Congress directed the transfer of seven HC-130H aircraft to the U.S. Air Force for modifications—which consist of upgrades and installing a fire retardant delivery system—and subsequent transfer to the U.S. Forest Service. This direction factored into the USCG's decision to pursue an all-HC-130J fleet. However, in August 2018, Congress directed that the U.S.
Air Force transfer the modified aircraft to the state of California's Natural Resources Agency for use by the Department of Forestry and Fire Protection. USCG officials reported all seven aircraft will be transferred, and the USCG does not plan to retain the surplus aircraft. As of July 2019, no HC-130H aircraft had been transferred. The USCG plans to procure a total of 22 HC-130Js. In July 2019, USCG officials reported 13 HC-130J aircraft had been delivered and the USCG had awarded contracts for three more. At that time, the USCG also had 14 HC-130Hs in its inventory. The USCG planned to remove four of the HC-130Hs from service in 2019 as HC-130Js and C-27Js are delivered. USCG officials said the program is not experiencing any workforce issues as a result of its staffing gap. The program filled the one critical vacancy in August 2019 and is in the process of hiring staff to fill an additional vacancy. USCG officials provided technical comments on a draft of this assessment, which GAO incorporated as appropriate.

MEDIUM RANGE SURVEILLANCE AIRCRAFT (HC-144A/C-27J) UNITED STATES COAST GUARD (USCG)

The USCG uses HC-144A and C-27J aircraft to conduct all types of missions, including search and rescue and disaster response. All 32 aircraft—18 HC-144A aircraft and 14 C-27J aircraft—are twin-engine propeller-driven platforms. The interiors of both aircraft can be reconfigured to accommodate cargo, personnel, or medical transports. The new mission system processor has been installed on five HC-144A aircraft. Program challenges related to purchasing spare parts and accessing technical data are improving. GAO last reported on this program in May 2018 (GAO-18-339SP). In April 2019, Department of Homeland Security (DHS) leadership approved a change to the program's current acquisition program baseline (APB) to adjust the program's schedule milestones as a result of the fiscal year 2019 partial government shutdown.
USCG officials told GAO that delays in funding limited contracted work for the program during the shutdown. USCG officials stated that the program could not recover from the lost time and, in response, DHS leadership authorized the program’s request for a 3-month extension on the program’s future APB milestones. The current APB was approved in August 2016 to reflect the restructuring of the HC-144A acquisition program. The USCG initially planned to procure a total of 36 HC-144A aircraft, but reduced that number to the 18 it had already procured after Congress directed the transfer of 14 C-27J aircraft from the U.S. Air Force to the USCG in fiscal year 2014. The program’s APB divides the program into two phases. Phase 1 includes acceptance of the 18 HC-144A aircraft and upgrades to the aircraft’s mission and flight management systems. Phase 2 includes acceptance of and modifications to the C-27J aircraft to meet the USCG’s mission needs. In July 2019, USCG officials said that the program had completed upgrades on five HC-144A aircraft and plans to complete upgrades on all HC-144As by September 2021. For phase 2, the USCG has accepted all 14 C-27Js from the U.S. Air Force and plans to complete the modification of these aircraft by June 2025 to achieve full operational capability (FOC). To inform the budget process, in June 2019 the program updated its life-cycle cost estimate (LCCE), which is within its current APB cost thresholds. The program’s total life-cycle cost decreased by approximately $115 million. USCG officials attribute the decrease to refinement of the cost estimate based on actual costs, changes to the schedule for the mission system upgrades, and a delay in operating missionized C-27Js—which reduces the total estimated aircraft flight hours—among other things. USCG officials said that they plan to delay operation of missionized C-27Js to ensure adequate logistics support is available for the aircraft. 
In addition, congressional conferees supported $18 million in fiscal year 2018 for the USCG to purchase a flight simulator for training purposes. According to USCG officials, prioritizing the procurement of the flight simulator in fiscal year 2018 addressed C-27J training needs and provided over $15 million in cost savings for the program.

Neither the HC-144A nor the C-27J will be able to meet two of their seven key performance parameters (KPP) until the USCG installs a new mission system processor on the aircraft. These two KPPs are related to the detection of targets and the aircraft's ability to communicate with other assets. The USCG is replacing the mission system processor on its fixed-wing aircraft—including the HC-144A and C-27J—with a system used by the U.S. Navy and DHS's Customs and Border Protection. The new mission system processor is intended to enhance operator interface and sensor management and replace obsolete equipment. The program plans to conduct developmental testing on the C-27J in fiscal year 2020, once the prototype is complete. In addition, the USCG plans to operationally assess the new mission system processor during operational testing of the C-27J, which is scheduled to begin in fiscal year 2021. GAO previously found that the program faced challenges purchasing spare parts and accessing technical data for the C-27J, which was affecting the USCG's ability to transition the aircraft into the fleet. USCG officials told GAO that these issues are improving. Specifically, they stated that the program awarded two contracts for spare parts to third-party suppliers in early 2018 and purchased spare parts in bulk in 2017 to maintain the fleet. In July 2019, USCG officials said the program has been able to stock sites well enough to keep assets available for use, and will continue to work with the contractors to address the issue.
USCG officials said that a contract was awarded to the original equipment manufacturer in April 2017 that allows the USCG appropriate rights to the technical data. Also, in August 2019, USCG officials told GAO they received all C-27J technical data in the Air Force's possession, including operations and maintenance manuals, as part of the transfer of 14 C-27J aircraft from the Air Force to the Coast Guard. USCG officials told us that the program updated its acquisition plan in February 2018 to incorporate the procurement of a new full-motion flight simulator training device for the C-27J aircraft. The USCG received funding to purchase a flight simulator in fiscal year 2018 and plans to begin instructor training on the device in August 2019. In July 2019, USCG officials told GAO that the program's staffing is not negatively impacting program execution. USCG officials explained that they have filled four of the program's reported staffing vacancies and plan to fill the remaining position soon. USCG officials provided technical comments on a draft of this assessment, which GAO incorporated as appropriate.

NATIONAL SECURITY CUTTER (NSC) UNITED STATES COAST GUARD (USCG)

The USCG uses the NSC to conduct search and rescue, migrant and drug interdiction, environmental protection, and other missions. The NSC replaces and provides improved capabilities over the USCG's High Endurance Cutters. The NSC carries helicopters and cutter boats, provides an extended on-scene presence at forward deployed locations, and operates worldwide. Follow-on operational testing was completed in 2018, but unmanned aerial surveillance aircraft testing was delayed. The USCG continues to address issues identified with the NSC propulsion system. GAO last reported on this program in May 2018 and April 2017 (GAO-18-339SP, GAO-17-218).
In November 2017, Department of Homeland Security (DHS) leadership approved a revised acquisition program baseline (APB), which accounted for the addition of a ninth NSC to the program of record. The USCG originally planned to acquire eight NSCs; however, in fiscal year 2016 Congress appropriated funds specifically for the production of a ninth NSC. Congressional conferees subsequently included $540 million in fiscal year 2018 to be immediately available for production of a 10th NSC, and $635 million for the purchase of long lead time materials and production of an 11th NSC. According to USCG officials, the USCG awarded a contract to produce the ninth NSC in December 2016 and awarded a production contract for the 10th and 11th NSCs in December 2018. As of August 2019, eight NSCs had been delivered and the remaining three NSCs were under contract for production. USCG officials reported that the program is currently on track to meet its current APB schedule and anticipate delivery of the ninth NSC in September 2020. However, the program's full operational capability (FOC) date is expected to be extended until 2024 as a result of the anticipated delivery of the 11th NSC in January 2024. According to USCG officials, the program's acquisition documentation, including the APB, is being revised to reflect the additional NSCs, and these updates are expected to be complete by July 2020. To inform the budget process, the program updated its life-cycle cost estimate (LCCE) to include the 10th and 11th NSCs. As a result, the program's life-cycle costs exceed the current APB thresholds. Despite this cost growth, the program's total life-cycle cost is still less than the program's initial estimate for eight ships. USCG officials attribute the overall decrease to more accurate estimates and reduced operations and maintenance (O&M) costs.
The program’s current APB cost thresholds already reflect cost growth that occurred earlier in the program, when the program implemented several design changes to address equipment issues. As of September 2017, 12 equipment systems had design changes, which USCG estimated cost over $260 million. This work includes structural enhancements on the first two NSCs and the replacement of the gantry crane, which aids in the deployment of cutter boats. United States Coast Guard (USCG) NATIONAL SECURITY CUTTER (NSC) USCG officials said the USCG completed a study directed by DHS’s USM to identify the root cause of engine issues with the NSC’s propulsion systems. GAO previously reported on these issues—including high engine temperatures and cracked cylinder heads—in January 2016. USCG officials reported that the study resulted in nine corrective measures, eight of which are in various stages of implementation. According to USCG officials, they will assess the need to implement the remaining corrective measure following completion of the others. According to program officials, the USCG relies on the Navy to request funding for and provide certain systems on the NSC such as the Close In Weapon System, which includes a radar-guided gun used to protect against anti-ship cruise missiles. USCG officials reported that some of these Navy systems may not be available in time to support the production of the ninth, 10th and 11th NSCs, since these cutters were unplanned additions to the NSC program and the Navy had not included funding for some of these systems in its budget requests. According to program officials, they are working with the Navy to identify options to mitigate this issue. Officials stated that an option being considered is constructing the NSCs with space available for the Navy equipment to be installed after delivery. 
USCG officials said the program’s staffing vacancies had not negatively affected program execution and, as of September 2019, all three vacancies had been filled. The program’s staffing profile represents staffing requirements through NSC 11, and USCG officials reported that the program office would need to reassess future staffing requirements if the USCG acquires additional NSCs. USCG officials stated that with the exception of small unmanned aerial surveillance aircraft, follow-on OT&E testing is completed. Additional testing are planned in fiscal year 2020. A comprehensive update of the program’s LCCE is being drafted to reflect costs of the 10th and 11th NSC. The program will base the cost goals of the next revision to the APB on this update. The next revision of the APB will include a revised FOC date based on delivery of the 11th NSC in January 2024. USCG officials also provided technical comments on a draft assessment, which GAO incorporated as appropriate. OFFSHORE PATROL CUTTER (OPC) UNITED STATES COAST GUARD (USCG) The USCG plans to use the OPC to conduct patrols for homeland security, law enforcement, and search and rescue operations. The OPC is being designed for long-distance transit, extended on-scene presence, and operations with deployable aircraft and small boats. It is intended to replace the USCG’s aging Medium Endurance Cutters (MEC) and bridge the operational capabilities provided by the Fast Response Cutters and National Security Cutters. Shipyard sustained damage in Hurricane Michael, expected to result in program cost and schedule changes. USCG assessing the effects from hurricane and plans to identify a path forward in early fiscal year 2020. GAO last reported on this program in May and July 2018 (GAO-18-339SP, GAO-18-629T). 
In May 2018, the Department of Homeland Security (DHS) approved a revised life-cycle cost estimate (LCCE) for the OPC program, which officials said reflects a refinement of the OPC design and planned systems—including a weight increase of 27 percent—and the incorporation of actual contract data, among other things. The USCG is not reporting a cost increase because the amount of OPC acquisition costs that the program plans to fund, approximately $10.3 billion, remains within the program's acquisition program baseline (APB) cost thresholds. However, the revised LCCE included a shift of some costs that were previously planned to be funded by the program to other sources, such as other parts of the USCG or the U.S. Navy. This government-furnished equipment, which is now estimated to cost nearly $2 billion, will largely be funded by the U.S. Navy, according to USCG officials. Overall, total program acquisition costs increased by approximately $1.7 billion from the previous estimate. In October 2018, the shipbuilder, Eastern Shipbuilding Group, suffered damage as a result of Hurricane Michael. The shipbuilder reported to the USCG in May 2019 that it can no longer afford the estimated costs associated with the OPC contract without assistance from the government. In January 2019, the shipbuilder resumed construction of the lead ship, but the damage it sustained has resulted in a long-term degradation of its ability to produce the OPCs at the previously estimated cost and schedule. The shipbuilder has projected hundreds of millions of dollars in increased contract costs—which it attributes to anticipated skilled labor shortages and a loss of production efficiencies—and a 9- to 12-month delivery delay for each of the first nine ships.
Despite these anticipated cost increases and schedule delays, as of July 2019, USCG officials said they had not formally notified DHS leadership of a potential cost or schedule breach because they are continuing to assess how to move forward. DHS leadership granted the program a 3-month extension to achieve its acquisition decision event (ADE) 2C in December 2019 to mitigate impacts from the fiscal year 2019 partial government shutdown. USCG officials said they are preparing for the ADE 2C, but also are using the additional time to assess the shipbuilder's report, analyze estimates, and determine a path forward by early fiscal year 2020.

The USCG currently plans to conduct initial operational test and evaluation (OT&E) on the first OPC in fiscal year 2023. However, the test results from initial OT&E will not be available to inform key decisions. For example, they will not be available to inform the decision to build two OPCs per year, which USCG officials said is currently scheduled for fiscal year 2021. Without test results to inform these key decisions, the USCG may need to make substantial commitments prior to knowing how well the ship will meet its requirements. According to USCG program officials, they have established a team with representatives from DHS, USCG, and the U.S. Navy to assess the impact of Hurricane Michael and determine a way forward. As part of its assessment, these officials said they are evaluating a number of options, including modifications to the original contract. Regardless of the path forward, USCG officials stated the program will likely need congressional approval of the contracting strategy and financial resources necessary to execute the new plan. USCG officials stated that DHS leadership will review the program's status and determine whether to authorize the construction of OPC 2 and the purchase of initial materials needed for OPC 3 at the program's ADE 2C.
USCG officials stated that they anticipate the exercise of a contract option for the construction of OPC 2 and the materials for OPC 3 will be delayed as the program and shipbuilder continue to assess the impact of the hurricane on OPC production. The OPC program is continuing to increase staffing as the program matures and production activities increase. In July 2019, USCG officials said the program has a staffing gap of five FTEs, none of which are critical. Officials said they were in the process of hiring staff to fill these positions. USCG officials provided technical comments on a draft of this assessment, which GAO incorporated as appropriate.

POLAR SECURITY CUTTER (PSC) UNITED STATES COAST GUARD (USCG)

The PSC program—formerly designated as the Heavy Polar Icebreaker—is intended to assist the USCG in maintaining access to Arctic and Antarctic polar regions. The USCG requires its icebreaking fleet to conduct multiple missions, including defense readiness; marine environmental protection; ports, waterways, and coastal security; and search and rescue. The USCG plans to acquire three PSCs to recapitalize its heavy polar icebreaker fleet, which currently consists of one operational ship. DHS identified three critical technologies in its June 2019 technology readiness assessment of the program. The program awarded a $750 million detail design and construction contract to VT Halter Marine in April 2019. GAO last reported on this program in May and September 2018 (GAO-18-339SP, GAO-18-600). In January 2018, Department of Homeland Security (DHS) leadership approved the program's initial acquisition program baseline (APB), establishing cost, schedule, and performance goals. The program achieved a combined acquisition decision event (ADE) 2A/2B in February 2018, which authorized the initiation of development efforts. However, in September 2018, GAO found that the program's schedule and cost estimates are optimistic.
Specifically, GAO found that the program’s planned delivery dates are not informed by a realistic assessment of shipbuilding activities. Instead, the schedule is driven by the potential gap in icebreaking capabilities once the USCG’s only operational heavy polar icebreaker reaches the end of its service life. As a result, the program is at risk of experiencing schedule delays. Similarly, GAO found that the program’s life-cycle cost estimate (LCCE) adheres to most cost estimating best practices but is not fully reliable. This was due, in part, to the cost estimate not quantifying the range of possible costs over the entire life of the program. As a result, the program is at risk of costing more than estimated. In April 2019, the program awarded a $746 million contract to VT Halter Marine for the detail design and construction of the lead PSC. According to USCG officials, the program is revising both the program schedule and cost estimate with information from the shipbuilder. For example, delivery of the lead ship in the awarded contract is anticipated in May 2024—2 months after the program’s APB threshold date. In addition, the program updated its LCCE in June 2019 to inform the budget process, but this estimate does not reflect cost changes as a result of the contract award. USCG officials acknowledged the schedule and cost risks identified by GAO and plan to address these risks as part of the acquisition documentation updates. From 2013 through 2019, the program received $1.035 billion in funding—$735 million in USCG appropriations and $300 million in Navy appropriations. USCG officials stated that the lead ship is fully funded but any funding gaps in the future may result in delays to delivery of the two follow-on ships. DHS leadership approved four key performance parameters related to the ship’s ability to independently break through ice, the ship’s operating duration, and communications. 
From May to August 2017, the USCG conducted model testing of potential hull designs and propulsion configurations. USCG officials stated that maneuverability was identified as a challenge during model testing and that azimuthing propulsors—propellers that sit below the ship and can rotate up to 360 degrees—offered better maneuverability for the PSC than traditional propulsion systems. According to USCG officials, the PSC program began additional model testing related to ice models and seakeeping in August 2019. In November 2017, DHS’s Director, Office of Test and Evaluation approved the program’s test and evaluation master plan, which calls for initial operational testing of performance to begin in fiscal year 2024, after delivery of the first PSC. In response to a September 2018 GAO recommendation, DHS’s Science and Technology Directorate completed a technology readiness assessment of the program in June 2019. DHS determined that the PSC has three critical technologies that are mature or approaching maturity: azimuthing propulsors, the integrated electric propulsion system, and the hull form. For the hull form—the only critical technology designated as not yet mature—the Coast Guard plans to use ice model and seakeeping testing to reduce risks. USCG officials stated that they are planning to reassess the critical technologies using information from VT Halter Marine by the preliminary design review scheduled for January 2020. The USCG established an integrated program office and ship design team with the Navy and, in 2017, DHS, the USCG, and the Navy entered into several agreements that outline major roles and responsibilities, including the Navy’s role in contracting on behalf of the Coast Guard. The ship design team provided technical oversight for the development of the PSC’s concept designs, which the USCG used to inform the ship’s specifications and program’s life-cycle cost estimate. 
According to USCG officials, as of July 2019, the USCG and the Navy established a project resident office of three staff at the shipbuilder’s facility in Pascagoula, Mississippi to provide oversight of shipbuilding efforts. In April 2019, USCG reported that it is increasing the required staffing level for the program as it matures, with 5 FTEs added in fiscal year 2019. According to program officials, as of July 2019, three of these five vacancies—including the commanding officer and executive officer of the project resident office—have been filled. USCG officials said the remaining positions were being addressed by active duty USCG staff and through the civilian hiring process. In September 2018, GAO made six recommendations to DHS, the USCG, and the Navy to address risks GAO identified with the PSC program. As of August 2019, three of the six recommendations remain open. USCG officials stated that the PSC program awarded a contract for the detail design and construction of up to three cutters to VT Halter Marine in April 2019—ahead of schedule. USCG officials added that the program has either addressed or is in the process of addressing all of GAO’s recommendations contained in GAO-18-600, including an update to the schedule and cost estimate to reflect the award to VT Halter Marine. USCG officials also provided technical comments on a draft assessment, which GAO incorporated as appropriate.

UNITED STATES CITIZENSHIP AND IMMIGRATION SERVICES (USCIS)

The Transformation program was established in 2006 to transition USCIS from a fragmented, paper-based filing environment to a consolidated, paperless environment for electronically processing immigration and citizenship applications. The program is delivering system capability through releases that either deploy electronic, web-based application forms or improve system functionality. Program revised key performance parameters to reflect the program’s new baseline. 
Program reorganized to leverage USCIS expertise and focus on system functionality. GAO last reported on this program in May 2018 and July 2016 (GAO-18-339SP, GAO-16-467). In June 2018, Department of Homeland Security (DHS) leadership approved Transformation’s revised acquisition program baseline (APB) and subsequently removed the program from breach status—lifting a strategic pause that had limited new program development for 18 months. The program experienced a schedule breach in September 2016 when it failed to upgrade USCIS’s application processing information system to include applications for naturalization. The new baseline modified the program’s cost, schedule, and performance parameters and reflects changes to the way the program delivers capabilities and a new acquisition strategy. Specifically, the new APB revised the scope of the Transformation program to focus on improving functionality—such as application processing time. Under the prior strategy, the program was focused on adding new applications or forms—from four separate lines of business—to the upgraded processing system. The program plans to complete major development work in September 2019 and achieve full operational capability (FOC) in March 2020. Despite the 18-month pause in development, the program’s FOC dates slipped only 1 year from its previously revised APB. In August 2019, USCIS officials reported that the program is on track to meet its revised schedule goals. In its revised APB, the program’s acquisition cost threshold decreased from its previous APB by approximately $200 million primarily because the program shifted costs to operations and maintenance (O&M) to align with DHS’s new common appropriations structure. As a result of this shift in costs and because the new APB extended the program’s life cycle by 2 years, O&M costs increased by nearly $800 million from the program’s previous APB. 
In June 2019, the program updated its LCCE again to inform the budget process; the updated estimate is within the program’s APB cost thresholds. As part of its re-baselining efforts, the Transformation program updated its operational requirements document. The program removed six of its eight key performance parameters (KPP) that were specific to prior Transformation releases, revised two KPPs related to system reliability and availability, and added two new KPPs related to system lead time and cybersecurity. USCIS officials noted that these changes were made to make the KPPs more measurable and testable throughout development and delivery of the capability. The program also updated its test and evaluation master plan (TEMP) to adjust operational assessments to focus on the program’s revised goals under the updated baseline, among other things. The revised TEMP includes plans for three operational assessments that cover (1) development efforts initiated prior to the Transformation program’s June 2018 re-baseline, (2) new development, and (3) cybersecurity. In March 2019, the program’s operational test agent (OTA) completed an operational assessment (OA) of capability developed and released since the program re-baselined in June 2018. The OTA found that the program is meeting all four of its revised KPPs. The OTA recommended the program take steps to plan for cyber resilience testing and evaluation. The OTA plans to conduct a separate OA to assess cybersecurity by September 2019 and plans to complete initial operational test and evaluation of the entire system by December 2019. In September 2016, the Transformation program breached its schedule baseline when persistent system deficiencies forced the program to revert 84,000 monthly applications for naturalization forms from an upgraded application information system to a legacy platform. USCIS officials said the program had previously prioritized an ambitious release schedule over needed functionality. 
In response, USCIS dismantled the program office and repositioned Transformation under the USCIS Office of Information Technology so the program could leverage additional engineering expertise. According to officials, the program has also focused on activities like prototyping and beta testing forms, and is deploying updates as targeted changes to specific forms or functionality rather than major system upgrades. The program previously made significant changes after it experienced a 5-month delay in 2012. DHS attributed this delay to weak contractor performance and pursuing an unnecessarily complex system, among other things. To address these issues, the Office of Management and Budget, DHS, and USCIS determined the program should implement an agile software development methodology and increase competition for development work. These changes were reflected in the program’s April 2015 revised baseline. In July 2019, the program office reported that it is working to fill staffing vacancies, but the gap has not had a negative impact on program execution. In the meantime, the program is mitigating the gap with existing staff and contractors. However, officials noted that if positions remain unfilled, the program could experience schedule delays, among other things. USCIS officials reviewed a draft of this assessment and provided no comments.

Appendix II: Objectives, Scope, and Methodology

The objectives of this audit were to provide congressional committees insight into the Department of Homeland Security’s (DHS) major acquisition programs. We assessed the extent to which (1) DHS’s major acquisition programs are on track to meet their schedule and cost goals and (2) current program baselines trace to key acquisition documents. To address these questions, we selected 29 of DHS’s 80 major acquisition programs. 
We selected all 17 of DHS’s Level 1 acquisition programs—those with life-cycle cost estimates (LCCE) of $1 billion or more—that had at least one project, increment, or segment in the Obtain phase—the stage in the acquisition life cycle when programs develop, test, and evaluate systems—at the initiation of our audit. Additionally, we reviewed 12 other major acquisition programs—including 6 Level 1 programs that either had not yet entered or were beyond the Obtain phase, and 6 Level 2 programs, which have LCCEs of at least $300 million but less than $1 billion—that we identified were at risk of not meeting their cost estimates, schedules, or capability requirements based on our past work and discussions with DHS officials. Specifically, we met with representatives from DHS’s Office of Program Accountability and Risk Management (PARM)—DHS’s main body for acquisition oversight—as a part of our scoping effort to determine which programs (if any) were facing difficulties in meeting their cost estimates, schedules, or capability requirements. The 29 selected programs were sponsored by eight different components, and they are identified in table 8, along with our rationale for selecting them. To determine the extent to which DHS’s major acquisition programs are on track to meet their schedule and cost goals, we collected key acquisition documentation for each of the 29 programs, such as all LCCEs and acquisition program baselines (APB) approved at the department level since DHS’s current acquisition management policy went into effect in November 2008. DHS policy establishes that all major acquisition programs should have a department-approved APB, which establishes a program’s critical cost, schedule, and performance parameters, before they initiate efforts to obtain new capabilities. Twenty-seven of the 29 programs had one or more department-approved LCCEs and APBs between November 2008 and August 31, 2019. 
We used these APBs to establish the initial and current cost and schedule goals for the programs. We then developed a data collection instrument to help validate the information from the APBs and collect similar information from programs without department-approved APBs. Specifically, for each program, we pre-populated data collection instruments to the extent possible with the schedule and cost information we had obtained from the APBs and our prior assessments (if applicable) to identify schedule and cost goal changes, if any, since (a) the program’s initial baseline was approved and (b) December 2017—the data cut-off date of our 2018 assessment. We shared our data collection instruments with officials from the program offices to confirm or correct our initial analysis and to collect additional information to enhance the timeliness and comprehensiveness of our data sets. We then met with program officials to identify causes and effects associated with any identified schedule and cost goal changes, including changes as a result of the fiscal year 2019 partial government shutdown. Subsequently, we drafted preliminary assessments for each of the 29 programs, shared them with program and component officials, and gave these officials an opportunity to submit comments to help us correct any inaccuracies, which we accounted for as appropriate (such as when new information was available). Additionally, in July 2018 and July 2019, we obtained copies of the detailed data on affordability that programs submitted to inform the fiscal year 2019 and 2020 resource allocation processes. We also obtained copies of any annual LCCE updates programs submitted in fiscal years 2018 and 2019. 
For each of the 27 programs with a department-approved APB, we compared (a) the most recent cost data we collected (i.e., a department-approved LCCE, the detailed LCCE information submitted during the resource allocation process, an annual LCCE update, or an update provided by the program office) to (b) DHS’s funding plan presented in the Future Years Homeland Security Program (FYHSP) report to Congress for fiscal years 2020-2024, which presents 5-year funding plans for DHS’s major acquisition programs, to assess the extent to which a program was projected to have an acquisition funding gap. Through this process, we determined that our data elements were sufficiently reliable for the purpose of this engagement. The FYHSP reports information by the department’s new common appropriation structure, which created standard appropriation fund types including (1) procurement, construction, and improvements and (2) operations and support. We refer to these types of funding as (1) acquisition and (2) operations and maintenance throughout this report. To determine the extent to which current program baselines trace to key acquisition documents, we focused on programs with APBs approved under DHS’s March 2016 acquisition management policy, the current version of the guidance when we initiated our review. We reviewed each program’s most recent APB to determine whether the APB referenced the documents that were used as the basis of its cost, schedule, and performance parameters. We asked program officials to provide the underlying documentation if the APB did not reference a document. We then compared the APB cost, schedule, and performance parameters to the information in the underlying documents. Specifically, we compared the approved LCCE to the APB objective and threshold cost values, the operational requirements document to the APB key performance parameters, and the integrated master schedule to the APB schedule goals. We determined that the cost and performance goals for a program were traceable if the information from the underlying documentation was the same as the cost and performance parameters in the APB. 
We determined that program schedule goals were traceable to the integrated master schedule if all future baseline milestones identified in the APB were also identified in the integrated master schedule and each milestone date from the integrated master schedule was within the range of the objective and threshold schedule goals identified in the APB. We did not include programs in our analysis with APBs approved before DHS updated its acquisition policy in March 2016 because they were developed under previous guidance when the requirements for developing APBs were different. We also did not include the APBs approved after DHS updated its acquisition policy in February 2019 because the update was not in place when we initiated this review. In addition, we interviewed officials from headquarters organizations, including PARM, to discuss how policies related to developing APBs are being implemented and clarify requirements for establishing APB parameters. We interviewed component and program officials to identify causes of inconsistencies between the approved APB and documents that provided the basis for approved cost, schedule, and performance parameters. We conducted this performance audit from April 2018 through December 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
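The two-part schedule traceability test described in this appendix can be illustrated with a short sketch. This is purely illustrative: GAO's analysis was a manual comparison of documents, and the milestone names and dates below are hypothetical.

```python
from datetime import date

def schedule_goals_traceable(apb_milestones, ims_dates):
    """Illustrative check of the two conditions described in this appendix.

    apb_milestones: dict mapping milestone name -> (objective, threshold) dates
                    from the acquisition program baseline (APB).
    ims_dates: dict mapping milestone name -> planned date in the
               integrated master schedule (IMS).
    """
    for name, (objective, threshold) in apb_milestones.items():
        # Condition 1: every future APB milestone must appear in the IMS.
        if name not in ims_dates:
            return False
        # Condition 2: the IMS date must fall within the APB's
        # objective-threshold range.
        if not (objective <= ims_dates[name] <= threshold):
            return False
    return True

# Hypothetical example data (not drawn from any actual DHS program).
apb = {"ADE 2C": (date(2019, 9, 1), date(2019, 12, 31)),
       "IOC":    (date(2023, 1, 1), date(2023, 6, 30))}
ims_complete = {"ADE 2C": date(2019, 12, 15), "IOC": date(2023, 3, 1)}
ims_partial  = {"ADE 2C": date(2019, 12, 15)}  # IOC missing from the IMS

print(schedule_goals_traceable(apb, ims_complete))  # True
print(schedule_goals_traceable(apb, ims_partial))   # False
```

A program fails the check either when a baseline milestone is absent from its IMS or when the IMS date falls outside the APB's objective-threshold window, the two failure modes discussed in this report.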
Appendix III: Comments from the Department of Homeland Security

Appendix IV: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact listed above, Rick Cederholm (Assistant Director), Alexis Olson (Analyst-in-Charge), Whitney Allen, Leigh Ann Haydon, Khaki LaRiviere, Sarah Martin, and Kelsey Wilson made key contributions to this report. Other contributors included Mathew Bader, Andrew Burton, Erin Butkowski, John Crawford, Aryn Ehlow, Lorraine Ettaro, Laurier R. Fish, Alexandra Gebhard, Elizabeth Hosler-Gregory, Stephanie Gustafson, Jason Lee, Claire Li, Ashley Rawson, Jillian Schofield, Roxanna Sun, Anne Louise Taylor, and Lindsay Taylor.

Related GAO Products

Homeland Security Acquisitions: Opportunities Exist to Further Improve DHS’s Oversight of Test and Evaluation Activities. GAO-20-20. Washington, D.C.: October 24, 2019.

High-Risk Series: Substantial Efforts Needed to Achieve Greater Progress on High-Risk Areas. GAO-19-157SP. Washington, D.C.: March 6, 2019.

Coast Guard Acquisitions: Polar Icebreaker Program Needs to Address Risks before Committing Resources. GAO-18-600. Washington, D.C.: September 4, 2018.

DHS Acquisitions: Additional Practices Could Help Components Better Develop Operational Requirements. GAO-18-550. Washington, D.C.: August 8, 2018.

Southwest Border Security: CBP Is Evaluating Designs and Locations for Border Barriers but Is Proceeding Without Key Information. GAO-18-614. Washington, D.C.: July 30, 2018.

Coast Guard Acquisitions: Actions Needed to Address Longstanding Portfolio Management Challenges. GAO-18-454. Washington, D.C.: July 24, 2018.

Homeland Security Acquisitions: Leveraging Programs’ Results Could Further DHS’s Progress to Improve Portfolio Management. GAO-18-339SP. Washington, D.C.: May 17, 2018.

DHS Program Costs: Reporting Program-Level Operations and Support Costs to Congress Would Improve Oversight. GAO-18-344. Washington, D.C.: April 25, 2018. 
Border Security: Additional Actions Could Strengthen DHS Efforts to Address Subterranean, Aerial, and Maritime Smuggling. GAO-17-474. Washington, D.C.: May 1, 2017.

Homeland Security Acquisitions: Identifying All Non-Major Acquisitions Would Advance Ongoing Efforts to Improve Management. GAO-17-396. Washington, D.C.: April 13, 2017.

Homeland Security Acquisitions: Earlier Requirements Definition and Clear Documentation of Key Decisions Could Facilitate Ongoing Progress. GAO-17-346SP. Washington, D.C.: April 6, 2017.

Homeland Security Acquisitions: Joint Requirements Council’s Initial Approach Is Generally Sound and It Is Developing a Process to Inform Investment Priorities. GAO-17-171. Washington, D.C.: October 24, 2016.

Homeland Security Acquisitions: DHS Has Strengthened Management, but Execution and Affordability Concerns Endure. GAO-16-338SP. Washington, D.C.: March 31, 2016.

Homeland Security Acquisitions: Major Program Assessments Reveal Actions Needed to Improve Accountability. GAO-15-171SP. Washington, D.C.: April 22, 2015.

Homeland Security Acquisitions: DHS Should Better Define Oversight Roles and Improve Program Reporting to Congress. GAO-15-292. Washington, D.C.: March 12, 2015.

Homeland Security Acquisitions: DHS Could Better Manage Its Portfolio to Address Funding Gaps and Improve Communications with Congress. GAO-14-332. Washington, D.C.: April 17, 2014.

Homeland Security: DHS Requires More Disciplined Investment Management to Help Meet Mission Needs. GAO-12-833. Washington, D.C.: September 18, 2012.
Why GAO Did This Study

Each year, DHS invests billions of dollars in a diverse portfolio of major acquisition programs to help execute its many critical missions. DHS plans to spend more than $10 billion on these programs in fiscal year 2020 alone. DHS's acquisition activities are on GAO's High Risk List, in part, because of management and funding issues. The Explanatory Statement accompanying the DHS Appropriations Act, 2015 included a provision for GAO to review DHS's major acquisitions on an ongoing basis. This report, GAO's fifth review, assesses the extent to which: (1) DHS's major acquisition programs are on track to meet their schedule and cost goals, and (2) current program baselines trace to key acquisition documents. GAO assessed 27 acquisition programs, including DHS's largest programs that were in the process of obtaining new capabilities as of April 2018, and programs GAO or DHS identified as at risk of poor outcomes. GAO assessed cost and schedule progress against baselines; compared APB cost, schedule, and performance parameters to underlying documents used in establishing baselines; and interviewed DHS officials.

What GAO Found

As of August 2019, 25 of the 27 Department of Homeland Security (DHS) programs GAO assessed that had approved schedule and cost goals were on track to meet current goals. The remaining two programs breached their schedule or cost goals. This represents an improvement since GAO's last review. However, GAO found that some of the programs that were on track as of August 2019 are at risk of not meeting cost or schedule goals or both in the future. For example, the U.S. Coast Guard's Offshore Patrol Cutter program faces potential cost increases and schedule slips in the future as a result of damages to the shipbuilder's facility from Hurricane Michael in October 2018. 
Traceability, which is called for in DHS policy and GAO scheduling best practices, helps ensure that program goals are aligned with program execution plans, and that a program's various stakeholders have an accurate and consistent understanding of those plans and goals. Of the 27 programs GAO assessed, 21 had established baselines after DHS updated its acquisition policy in March 2016 (the most current version of the policy at the beginning of this review). GAO found that the 21 programs' baseline cost and performance goals generally traced to source documents, such as life-cycle cost estimates and planned performance outcomes. However, schedule goals did not generally match up to the programs' integrated master schedules (IMS), as required by DHS acquisition management instruction and as a best practice identified in GAO's Schedule Assessment Guide (see figure). The lack of traceability between IMSs and schedule goals in the approved acquisition program baselines (APB) indicates that DHS does not have appropriate oversight processes in place to ensure that schedules are accurately reflected in program baselines, in accordance with DHS policy and GAO's best practices. Therefore, DHS cannot ensure that the understanding of program schedules among different stakeholders, including component and DHS leadership, is consistent and accurate. As a result, DHS leadership may be approving program schedule goals that do not align with program execution plans.

What GAO Recommends

GAO is making two recommendations, including that DHS put in place an oversight process to ensure that programs' schedule goals are developed and updated according to GAO's scheduling best practices. DHS concurred with GAO's recommendations.